Tag: admission-controllers
Post
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they can automatically modify your Kubernetes deployments, injecting “side-cars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them, “Mutating Webhooks”.
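As a taste of what the full post walks through, below is a minimal Go sketch of a mutating webhook handler (a simplified stand-in, not the post's actual code): it decodes the incoming AdmissionReview and answers with a JSON patch that adds an annotation. The endpoint path, annotation key, and TLS certificate paths are placeholder assumptions.

```go
package main

import (
	"encoding/json"
	"io"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

// mutate decodes an AdmissionReview and responds with a JSON patch that adds
// an annotation to the submitted object (assumes the object already has an
// annotations map; a production webhook would handle the missing-map case).
func mutate(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil || review.Request == nil {
		http.Error(w, "could not decode AdmissionReview", http.StatusBadRequest)
		return
	}

	// "~1" escapes the "/" inside the (hypothetical) annotation key.
	patch := []byte(`[{"op":"add","path":"/metadata/annotations/example.com~1injected","value":"true"}]`)
	patchType := admissionv1.PatchTypeJSONPatch

	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     patch,
		PatchType: &patchType,
	}

	out, _ := json.Marshal(review)
	w.Header().Set("Content-Type", "application/json")
	w.Write(out)
}

func main() {
	http.HandleFunc("/mutate", mutate)
	// The API server only calls webhooks over TLS; the certificate paths are placeholders.
	http.ListenAndServeTLS(":8443", "/tls/tls.crt", "/tls/tls.key", nil)
}
```

The cluster is pointed at a handler like this through a MutatingWebhookConfiguration, which the full post covers.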
Tag: aide
Post
OpenShift FileIntegrity Scanning
Introduction The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. The operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring that the approved changes do not trigger an alert.
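Under the covers the operator relies on AIDE; purely as a conceptual illustration (not how the operator itself is implemented), this short Go sketch shows the basic idea of integrity scanning: hash every file under a watched path, then re-scan later and flag anything whose hash has changed. The watched path is just an example.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// hashFile returns the SHA-256 digest of a single file.
func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

// scan walks a directory tree and records a digest for every regular file.
func scan(root string) (map[string]string, error) {
	db := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		sum, hashErr := hashFile(path)
		if hashErr != nil {
			return hashErr
		}
		db[path] = sum
		return nil
	})
	return db, err
}

func main() {
	baseline, err := scan("/etc") // example watched location
	if err != nil {
		panic(err)
	}
	// ...time passes; files may be changed...
	current, err := scan("/etc")
	if err != nil {
		panic(err)
	}
	for path, sum := range current {
		if old, ok := baseline[path]; ok && old != sum {
			fmt.Println("modified:", path)
		}
	}
}
```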
Tag: ansible
Post
Creating a multi-host OKD Cluster
Introduction In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster.
Post
Openshift, Azure, and Ansible
Intro In my last post, I showed how to deploy an All-in-One OKD system. This is great for some initial learning, but keeping it up and running can get expensive over time. You can always shut it down and re-create it later, but this takes time, and you can end up making typos and errors if you aren’t careful. If you want to be able to create (and destroy) these All-in-One environments in a more automatic way, read on.
Tag: azure
Post
Creating a multi-host OKD Cluster
Introduction In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster.
Post
Openshift, Azure, and Ansible
Intro In my last post, I showed how to deploy an All-in-One OKD system. This is great for some initial learning, but keeping it up and running can get expensive over time. You can always shut it down and re-create it later, but this takes time, and you can end up making typos and errors if you aren’t careful. If you want to be able to create (and destroy) these All-in-One environments in a more automatic way, read on.
Post
OpenShift on Azure - The Manual Way
Intro The other day I was reading an article on OpenShift All-in-One and thought it would be interesting to re-create it with OKD, the community version of OpenShift. We are going to create an All-in-One (AiO) deployment of OKD/OpenShift version 3.11 on Azure.
This post is going to show you how to do a manual install of OKD on just one host. Why would you want to do this? It will get you a fully working instance of OpenShift and even give you cluster admin rights, so you can learn how to administer it.
Tag: citrix-adc
Post
Using Citrix Netscaler with OpenShift
Introduction The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it was for the platform to solve load balancing of incoming application requests, and it did so through the use of “Routes”. Upstream, Kubernetes has addressed the same need through Ingress and, more recently, the Gateway API.
Tag: cluster
Post
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction If you are like me, you have multiple Lab clusters of OpenShift or OKD in your home or work Lab. Each of these clusters takes up a significant amount of resources and so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported Cloud providers, and you power the machines down to save costs when you are not using them.
Tag: containers
Post
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux).
Post
NMState Operator and OpenShift Container Platform
Introduction OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly, using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
On traditional Operating Systems like RHEL, you would use tools such as nmcli and NetworkManager to configure settings such as MTU or to create bonded connections, but in Red Hat CoreOS these tools are not directly available to you.
Post
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere.
Post
OpenShift Windows Containers- Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you want to use bare metal hosts, or take advantage of existing Windows servers that you already manage?
Tag: csi
Post
Using the Synology K8s CSI Driver with OpenShift
Introduction Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. CSI drivers for external storage arrays, such as the Pure, Dell, Infinidat, and Synology CSI drivers, also exist.
Tag: day-two
Post
Creating Custom Operator Hub Catalogs
Introduction By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
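One of the options covered is turning off the default catalog sources entirely. The post likely does this with oc and YAML, but as a rough sketch of the same idea, here is how the cluster-scoped OperatorHub resource could be patched from Go with the dynamic client, assuming cluster-admin credentials in your default kubeconfig.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The OperatorHub configuration is a single cluster-scoped object named "cluster".
	gvr := schema.GroupVersionResource{
		Group:    "config.openshift.io",
		Version:  "v1",
		Resource: "operatorhubs",
	}
	patch := []byte(`{"spec":{"disableAllDefaultSources":true}}`)
	if _, err := client.Resource(gvr).Patch(context.TODO(), "cluster",
		types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```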
Post
Creating ExternalIPs in OpenShift with MetalLB
Introduction Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. A Route can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS with SNI, or TLS with SNI. However, not all applications are HTTP-based, and some use protocols other than TCP, such as UDP and even SCTP.
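To make the non-HTTP point concrete, here is a small, hypothetical Go example that creates a Service of type LoadBalancer exposing a UDP port, the kind of Service that MetalLB (rather than a Route) can hand an external IP. The namespace, labels, and port number are made up for illustration.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A LoadBalancer Service for a UDP workload; MetalLB assigns it an external IP.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "syslog", Namespace: "demo"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "syslog"},
			Ports: []corev1.ServicePort{
				{Name: "syslog-udp", Protocol: corev1.ProtocolUDP, Port: 514},
			},
		},
	}
	if _, err := clientset.CoreV1().Services("demo").Create(
		context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```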
Post
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional Operating Systems. Unlike other Kubernetes distributions, where you must manage the base Operating System as well as your Kubernetes distribution, with OpenShift 4 the RHCOS Operating System and the Kubernetes platform are tightly coupled, and RHCOS, including any system-level configuration, is managed through MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your Control Plane and Worker nodes.
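These constructs are ordinary API objects, so you can inspect them like anything else in the cluster. As a hedged sketch (field paths are as I understand the MachineConfigPool status, so verify against your cluster), here is a Go snippet using the dynamic client to list each pool and how many of its machines are on the latest rendered configuration.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// MachineConfigPools live in the machineconfiguration.openshift.io API group.
	gvr := schema.GroupVersionResource{
		Group:    "machineconfiguration.openshift.io",
		Version:  "v1",
		Resource: "machineconfigpools",
	}
	pools, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pools.Items {
		total, _, _ := unstructured.NestedInt64(p.Object, "status", "machineCount")
		updated, _, _ := unstructured.NestedInt64(p.Object, "status", "updatedMachineCount")
		fmt.Printf("%s: %d/%d machines on the latest rendered config\n", p.GetName(), updated, total)
	}
}
```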
Post
Using Citrix Netscaler with OpenShift
Introduction The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it was for the platform to solve load balancing of incoming application requests, and it did so through the use of “Routes”. Upstream, Kubernetes has addressed the same need through Ingress and, more recently, the Gateway API.
Tag: docker
Post
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux).
Tag: externalip
Post
Creating ExternalIPs in OpenShift with MetalLB
Introduction Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. A Route can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS with SNI, or TLS with SNI. However, not all applications are HTTP-based, and some use protocols other than TCP, such as UDP and even SCTP.
Tag: git
Post
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.
Post
Signing your Git Commits with SSH Keys
In August of this year, there was a bunch of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories that were misleadingly named (typosquatting). That said, it has led to further discussion and attention around the code supply chain, and around ensuring that code contributions, libraries, and releases are validated before use.
Tag: gitea
Post
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.
Tag: github
Post
Signing your Git Commits with SSH Keys
In August of this year, there was a bunch of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories that were misleadingly named (typosquatting). That said, it has led to further discussion and attention around the code supply chain, and around ensuring that code contributions, libraries, and releases are validated before use.
Tag: ingress
Post
Using Citrix Netscaler with OpenShift
Introduction The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it was for the platform to solve load balancing of incoming application requests, and it did so through the use of “Routes”. Upstream, Kubernetes has addressed the same need through Ingress and, more recently, the Gateway API.
Tag: iscsi
Post
Using the Synology K8s CSI Driver with OpenShift
Introduction Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. CSI drivers for external storage arrays, such as the Pure, Dell, Infinidat, and Synology CSI drivers, also exist.
Post
NMState Operator and OpenShift Container Platform
Introduction OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly, using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
On traditional Operating Systems like RHEL, you would use tools such as nmcli and NetworkManager to configure settings such as MTU or to create bonded connections, but in Red Hat CoreOS these tools are not directly available to you.
Tag: kata
Post
Using Kata Containers with OpenShift Container Platform
Introduction Containerization ushered in a new way to run workloads both on-prem and in the cloud securely and efficiently. By leveraging CGroups and Namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner. These applications share the same kernel and machine hardware. While CGroups and Namespaces are a powerful way of defining isolation between applications, faults have been found that allow breaking out of their CGroups jail.
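To make the isolation idea a bit more tangible, here is a tiny, Linux-only Go sketch (illustrating the kernel primitives in general, not Kata's own implementation) that starts a shell inside new UTS, PID, and mount namespaces. It typically needs root, or user namespaces enabled, to run.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in its own UTS, PID, and mount namespaces (Linux only).
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Kata Containers addresses the break-out risk by running workloads inside lightweight virtual machines rather than relying on these shared-kernel primitives alone.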
Tag: kubectl
Post
Kubectl and OC Command Output
Introduction After running an OpenShift or Kubernetes cluster for a little while, you find that you need to create reports on specific data about the cluster itself. Project owners, container images in use, and project quotas are just some of the things you might be asked to report on. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands.
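The post focuses on the output options built into oc and kubectl, but as a sketch of the “write your own application that queries the API” route mentioned above, here is a small Go program using client-go that prints every container image in use across the cluster. It assumes your default kubeconfig has the rights to list pods in all namespaces.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// An empty namespace argument lists pods across all namespaces.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, c.Image)
		}
	}
}
```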
Tag: kubernetes
Post
Using the Synology K8s CSI Driver with OpenShift
Introduction Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. CSI drivers for external storage arrays, such as the Pure, Dell, Infinidat, and Synology CSI drivers, also exist.
Post
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they can automatically modify your Kubernetes deployments, injecting “side-cars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them, “Mutating Webhooks”.
Post
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere.
Post
OpenShift FileIntegrity Scanning
Introduction The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. The operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring that the approved changes do not trigger an alert.
Post
Kubectl and OC Command Output
Introduction After running an OpenShift or Kubernetes cluster for a little while, you find that you need to create reports on specific data about the cluster itself. Project owners, container images in use, and project quotas are just some of the things you might be asked to report on. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands.
Tag: letsencrypt
Post
Creating a multi-host OKD Cluster
Introduction In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster.
Tag: loadbalancer
Post
Creating ExternalIPs in OpenShift with MetalLB
Introduction Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. A Route can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS with SNI, or TLS with SNI. However, not all applications are HTTP-based, and some use protocols other than TCP, such as UDP and even SCTP.
Tag: machineconfig
Post
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional Operating Systems. Unlike other Kubernetes distributions, where you must manage the base Operating System as well as your Kubernetes distribution, with OpenShift 4 the RHCOS Operating System and the Kubernetes platform are tightly coupled, and RHCOS, including any system-level configuration, is managed through MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your Control Plane and Worker nodes.
Tag: mikrotik
Post
MikroTik RouterOS and WireGuard for Road Warriors.
Introduction As the world starts to open back up, I find that I am traveling more, but I still need access to my home network and Lab equipment for demos and testing. I have tried various VPNs over the years, including OpenVPN, ZeroTier, and IPsec. These all worked well, but they required running a separate server to handle the VPN termination, and they were difficult to configure and maintain. In 2020, a new player called WireGuard entered the ring.
Tag: networking
Post
NMState Operator and OpenShift Container Platform
Introduction OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly, using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
On traditional Operating Systems like RHEL, you would use tools such as nmcli and NetworkManager to configure settings such as MTU or to create bonded connections, but in Red Hat CoreOS these tools are not directly available to you.
Tag: oc
Post
Kubectl and OC Command Output
Introduction After running an OpenShift or Kubernetes cluster for a little while, you find that you need to create reports on specific data about the cluster itself. Project owners, container images in use, and project quotas are just some of the things you might be asked to report on. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands.
Tag: openshift
Post
Explaining OpenShift Router Configurations
Introduction While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up, and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will either use HTTP, HTTPS or passthrough TCP to communicate with your application.
Post
Creating Custom Operator Hub Catalogs
Introduction By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Post
Creating ExternalIPs in OpenShift with MetalLB
Introduction Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. A Route can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS with SNI, or TLS with SNI. However, not all applications are HTTP-based, and some use protocols other than TCP, such as UDP and even SCTP.
Post
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional Operating Systems. Unlike other Kubernetes distributions, where you must manage the base Operating System as well as your Kubernetes distribution, with OpenShift 4 the RHCOS Operating System and the Kubernetes platform are tightly coupled, and RHCOS, including any system-level configuration, is managed through MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your Control Plane and Worker nodes.
Post
Using Citrix Netscaler with OpenShift
Introduction The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it was for the platform to solve load balancing of incoming application requests, and it did so through the use of “Routes”. Upstream, Kubernetes has addressed the same need through Ingress and, more recently, the Gateway API.
Post
Using the Synology K8s CSI Driver with OpenShift
Introduction Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. CSI drivers for external storage arrays, such as the Pure, Dell, Infinidat, and Synology CSI drivers, also exist.
Post
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they can automatically modify your Kubernetes deployments, injecting “side-cars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them, “Mutating Webhooks”.
Post
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction If you are like me, you have multiple Lab clusters of OpenShift or OKD in your home or work Lab. Each of these clusters takes up a significant amount of resources and so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported Cloud providers, and you power the machines down to save costs when you are not using them.
Post
NMState Operator and OpenShift Container Platform
Introduction OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly, using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
On traditional Operating Systems like RHEL, you would use tools such as nmcli and NetworkManager to configure settings such as MTU or to create bonded connections, but in Red Hat CoreOS these tools are not directly available to you.
Post
OpenShift Windows Containers- Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you want to use bare metal hosts, or take advantage of existing Windows servers that you already manage?
Post
Using Kata Containers with OpenShift Container Platform
Introduction Containerization ushered in a new way to run workloads both on-prem and in the cloud securely and efficiently. By leveraging CGroups and Namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner. These applications share the same kernel and machine hardware. While CGroups and Namespaces are a powerful way of defining isolation between applications, faults have been found that allow breaking out of their CGroups jail.
Post
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “Thin Provisioning”.
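The usual lever for this is a per-project ResourceQuota. As a hedged Go sketch (the post may well use YAML and oc instead), here is how a quota capping total requested storage and the number of PVCs could be created for a hypothetical “team-a” project.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Cap the project at 500Gi of requested storage and ten PersistentVolumeClaims.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "storage-quota", Namespace: "team-a"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				"requests.storage":       resource.MustParse("500Gi"),
				"persistentvolumeclaims": resource.MustParse("10"),
			},
		},
	}
	if _, err := clientset.CoreV1().ResourceQuotas("team-a").Create(
		context.TODO(), quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```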
Post
OpenShift FileIntegrity Scanning
Introduction The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. The operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring that the approved changes do not trigger an alert.
Post
Kubectl and OC Command Output
Introduction After running an OpenShift or Kubernetes cluster for a little while, you find that you need to create reports on specific data about the cluster itself. Project owners, container images in use, and project quotas are just some of the things you might be asked to report on. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands.
Post
Creating a multi-host OKD Cluster
Introduction In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster.
Post
Openshift, Azure, and Ansible
Intro In my last post, I showed how to deploy an All-in-One OKD system. This is great for some initial learning, but keeping it up and running can get expensive over time. You can always shut it down and re-create it later, but this takes time, and you can end up making typos and errors if you aren’t careful. If you want to be able to create (and destroy) these All-in-One environments in a more automatic way, read on.
Post
OpenShift on Azure - The Manual Way
Intro The other day I was reading an article on OpenShift All-in-One and thought it would be interesting to re-create it with OKD, the community version of OpenShift. We are going to create an All-in-One (AiO) deployment of OKD/OpenShift version 3.11 on Azure.
This post is going to show you how to do a manual install of OKD on just one host. Why would you want to do this? It will get you a fully working instance of OpenShift and even give you cluster admin rights, so you can learn how to administer it.
Tag: operations
Post
Creating Custom Operator Hub Catalogs
Introduction By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Tag: operator-hub
Post
Creating Custom Operator Hub Catalogs
Introduction By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Tag: operators
Post
Creating Custom Operator Hub Catalogs
Introduction By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Tag: osx
Post
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux).
Tag: podman
Post
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux).
Tag: powershell
Post
OpenShift Windows Containers- Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you want to use bare metal hosts, or take advantage of existing Windows servers that you already manage?
Tag: quotas
Post
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “Thin Provisioning”.
Tag: recovery
Post
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction If you are like me, you have multiple Lab clusters of OpenShift or OKD in your home or work Lab. Each of these clusters takes up a significant amount of resources and so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported Cloud providers, and you power the machines down to save costs when you are not using them.
Tag: routing
Post
Explaining OpenShift Router Configurations
Introduction While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up, and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will either use HTTP, HTTPS or passthrough TCP to communicate with your application.
Tag: s3
Post
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.
Tag: security
Post
Signing your Git Commits with SSH Keys
In August of this year, there was a bunch of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories that were misleadingly named (typosquatting). That said, it has led to further discussion and attention around the code supply chain, and around ensuring that code contributions, libraries, and releases are validated before use.
Post
OpenShift FileIntegrity Scanning
Introduction The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. The operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring that the approved changes do not trigger an alert.
Tag: storage
Post
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “Thin Provisioning”.
Tag: synology
Post
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.
Post
Using the Synology K8s CSI Driver with OpenShift
Introduction Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. CSI drivers for external storage arrays, such as the Pure, Dell, Infinidat, and Synology CSI drivers, also exist.
Tag: tanzu
Post
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere.
Tag: tls
Post
Explaining OpenShift Router Configurations
Introduction While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up, and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will either use HTTP, HTTPS or passthrough TCP to communicate with your application.
Tag: tutorial
Post
Signing your Git Commits with SSH Keys
In August of this year, there was a bunch of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories that were misleadingly named (typosquatting). That said, it has led to further discussion and attention around the code supply chain, and around ensuring that code contributions, libraries, and releases are validated before use.
Post
Explaining OpenShift Router Configurations
Introduction While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up, and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will either use HTTP, HTTPS or passthrough TCP to communicate with your application.
Post
Creating Custom Operator Hub Catalogs
Introduction By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Post
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional Operating Systems. Unlike other Kubernetes distributions, where you must manage the base Operating System as well as your Kubernetes distribution, with OpenShift 4 the RHCOS Operating System and the Kubernetes platform are tightly coupled, and RHCOS, including any system-level configuration, is managed through MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your Control Plane and Worker nodes.
Post
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux).
Post
Using the Synology K8s CSI Driver with OpenShift
Introduction Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. CSI drivers for external storage arrays, such as the Pure, Dell, Infinidat, and Synology CSI drivers, also exist.
Post
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they can automatically modify your Kubernetes deployments, injecting “side-cars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them, “Mutating Webhooks”.
Post
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction If you are like me, you have multiple Lab clusters of OpenShift or OKD in your home or work Lab. Each of these clusters takes up a significant amount of resources and so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported Cloud providers, and you power the machines down to save costs when you are not using them.
Post
NMState Operator and OpenShift Container Platform
Introduction OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly, using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
On traditional Operating Systems like RHEL, you would use tools such as nmcli and NetworkManager to configure settings such as MTU or to create bonded connections, but in Red Hat CoreOS these tools are not directly available to you.
Post
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere.
Post
OpenShift Windows Containers- Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you want to use bare metal hosts, or take advantage of existing Windows servers that you already manage?
Post
Using Kata Containers with OpenShift Container Platform
Introduction Containerization ushered in a new way to run workloads both on-prem and in the cloud securely and efficiently. By leveraging CGroups and Namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner. These applications share the same kernel and machine hardware. While CGroups and Namespaces are a powerful way of defining isolation between applications, faults have been found that allow breaking out of their CGroups jail.
Post
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “Thin Provisioning”.
Post
OpenShift FileIntegrity Scanning
Introduction The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. The operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring that the approved changes do not trigger an alert.
Tag: vpn
Post
MikroTik RouterOS and WireGuard for Road Warriors.
Introduction As the world starts to open back up, I find that I am traveling more, but I still need access to my home network and Lab equipment for demos and testing. I have tried various VPNs over the years, including OpenVPN, ZeroTier, and IPsec. These all worked well, but they required running a separate server to handle the VPN termination, and they were difficult to configure and maintain. In 2020, a new player called WireGuard entered the ring.
Tag: vsphere
Post
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere.
Tag: windows
Post
OpenShift Windows Containers- Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you want to use bare metal hosts, or take advantage of existing Windows servers that you already manage?
Tag: wireguard
Post
MikroTik RouterOS and WireGuard for Road Warriors.
Introduction As the world starts to open back up, I find that I am traveling more, but I still need access to my home network and Lab equipment for demos and testing. I have tried various VPNs over the years, including OpenVPN, ZeroTier, and IPsec. These all worked well, but they required running a separate server to handle the VPN termination, and they were difficult to configure and maintain. In 2020, a new player called WireGuard entered the ring.