Tag: Cnv
OpenShift Virtualization and Resource Overcommitment
As OpenShift Virtualization continues to gain attention and attract new users in the field, certain topics come up over and over again:
- How do I overcommit CPU?
- How do I overcommit Memory?
- How do I make sure my overcommitting doesn’t have adverse effects on my VMs?
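For a quick feel for where these knobs live, here is a minimal sketch of a KubeVirt/OpenShift Virtualization VirtualMachine that overcommits both CPU and memory: the guest is shown more cores and RAM than the scheduler actually reserves. The VM name and values are illustrative assumptions, not recommendations.

```bash
# Hypothetical VirtualMachine showing the common overcommit settings; values are examples only.
oc apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: overcommit-demo          # assumed name
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 4               # guest sees 4 vCPUs...
        memory:
          guest: 8Gi             # ...and 8Gi of RAM,
        resources:
          requests:
            cpu: "1"             # ...but only 1 CPU is reserved on the node
            memory: 4Gi          # ...and only 4Gi is requested from the scheduler
        devices: {}
EOF
```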
Manually moving a Virtual Machine from VMware to OpenShift Virtualization
When it comes to migrating VMs from VMware to OpenShift Virtualization, the Migration Toolkit for Virtualization (MTV) is the easiest option. But what happens if you want to move an unsupported OS over to OpenShift Virtualization? Can this even be done? The short answer is “Yes”, and the longer answer is “It depends on the OS you want to move.”
Tag: Openshift
OpenShift Virtualization and Resource Overcommitment
As OpenShift Virtualization continues to gain attention and attract new users in the field, certain topics come up over and over again:
- How do I overcommit CPU?
- How do I overcommit Memory?
- How do I make sure my overcommitting doesn’t have adverse effects on my VMs?
Manually moving a Virtual Machine from VMware to OpenShift Virtualization
When it comes to migrating VMs from VMware to OpenShift Virtualization, the Migration Toolkit for Virtualization (MTV) is the easiest option. But what happens if you want to move an unsupported OS over to OpenShift Virtualization? Can this even be done? The short answer is “Yes”, and the longer answer is “It depends on the OS you want to move.”
Installing OpenShift using Windows Subsystem for Linux
With interest in OpenShift, and more specifically OpenShift Virtualization, taking off, users who do not typically use Linux need a Linux workstation in order to deploy OpenShift. While the oc command used to manage OpenShift does work on Windows, other utilities, such as the openshift-install command used to deploy OpenShift clusters, do not. So what's a Windows-using future OpenShift administrator supposed to do?
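The short version of the answer is Windows Subsystem for Linux. As a rough sketch, getting a usable Linux environment with the installer takes only a few commands; the download URL below points at the public OpenShift mirror and the exact path may change between releases.

```bash
# From an elevated PowerShell prompt on Windows: install WSL with a default Ubuntu distribution.
wsl --install -d Ubuntu

# Then, inside the WSL shell, download the Linux build of the installer and unpack it.
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz
tar -xzf openshift-install-linux.tar.gz
./openshift-install version
```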
Creating a storage network in OpenShift
As my usage of OpenShift Virtualization increases, I am finding that I need to create a dedicated network for my storage arrays. For my lab storage I use two Synology devices; both are configured with NFS and iSCSI, and I use these storage types interchangeably. However, in the current configuration, all storage traffic (NFS or iSCSI) is routed over two hops and comes into the server over a 1Gb interface. I would like to change this to work like my vSphere lab setup, where all storage traffic goes over my dedicated network “vlan20”, which is not routed and has a dedicated 10Gb switch.
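To give a sense of the end goal, a dedicated storage network like this is typically expressed as an NMState NodeNetworkConfigurationPolicy that builds a VLAN interface on the 10Gb NIC. The interface name, VLAN ID, and addressing below are placeholder assumptions, not the exact values from my lab.

```bash
# Hypothetical storage VLAN on top of a 10Gb NIC; names and addresses are placeholders.
oc apply -f - <<'EOF'
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-vlan20           # assumed name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: ens1f0.20          # assumed base NIC and VLAN ID
        type: vlan
        state: up
        vlan:
          base-iface: ens1f0
          id: 20
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.20.11  # example static address
              prefix-length: 24
EOF
```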
Updating RHCOS Images with Custom Configurations
In the last blog post, Dealing with a Lack of Entropy on your OpenShift Cluster, we deployed the rng-tools software as a DaemonSet in a cluster. By using a DaemonSet, we took advantage of the tools that Kubernetes gives us for deploying an application to all targeted nodes in a cluster. This worked well for getting the rng daemon up and running on the nodes that required it, but not all software will work this way. What if we need to install or update a package on the host Red Hat CoreOS (RHCOS) boot image? In the past this was frowned upon, if not impossible: RHCOS is an immutable OS delivered to you by Red Hat that cannot be modified.
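That has changed with RHCOS image layering, where you build a small custom layer on top of the released RHCOS container image and roll it out through the Machine Config Operator. The sketch below is heavily hedged: the base-image component name varies by OpenShift version, the registry is a placeholder, and installing RHEL packages in the build requires the appropriate entitlements.

```bash
# Look up the RHCOS base image for the cluster's current release (component name varies by version).
RHCOS_IMAGE=$(oc adm release info --image-for=rhel-coreos)

# Containerfile that layers an extra package on top of the base image.
cat > Containerfile <<EOF
FROM ${RHCOS_IMAGE}
RUN rpm-ostree install rng-tools && ostree container commit
EOF

# Build and push to a registry the cluster can pull from (placeholder reference).
podman build -t registry.example.com/custom/rhcos-rng:latest -f Containerfile .
podman push registry.example.com/custom/rhcos-rng:latest
```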
Dealing with a Lack of Entropy on your OpenShift Cluster
Introduction
The Linux kernel supplies two sources of random numbers, /dev/random and /dev/urandom. These character devices can supply random numbers to any application running on your machine. The random numbers supplied by the kernel on these devices come from the Linux kernel’s random-number entropy pool. The random-number entropy pool contains “sufficiently random” numbers, meaning they are good for use in things like secure communications. But what happens if the random-number entropy pool runs out of numbers? If you are reading from the /dev/random device, your application will block, waiting for new numbers to be generated. Alternatively, the /dev/urandom device is non-blocking and will create random numbers on the fly, re-using some of the entropy in the pool. This can lead to numbers that are less random than required for some use cases.
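If you want to see the state of the pool for yourself, the kernel exposes it through procfs; these are generic commands, not specific to OpenShift:

```bash
# Show how much entropy the kernel currently has available (in bits).
cat /proc/sys/kernel/random/entropy_avail

# Pull a few bytes from each device; /dev/random may block on older kernels
# when the pool is exhausted, while /dev/urandom always returns immediately.
head -c 16 /dev/urandom | xxd
head -c 16 /dev/random  | xxd
```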
OpenShift Machine Remediation
Kubernetes, and thus OpenShift, is designed to host applications in such a way that if a node hosting your application fails, the app will be rescheduled on another node automatically and everything “just keeps working”. This happens without any intervention by an administrator, letting you continue on with your life, not getting bothered by some on-call alert system. But what about that node that failed? While the app may be up and running, you have a node that is no longer pulling its weight, your cluster capacity is lessened, and if you get enough of these failed nodes, other apps may be affected or your cluster may fail.
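OpenShift's answer to this is machine health checking and automated remediation. As a hedged sketch (the labels and timeouts are assumptions), a MachineHealthCheck that replaces workers stuck in a NotReady state looks roughly like this:

```bash
# Hypothetical MachineHealthCheck; the selector labels and timeouts are examples.
oc apply -f - <<'EOF'
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: worker-health-check            # assumed name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: "Unknown"
      timeout: 300s
  maxUnhealthy: 40%                    # stop remediating if too many nodes fail at once
EOF
```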
Deploying Infisical Secrets Manager on OpenShift with Helm
In a previous blog post, Managing Secrets in OpenShift with Infisical, we walked through the process of configuring the Infisical Secrets Operator in OpenShift. The Infisical Secrets Operator allowed us to access secrets managed by Infisical from within OpenShift. But what if you want to host the Infisical application yourself instead of relying on the SaaS version? Well, then this post is for you. In this post we will talk about deploying the Infisical application itself, so that you can run a local instance of Infisical and keep all your secrets safe.
Managing Secrets in OpenShift with Infisical
Handling secrets in Kubernetes, and more specifically OpenShift, is an ever-evolving space. There are many secrets managers available, including Google Secret Manager, HashiCorp Vault, CyberArk, and Azure Key Vault, just to name a few. In this post we will be testing out a new player in the secrets management arena called Infisical.
Using cert-manager and Let's Encrypt with the Wildcard route in OCP
Introduction
So you have successfully set up your very own OpenShift cluster, and now you want to access the UI. You open a web browser and get a certificate warning.
You can click “Accept the Risk”, but what if there were a better way? Well, depending on your ability to access DNS and make changes to your DNS records, there just might be! This blog post will take you through the process of using the cert-manager Operator for Red Hat OpenShift to configure the wildcard ingress certificate for your cluster. We will use the Let’s Encrypt service to retrieve a valid signed certificate and keep it up to date within your cluster. As an added bonus, we will also update the API certificate so that it is signed by a valid CA as well.
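At the heart of that setup is a cert-manager issuer that points at Let's Encrypt. The sketch below uses a DNS-01 solver with Cloudflare purely as an example; the issuer name, email, and secret names are assumptions, and your DNS provider will dictate the solver block.

```bash
# Hypothetical ClusterIssuer for Let's Encrypt using a Cloudflare DNS-01 solver.
oc apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns              # assumed name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com         # placeholder email
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:       # assumes a pre-created API token Secret
              name: cloudflare-api-token
              key: api-token
EOF
```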
Using gMSA with Windows Containers in OCP
gMSA and OpenShift
In previous articles, we have shown how you can manage Windows Containers in OpenShift using the Windows Machine Config Operator. By configuring this feature, we are able to deploy and manage Windows container images just like any other container image with OpenShift. This gives us additional paths to application modernization, allowing app developers to move things like legacy .NET apps to OpenShift without having to rewrite large portions of code.
Creating a Windows Template for use with OpenShift Windows Machine Config Operator
If you are looking to try out Windows Containers managed by Kubernetes, you are going to need at least one Windows Server to host the containers. You can follow the steps from OpenShift Windows Containers - Bring Your Own Host and manually add a Windows server to an OpenShift Cluster. You can also use the Windows Machine Config Operator (WMCO) to automatically scale Windows nodes up and down in your cluster.
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. Storage providers for external storage arrays such as the Pure CSI Driver, Dell, the Infinidat CSI Driver, and the Synology CSI Driver also exist. While I do not have a Pure Storage Array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes along with ReadWriteMany (RWX) persistent block volumes, as well as the creation of snapshots on both of these volume types.
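For a feel of the end state, the driver ultimately surfaces as a StorageClass. The provisioner name below matches the Synology SAN driver, but the parameter keys are illustrative assumptions; check the driver's documentation for the exact set your version supports.

```bash
# Hypothetical StorageClass for the Synology CSI driver; parameter names are assumptions.
oc apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi                 # assumed name
provisioner: csi.san.synology.com      # provisioner name used by the Synology SAN driver
parameters:
  fsType: ext4                         # filesystem for RWO volumes (assumption)
  location: /volume1                   # DSM volume to carve LUNs from (assumption)
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```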
Explaining OpenShift Router Configurations
Introduction
While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will use either HTTP, HTTPS, or passthrough TCP to communicate with your application. By better understanding the traffic flow and the protocol used, we were able to quickly resolve the issue and get the application up and running. With this in mind, I figured it would make sense to share this experience so others could benefit from it.
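In practice the behavior hinges on the route's TLS termination type, which decides whether the router speaks plain HTTP, re-encrypted HTTPS, or raw passthrough TCP to the pod. A generic sketch (service name and port are placeholders):

```bash
# Hypothetical Route showing the tls.termination setting; edge routes speak plain HTTP
# to the pod, re-encrypt routes speak HTTPS to the pod, and passthrough routes hand the
# raw TLS/TCP stream straight to the pod.
oc apply -f - <<'EOF'
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                 # assumed name
spec:
  to:
    kind: Service
    name: my-app               # assumed Service
  port:
    targetPort: 8443           # assumed port
  tls:
    termination: reencrypt     # or: edge, passthrough
EOF
```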
Creating Custom Operator Hub Catalogs
Introduction
By default, every new OpenShift cluster has a fully populated OperatorHub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator catalog will be the topic of this blog post.
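Two of the levers involved are the cluster-scoped OperatorHub resource, which can switch off the default catalogs, and a CatalogSource that points at an index image of your own. These are generic examples; the index image reference is a placeholder.

```bash
# Disable the default catalog sources shipped with the cluster.
oc patch operatorhub cluster --type merge \
  -p '{"spec":{"disableAllDefaultSources":true}}'

# Register a custom catalog built from your own index image (placeholder reference).
oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-custom-catalog            # assumed name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/catalogs/my-index:latest   # placeholder image
  displayName: My Custom Catalog
EOF
```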
Creating ExternalIPs in OpenShift with MetalLB
Introduction
Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. These can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS with SNI, or TLS with SNI. However, not all applications are HTTP-based, and some will use protocols other than TCP, such as UDP and even SCTP. How do you make these applications available to consumers outside of your OpenShift cluster? You might try using a NodePort, which will open a port on all worker nodes for your given service and forward that traffic on to the proper application. You can also manually configure ExternalIP and IP failover to make an external IP available for your application in a highly available configuration; however, this is a time-consuming process.
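This is the gap MetalLB fills: define a pool of addresses it may hand out, advertise them, and expose the workload with a Service of type LoadBalancer. The address range, names, and example UDP service below are assumptions:

```bash
# Hypothetical MetalLB pool, advertisement, and a UDP service; addresses and names are placeholders.
oc apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250    # placeholder range on the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool
---
apiVersion: v1
kind: Service
metadata:
  name: my-syslog                    # assumed UDP workload
spec:
  type: LoadBalancer                 # MetalLB assigns an external IP from the pool
  selector:
    app: my-syslog
  ports:
    - protocol: UDP
      port: 514
      targetPort: 514
EOF
```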
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction
OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional operating systems. Unlike other Kubernetes distributions, where you must manage the base operating system as well as your Kubernetes distribution, with OpenShift 4 the RHCOS operating system and the Kubernetes platform are tightly coupled, and management of RHCOS, including any system-level configuration, is handled by MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your control plane and worker nodes.
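For context, a MachineConfig is just declarative Ignition content aimed at a pool through a role label. A minimal, hedged example that writes a file to every worker node (the file path and contents are placeholders):

```bash
# Hypothetical MachineConfig dropping a file on all worker nodes.
oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file            # assumed name
  labels:
    machineconfiguration.openshift.io/role: worker   # targets the worker MachineConfigPool
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/example.conf          # placeholder file
          mode: 0644
          contents:
            source: data:,hello%20from%20a%20machineconfig
EOF
```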
Using Citrix Netscaler with OpenShift
Introduction
The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI, to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it was for the platform to solve load-balancing incoming requests for applications. OpenShift achieved this through the use of “Routes”. Upstream in Kubernetes, this need has been addressed through Ingress and, more recently, the Gateway API.
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they have the ability to modify your Kubernetes deployments, automatically injecting “sidecars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them: “Mutating Webhooks”. Mutating Webhooks are the specific class of Admission Controller that can inject changes into your Kubernetes definitions.
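The wiring behind this is a MutatingWebhookConfiguration, which tells the API server which requests to send to your webhook service for patching. A hedged sketch follows; the webhook name, service, namespace, and CA bundle are placeholders:

```bash
# Hypothetical webhook registration; the API server will POST pod CREATE requests
# to the named service, which returns a JSONPatch to apply.
oc apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector                  # assumed name
webhooks:
  - name: inject.sidecar.example.com      # assumed webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore
    clientConfig:
      service:
        name: sidecar-injector            # assumed Service running the webhook
        namespace: sidecar-injector
        path: /mutate
      # caBundle: <base64 CA that signed the webhook's serving certificate>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
EOF
```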
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction
If you are like me, you have multiple lab clusters of OpenShift or OKD in your home or work lab. Each of these clusters takes up a significant amount of resources, so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported cloud providers, and you power the machines down to save costs when you are not using them. If you leave the cluster powered off for more than two weeks, you will find that when you power it back on, you are unable to connect to the cluster or the console. Most times, this is due to one or more internal certificates expiring. There is a quick fix for this, which we will discuss below.
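The usual symptom is a pile of pending certificate signing requests once the kubelets come back up, and the generic recovery looks roughly like this (run with a kubeconfig that still has cluster-admin access):

```bash
# List any pending CSRs generated by nodes rejoining the cluster.
oc get csr

# Approve everything that is pending so the kubelet client/serving certificates can rotate.
oc get csr -o name | xargs oc adm certificate approve

# Watch the nodes come back to Ready once their certificates are renewed.
oc get nodes -w
```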
NMState Operator and OpenShift Container Platform
UPDATE: An updated blog post on this topic has been written and is available here: Creating a storage network in OpenShift
Introduction
OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use bare metal hosts, or take advantage of existing Windows servers that you already manage? Or perhaps you just wanted to try out Windows Containers without going through all the steps of setting up a Windows template just to deploy a single machine?
Using Kata Containers with OpenShift Container Platform
Introduction
Containerization ushered in a new way to run workloads both on-prem and in the cloud securely and efficiently. By leveraging CGroups and Namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner. These applications share the same kernel and machine hardware. While CGroups and Namespaces are a powerful way of defining isolation between applications, faults have been found that allow breaking out of their CGroups jail. Additional measures such as SELinux can assist with keeping applications inside their container, but sometimes your application or workload needs more isolation than CGroups, Namespaces, and SELinux can provide.
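That extra isolation is what Kata Containers (OpenShift sandboxed containers) provides by running each pod inside its own lightweight VM. Once the operator has prepared the nodes, opting a workload in is essentially a one-line change; the pod name and image below are examples:

```bash
# Hypothetical pod scheduled onto the kata runtime instead of the default runc.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: isolated-demo              # assumed name
spec:
  runtimeClassName: kata           # runtime class installed by the sandboxed containers operator
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
      command: ["sleep", "infinity"]
EOF
```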
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “thin provisioning”. How do you go about controlling this in OpenShift? Enter ClusterResourceQuotas and project-level ResourceQuotas.
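As a rough sketch of that kind of guardrail, a ClusterResourceQuota can cap total requested storage and PVC counts across every project that matches a selector. The label and limits below are assumptions:

```bash
# Hypothetical cluster-wide storage quota applied to all projects with a matching label.
oc apply -f - <<'EOF'
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: team-a-storage               # assumed name
spec:
  selector:
    labels:
      matchLabels:
        team: a                      # assumed project label
  quota:
    hard:
      requests.storage: 500Gi        # total requested storage across matching projects
      persistentvolumeclaims: "25"   # total number of PVCs
EOF
```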
OpenShift FileIntegrity Scanning
Introduction
The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. This operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring that the approved changes do not trigger an alert. The File Integrity Operator is based on the open source project AIDE (Advanced Intrusion Detection Environment).
Kubectl and OC Command Output
Introduction
After running an OpenShift or Kubernetes cluster for a little while, you will find that you need to create reports on specific data about the cluster itself. Reporting on things like project owners, container images in use, and project quotas is just some of what you might be asked about. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands. For very complex reports, these tactics may be required. For simpler requests, there is another way: using the provided command-line client, such as “oc” or “kubectl”, and a built-in feature that allows you to specify the output format for your query.
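A few generic examples of that built-in feature; the namespaces and fields are arbitrary:

```bash
# JSON or YAML dumps of any resource.
oc get pods -n openshift-console -o yaml

# Pull out just the fields you care about with jsonpath...
oc get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# ...or build a quick report with custom-columns.
oc get pvc -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,SIZE:.spec.resources.requests.storage
```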
Creating a multi-host OKD Cluster
Introduction
In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster. It will build on what we have done in the previous posts but extend the process out to make a multi-node cluster and add working SSL certificates.
Openshift, Azure, and Ansible
Intro
In my last post, I showed how to deploy an All-in-One OKD system. This is great for some initial learning, but keeping it up and running can get expensive over time. You can always shut it down and re-create it later, but this can take time and you can end up making typos and errors if you aren’t careful. If you want to be able to create (and destroy) these All-in-One environments in a more automatic way read on.
OpenShift on Azure - The Manual Way
Intro
The other day I was reading an article on OpenShift All-in-One and thought it would be interesting to re-create it with OKD, the community version of OpenShift. We are going to create an All-in-One (AiO) deployment of version 3.11 of OKD/OpenShift on Azure.
This post is going to show you how to do a manual install of OKD on just one host. Why would you want to do this? It will get you a fully working instance of OpenShift and even give you cluster-admin rights so you can learn how to administer it. One thing you don’t want to do is run a production load on this! While this is a great way to learn OpenShift and even Kubernetes, it is not the way to run a production application.
Tag: Virtualization
OpenShift Virtualization and Resource Overcommitment
As OpenShift Virtualization continues to gain attention and attract new users in the field, certain topics come up over and over again:
- How do I overcommit CPU?
- How do I overcommit Memory?
- How do I make sure my overcommitting doesn’t have adverse effects on my VMs?
Tag: Kubevirt
Manually moving a Virtual Machine from VMware to OpenShift Virtualization
When it comes to migrating VMs from VMware to OpenShift Virtualization, the Migration Toolkit for Virtualization (MTV) is the easiest option. But what happens if you want to move an unsupported OS over to OpenShift Virtualization? Can this even be done? The short answer is “Yes”, and the longer answer is “It depends on the OS you want to move.”
Tag: Vmware
Manually moving a Virtual Machine from VMware to OpenShift Virtualization
When it comes to migrating VMs from VMware to OpenShift Virtualization, the Migration Toolkit for Virtualization (MTV) is the easiest option. But what happens if you want to move an unsupported OS over to OpenShift Virtualization? Can this even be done? The short answer is “Yes”, and the longer answer is “It depends on the OS you want to move.”
Tag: Windows
Installing OpenShift using Windows Subsystem for Linux
With interest in OpenShift, and more specifically OpenShift Virtualization, taking off, users who do not typically use Linux need a Linux workstation in order to deploy OpenShift. While the oc command used to manage OpenShift does work on Windows, other utilities, such as the openshift-install command used to deploy OpenShift clusters, do not. So what's a Windows-using future OpenShift administrator supposed to do?
Using gMSA with Windows Containers in OCP
gMSA and OpenShift
In previous articles, we have shown how you can manage Windows Containers in OpenShift using the Windows Machine Config Operator. By configuring this feature, we are able to deploy and manage Windows container images just like any other container image with OpenShift. This gives us additional paths to application modernization, allowing app developers to move things like legacy .NET apps to OpenShift without having to rewrite large portions of code.
Creating a Windows Template for use with OpenShift Windows Machine Config Operator
If you are looking to try out Windows Containers managed by Kubernetes, you are going to need at least one Windows Server to host the containers. You can follow the steps from OpenShift Windows Containers - Bring Your Own Host and manually add a Windows server to an OpenShift Cluster. You can also use the Windows Machine Config Operator (WMCO) to automatically scale Windows nodes up and down in your cluster.
Windows Containers on Windows 10 or 11, without Docker Desktop
When it comes to running Windows Containers, the only straightforward way to run them has been through Docker Desktop. Starting in August of 2021, the license that Docker Desktop was distributed under changed. It became “free for personal use” only. If you were using it as a part of your day-to-day job, you were going to need a subscription/license (see Docker Subscriptions for more details). But what if you don’t need the fancy UI, and you just want to run Windows Containers on your Windows 10 or 11 host? One option is to download and manually install the Moby binaries from GitHub. If you are looking for a more automated process, Stevedore may be for you.
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use bare metal hosts, or take advantage of existing Windows servers that you already manage? Or perhaps you just wanted to try out Windows Containers without going through all the steps of setting up a Windows template just to deploy a single machine?
Tag: Wsl
Installing OpenShift using Windows Subsystem for Linux
With interest in OpenShift, and more specifically OpenShift Virtualization, taking off, users who do not typically use Linux need a Linux workstation in order to deploy OpenShift. While the oc command used to manage OpenShift does work on Windows, other utilities, such as the openshift-install command used to deploy OpenShift clusters, do not. So what's a Windows-using future OpenShift administrator supposed to do?
Tag: Nas
Creating a storage network in OpenShift
As my usage of OpenShift Virtualization increases, I am finding that I need to create a dedicated network for my storage arrays. For my lab storage I use two Synology devices; both are configured with NFS and iSCSI, and I use these storage types interchangeably. However, in the current configuration, all storage traffic (NFS or iSCSI) is routed over two hops and comes into the server over a 1Gb interface. I would like to change this to work like my vSphere lab setup, where all storage traffic goes over my dedicated network “vlan20”, which is not routed and has a dedicated 10Gb switch.
Tag: Nmstate
Creating a storage network in OpenShift
As my usage of OpenShift Virtualization increases, I am finding that I need to create a dedicated network for my storage arrays. For my lab storage I use two Synology devices; both are configured with NFS and iSCSI, and I use these storage types interchangeably. However, in the current configuration, all storage traffic (NFS or iSCSI) is routed over two hops and comes into the server over a 1Gb interface. I would like to change this to work like my vSphere lab setup, where all storage traffic goes over my dedicated network “vlan20”, which is not routed and has a dedicated 10Gb switch.
Tag: Ocpv
Creating a storage network in OpenShift
As my usage of OpenShift Virtualization increases, I am finding that I need to create a dedicated network for my storage arrays. For my lab storage I use two Synology devices; both are configured with NFS and iSCSI, and I use these storage types interchangeably. However, in the current configuration, all storage traffic (NFS or iSCSI) is routed over two hops and comes into the server over a 1Gb interface. I would like to change this to work like my vSphere lab setup, where all storage traffic goes over my dedicated network “vlan20”, which is not routed and has a dedicated 10Gb switch.
Tag: Daemonset
Updating RHCOS Images with Custom Configurations
In the last blog post, Dealing with a Lack of Entropy on your OpenShift Cluster, we deployed the rng-tools software as a DaemonSet in a cluster. By using a DaemonSet, we took advantage of the tools that Kubernetes gives us for deploying an application to all targeted nodes in a cluster. This worked well for getting the rng daemon up and running on the nodes that required it, but not all software will work this way. What if we need to install or update a package on the host Red Hat CoreOS (RHCOS) boot image? In the past this was frowned upon, if not impossible: RHCOS is an immutable OS delivered to you by Red Hat that cannot be modified.
Dealing with a Lack of Entropy on your OpenShift Cluster
Introduction
The Linux kernel supplies two sources of random numbers, /dev/random and /dev/urandom. These character devices can supply random numbers to any application running on your machine. The random numbers supplied by the kernel on these devices come from the Linux kernel’s random-number entropy pool. The random-number entropy pool contains “sufficiently random” numbers, meaning they are good for use in things like secure communications. But what happens if the random-number entropy pool runs out of numbers? If you are reading from the /dev/random device, your application will block, waiting for new numbers to be generated. Alternatively, the /dev/urandom device is non-blocking and will create random numbers on the fly, re-using some of the entropy in the pool. This can lead to numbers that are less random than required for some use cases.
Tag: MachineConfig
Updating RHCOS Images with Custom Configurations
In the last blog post, Dealing with a Lack of Entropy on your OpenShift Cluster, we deployed the rng-tools software as a DaemonSet in a cluster. By using a DaemonSet, we took advantage of the tools that Kubernetes gives us for deploying an application to all targeted nodes in a cluster. This worked well for getting the rng daemon up and running on the nodes that required it, but not all software will work this way. What if we need to install or update a package on the host Red Hat CoreOS (RHCOS) boot image? In the past this was frowned upon, if not impossible: RHCOS is an immutable OS delivered to you by Red Hat that cannot be modified.
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction
OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional operating systems. Unlike other Kubernetes distributions, where you must manage the base operating system as well as your Kubernetes distribution, with OpenShift 4 the RHCOS operating system and the Kubernetes platform are tightly coupled, and management of RHCOS, including any system-level configuration, is handled by MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your control plane and worker nodes.
Tag: Rhcos
Updating RHCOS Images with Custom Configurations
In the last blog post, Dealing with a Lack of Entropy on your OpenShift Cluster, we deployed the rng-tools software as a DaemonSet in a cluster. By using a DaemonSet, we took advantage of the tools that Kubernetes gives us for deploying an application to all targeted nodes in a cluster. This worked well for getting the rng daemon up and running on the nodes that required it, but not all software will work this way. What if we need to install or update a package on the host Red Hat CoreOS (RHCOS) boot image? In the past this was frowned upon, if not impossible: RHCOS is an immutable OS delivered to you by Red Hat that cannot be modified.
Tag: SCC
Dealing with a Lack of Entropy on your OpenShift Cluster
Introduction
The Linux kernel supplies two sources of random numbers, /dev/random and /dev/urandom. These character devices can supply random numbers to any application running on your machine. The random numbers supplied by the kernel on these devices come from the Linux kernel’s random-number entropy pool. The random-number entropy pool contains “sufficiently random” numbers, meaning they are good for use in things like secure communications. But what happens if the random-number entropy pool runs out of numbers? If you are reading from the /dev/random device, your application will block, waiting for new numbers to be generated. Alternatively, the /dev/urandom device is non-blocking and will create random numbers on the fly, re-using some of the entropy in the pool. This can lead to numbers that are less random than required for some use cases.
Tag: Infrastructure
OpenShift Machine Remediation
Kubernetes, and thus OpenShift, is designed to host applications in such a way that if a node hosting your application fails, the app will be rescheduled on another node automatically and everything “just keeps working”. This happens without any intervention by an administrator, letting you continue on with your life, not getting bothered by some on-call alert system. But what about that node that failed? While the app may be up and running, you have a node that is no longer pulling its weight, your cluster capacity is lessened, and if you get enough of these failed nodes, other apps may be affected or your cluster may fail.
Tag: Node Management
OpenShift Machine Remediation
Kubernetes, and thus OpenShift, is designed to host applications in such a way that if a node hosting your application fails, the app will be rescheduled on another node automatically and everything “just keeps working”. This happens without any intervention by an administrator, letting you continue on with your life, not getting bothered by some on-call alert system. But what about that node that failed? While the app may be up and running, you have a node that is no longer pulling its weight, your cluster capacity is lessened, and if you get enough of these failed nodes, other apps may be affected or your cluster may fail.
Tag: Infisical
Deploying Infisical Secrets Manager on OpenShift with Helm
In a previous blog post, Managing Secrets in OpenShift with Infisical, we walked through the process of configuring the Infisical Secrets Operator in OpenShift. The Infisical Secrets Operator allowed us to access secrets managed by Infisical from within OpenShift. But what if you want to host the Infisical application yourself instead of relying on the SaaS version? Well, then this post is for you. In this post we will talk about deploying the Infisical application itself, so that you can run a local instance of Infisical and keep all your secrets safe.
Managing Secrets in OpenShift with Infisical
Handling secrets in Kubernetes, and more specifically OpenShift, is an ever-evolving space. There are many secrets managers available, including Google Secret Manager, HashiCorp Vault, CyberArk, and Azure Key Vault, just to name a few. In this post we will be testing out a new player in the secrets management arena called Infisical.
Tag: Secrets
Deploying Infisical Secrets Manager on OpenShift with Helm
In a previous blog post, Managing Secrets in OpenShift with Infisical, we walked through the process of configuring the Infisical Secrets Operator in OpenShift. The Infisical Secrets Operator allowed us to access secrets managed by Infisical from within OpenShift. But what if you want to host the Infisical application yourself instead of relying on the SaaS version? Well, then this post is for you. In this post we will talk about deploying the Infisical application itself, so that you can run a local instance of Infisical and keep all your secrets safe.
Managing Secrets in OpenShift with Infisical
Handling secrets in Kubernetes, and more specifically OpenShift, is an ever-evolving space. There are many secrets managers available, including Google Secret Manager, HashiCorp Vault, CyberArk, and Azure Key Vault, just to name a few. In this post we will be testing out a new player in the secrets management arena called Infisical.
Tag: Cert-Manager
Using cert-manager and Let's Encrypt with the Wildcard route in OCP
Introduction
So you have successfully set up your very own OpenShift cluster, and now you want to access the UI. You open a web browser and get a certificate warning.
You can click “Accept the Risk”, but what if there were a better way? Well, depending on your ability to access DNS and make changes to your DNS records, there just might be! This blog post will take you through the process of using the cert-manager Operator for Red Hat OpenShift to configure the wildcard ingress certificate for your cluster. We will use the Let’s Encrypt service to retrieve a valid signed certificate and keep it up to date within your cluster. As an added bonus, we will also update the API certificate so that it is signed by a valid CA as well.
Tag: Letsencrypt
Using cert-manager and Let's Encrypt with the Wildcard route in OCP
Introduction
So you have successfully set up your very own OpenShift cluster, and now you want to access the UI. You open a web browser and get a certificate warning.
You can click “Accept the Risk”, but what if there were a better way? Well, depending on your ability to access DNS and make changes to your DNS records, there just might be! This blog post will take you through the process of using the cert-manager Operator for Red Hat OpenShift to configure the wildcard ingress certificate for your cluster. We will use the Let’s Encrypt service to retrieve a valid signed certificate and keep it up to date within your cluster. As an added bonus, we will also update the API certificate so that it is signed by a valid CA as well.
Creating a multi-host OKD Cluster
Introduction
In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster. It will build on what we have done in the previous posts but extend the process out to make a multi-node cluster and add working SSL certificates.
Tag: Wildcard Certificate
Using cert-manager and Let's Encrypt with the Wildcard route in OCP
Introduction
So you have successfully set up your very own OpenShift cluster, and now you want to access the UI. You open a web browser and get a certificate warning.
You can click “Accept the Risk”, but what if there were a better way? Well, depending on your ability to access DNS and make changes to your DNS records, there just might be! This blog post will take you through the process of using the cert-manager Operator for Red Hat OpenShift to configure the wildcard ingress certificate for your cluster. We will use the Let’s Encrypt service to retrieve a valid signed certificate and keep it up to date within your cluster. As an added bonus, we will also update the API certificate so that it is signed by a valid CA as well.
Tag: Containers
Using gMSA with Windows Containers in OCP
gMSA and OpenShift
In previous articles, we have shown how you can manage Windows Containers in OpenShift using the Windows Machine Config Operator. By configuring this feature, we are able to deploy and manage Windows container images just like any other container image with OpenShift. This gives us additional paths to application modernization, allowing app developers to move things like legacy .NET apps to OpenShift without having to rewrite large portions of code.
Creating a Windows Template for use with OpenShift Windows Machine Config Operator
If you are looking to try out Windows Containers managed by Kubernetes, you are going to need at least one Windows Server to host the containers. You can follow the steps from OpenShift Windows Containers - Bring Your Own Host and manually add a Windows server to an OpenShift Cluster. You can also use the Windows Machine Config Operator (WMCO) to automatically scale Windows nodes up and down in your cluster.
Windows Containers on Windows 10 or 11, without Docker Desktop
When it comes to running Windows Containers, the only straightforward way to run them has been through Docker Desktop. Starting in August of 2021, the license that Docker Desktop was distributed under changed. It became “free for personal use” only. If you were using it as a part of your day-to-day job, you were going to need a subscription/license (see Docker Subscriptions for more details). But what if you don’t need the fancy UI, and you just want to run Windows Containers on your Windows 10 or 11 host? One option is to download and manually install the Moby binaries from GitHub. If you are looking for a more automated process, Stevedore may be for you.
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on the machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux). Over these past few years I have become addicted to using a Mac for my day-to-day work… However, starting last year, the change Docker made to their licensing terms for Docker Desktop, along with constant reminders to “upgrade to the latest version”, has forced me to look elsewhere.
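For anyone in the same boat, the replacement workflow on macOS is small enough to show here. These are the standard Podman commands; exact flags can vary slightly between releases:

```bash
# Install the client, then create and start the Linux VM Podman uses on macOS.
brew install podman
podman machine init
podman machine start

# From here, the docker-style commands work as expected.
podman run --rm -it registry.access.redhat.com/ubi9/ubi-minimal sh
```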
NMState Operator and OpenShift Container Platform
UPDATE: An updated blog post on this topic has been written and is available here: Creating a storage network in OpenShift
Introduction
OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere
Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere. There are multiple editions available, including Basic, Standard, Advanced, and Community. VMware provides a comparison of the different editions and the features they offer here: Compare VMware Tanzu Editions. This blog post will focus on deploying the Community Edition on vSphere. The Community Edition is different from the commercial offerings; the cluster deployment and management process differs when using a commercial offering.
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use bare metal hosts, or take advantage of existing Windows servers that you already manage? Or perhaps you just wanted to try out Windows Containers without going through all the steps of setting up a Windows template just to deploy a single machine?
Tag: GMSA
Using gMSA with Windows Containers in OCP
gMSA and OpenShift
In previous articles, we have shown how you can manage Windows Containers in OpenShift using the Windows Machine Config Operator. By configuring this feature, we are able to deploy and manage Windows container images just like any other container image with OpenShift. This gives us additional paths to application modernization, allowing app developers to move things like legacy .NET apps to OpenShift without having to rewrite large portions of code.
Tag: Powershell
Using gMSA with Windows Containers in OCP
gMSA and OpenShift
In previous articles, we have shown how you can manage Windows Containers in OpenShift using the Windows Machine Config Operator. By configuring this feature, we are able to deploy and manage Windows container images just like any other container image with OpenShift. This gives us additional paths to application modernization, allowing app developers to move things like legacy .NET apps to OpenShift without having to rewrite large portions of code.
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use bare metal hosts, or take advantage of existing Windows servers that you already manage? Or perhaps you just wanted to try out Windows Containers without going through all the steps of setting up a Windows template just to deploy a single machine?
Tag: Tutorial
Using gMSA with Windows Containers in OCP
gMSA and OpenShift
In previous articles, we have shown how you can manage Windows Containers in OpenShift using the Windows Machine Config Operator. By configuring this feature, we are able to deploy and manage Windows container images just like any other container image with OpenShift. This gives us additional paths to application modernization, allowing app developers to move things like legacy .NET apps to OpenShift without having to rewrite large portions of code.
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. Storage providers for external storage arrays such as the Pure CSI Driver, Dell, the Infinidat CSI Driver, and the Synology CSI Driver also exist. While I do not have a Pure Storage Array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes along with ReadWriteMany (RWX) persistent block volumes, as well as the creation of snapshots on both of these volume types.
Signing your Git Commits with SSH Keys
In August of this year, there was a bunch of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories that were improperly named or typosquatting on legitimate projects. That said, the incident has led to further discussion and attention around the code supply chain, and ensuring that code contributions, libraries, and releases are validated before use. One way to do this is by signing code commits.
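Since Git 2.34, commits can be signed with an ordinary SSH key instead of GPG; the generic setup looks like this (the key path is a placeholder):

```bash
# Tell git to sign with SSH instead of GPG, using an existing key.
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub

# Sign a single commit, or turn on signing for every commit by default.
git commit -S -m "example signed commit"
git config --global commit.gpgsign true
```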
Explaining OpenShift Router Configurations
Introduction
While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will use either HTTP, HTTPS, or passthrough TCP to communicate with your application. By better understanding the traffic flow and the protocol used, we were able to quickly resolve the issue and get the application up and running. With this in mind, I figured it would make sense to share this experience so others could benefit from it.
Creating Custom Operator Hub Catalogs
Introduction
By default, every new OpenShift cluster has a fully populated OperatorHub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator catalog will be the topic of this blog post.
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction
OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional operating systems. Unlike other Kubernetes distributions, where you must manage the base operating system as well as your Kubernetes distribution, with OpenShift 4 the RHCOS operating system and the Kubernetes platform are tightly coupled, and management of RHCOS, including any system-level configuration, is handled by MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your control plane and worker nodes.
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on the machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux). Over these past few years I have become addicted to using a Mac for my day-to-day work… However, starting last year, the change Docker made to their licensing terms for Docker Desktop, along with constant reminders to “upgrade to the latest version”, has forced me to look elsewhere.
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they have the ability to modify your Kubernetes deployments, automatically injecting “sidecars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them: “Mutating Webhooks”. Mutating Webhooks are the specific class of Admission Controller that can inject changes into your Kubernetes definitions.
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction
If you are like me, you have multiple lab clusters of OpenShift or OKD in your home or work lab. Each of these clusters takes up a significant amount of resources, so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported cloud providers, and you power the machines down to save costs when you are not using them. If you leave the cluster powered off for more than two weeks, you will find that when you power it back on, you are unable to connect to the cluster or the console. Most times, this is due to one or more internal certificates expiring. There is a quick fix for this, which we will discuss below.
NMState Operator and OpenShift Container Platform
UPDATE: An updated blog post on this topic has been written and is available here: Creating a storage network in OpenShift
Introduction
OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly using storage protocols such as NFS or iSCSI. In many cases these storage networks will be served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere
Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere. There are multiple editions available, including Basic, Standard, Advanced, and Community. VMware provides a comparison of the different editions and the features they offer here: Compare VMware Tanzu Editions. This blog post will focus on deploying the Community Edition on vSphere. The Community Edition is different from the commercial offerings; the cluster deployment and management process differs when using a commercial offering.
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use bare metal hosts, or take advantage of existing Windows servers that you already manage? Or perhaps you just wanted to try out Windows Containers without going through all the steps of setting up a Windows template just to deploy a single machine?
Using Kata Containers with OpenShift Container Platform
Introduction
Containerization ushered in a new way to run workloads both on-prem and in the cloud securely and efficiently. By leveraging CGroups and Namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner. These applications share the same kernel and machine hardware. While CGroups and Namespaces are a powerful way of defining isolation between applications, faults have been found that allow breaking out of their CGroups jail. Additional measures such as SELinux can assist with keeping applications inside their container, but sometimes your application or workload needs more isolation than CGroups, Namespaces, and SELinux can provide.
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “thin provisioning”. How do you go about controlling this in OpenShift? Enter ClusterResourceQuotas and project-level ResourceQuotas.
OpenShift FileIntegrity Scanning
Introduction
The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. This operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring that the approved changes do not trigger an alert. The File Integrity Operator is based on the open source project AIDE (Advanced Intrusion Detection Environment).
Tag: WMCO
Creating a Windows Template for use with OpenShift Windows Machine Config Operator
If you are looking to try out Windows Containers managed by Kubernetes, you are going to need at least one Windows Server to host the containers. You can follow the steps from OpenShift Windows Containers - Bring Your Own Host and manually add a Windows server to an OpenShift Cluster. You can also use the Windows Machine Config Operator (WMCO) to automatically scale Windows nodes up and down in your cluster.
Tag: Csi
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. Storage providers for external storage arrays such as the Pure CSI Driver, Dell, the Infinidat CSI Driver, and the Synology CSI Driver also exist. While I do not have a Pure Storage Array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes along with ReadWriteMany (RWX) persistent block volumes, as well as the creation of snapshots on both of these volume types.
Tag: Iscsi
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. Storage providers for external storage arrays, such as the Pure CSI Driver, Dell, the Infinidat CSI Driver, and the Synology CSI Driver, also exist. While I do not have a Pure Storage array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes along with ReadWriteMany (RWX) persistent block volumes, as well as the creation of snapshots on both of these volume types.
NMState Operator and OpenShift Container Platform
UPDATE: An updated blog post on this topic has been written and is available here: Creating a storage network in OpenShift
Introduction
OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly using storage protocols such as NFS or iSCSI. In many cases these storage networks are served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
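The kind of configuration involved is a NodeNetworkConfigurationPolicy handled by the NMState operator; the sketch below tags a VLAN onto a spare NIC, with the interface name, VLAN ID, and address being placeholders for my lab values (per-node addressing would need a policy per node).

```bash
# Sketch of a storage VLAN interface on the worker nodes; ens224, VLAN 20,
# and the IP address are lab-specific placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-vlan20
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: ens224.20
        type: vlan
        state: up
        vlan:
          base-iface: ens224
          id: 20
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.20.11
              prefix-length: 24
EOF
```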
Tag: Kubernetes
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. Storage providers for external storage arrays, such as the Pure CSI Driver, Dell, the Infinidat CSI Driver, and the Synology CSI Driver, also exist. While I do not have a Pure Storage array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes along with ReadWriteMany (RWX) persistent block volumes, as well as the creation of snapshots on both of these volume types.
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they have the ability to modify your Kubernetes deployments, automatically injecting “sidecars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them, “Mutating Webhooks”. Mutating Webhooks are the specific class of Admission Controller that can inject changes into your Kubernetes definitions.
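The registration object itself is small; the abbreviated example below tells the API server to send Deployment CREATE requests to a webhook service before they are persisted. The service name, namespace, and path are placeholders, and a real webhook also needs a serving certificate and caBundle.

```bash
# Abbreviated MutatingWebhookConfiguration; clientConfig values are placeholders
# and the caBundle/TLS setup required for a working webhook is omitted.
cat <<'EOF' | oc apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-injector
webhooks:
  - name: inject.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: example-injector
        namespace: injector
        path: /mutate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["deployments"]
EOF
```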
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere
Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere. There are multiple editions available, including Basic, Standard, Advanced, and Community. VMware provides a comparison of the different versions and the features they offer here: Compare VMware Tanzu Editions. This blog post will focus on deploying the Community Edition on vSphere. The Community Edition differs from the commercial offerings; the cluster deployment and management process is not the same as with the commercial editions.
OpenShift FileIntegrity Scanning
Introduction
The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. This operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the change is applied, ensuring that the approved changes do not trigger an alert. The File Integrity Operator is based on the open source project AIDE (Advanced Intrusion Detection Environment).
Kubectl and OC Command Output
Introduction
After running an OpenShift or Kubernetes cluster for a little while, you find that you need to create reports on specific data about the cluster itself. Reporting on things like project owners, container images in use, and project quotas is just some of what you might be asked about. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands. For very complex reports, these tactics may be required. For simpler requests, there is another way: using the provided command-line client, “oc” or “kubectl”, and its built-in ability to specify the output format for your query.
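A couple of quick examples of the sort of report this enables; the namespace and annotation used here are just illustrations.

```bash
# List pods and their container images in a namespace:
oc get pods -n openshift-console \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[*].image

# Report each project and the user recorded as its requester:
oc get projects -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/requester}{"\n"}{end}'
```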
Tag: Synology
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. Storage providers for external storage arrays, such as the Pure CSI Driver, Dell, the Infinidat CSI Driver, and the Synology CSI Driver, also exist. While I do not have a Pure Storage array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes along with ReadWriteMany (RWX) persistent block volumes, as well as the creation of snapshots on both of these volume types.
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.” You can think of Gitea as a self-hosted GitHub or GitLab service. Gitea is written in Go, and it can run on Windows, macOS, and Linux on both x86 and ARM platforms, making it a very versatile application. It can also run without the need for an external database server such as MySQL or Postgres by leveraging SQLite. If you are planning to deploy a large Git hosting solution, you should probably use one of those more robust database servers, but for a small home lab, SQLite will work just fine.
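For reference, running Gitea under Docker on DSM boils down to something like the following; the host path and published ports are my own choices, and the image tag is just an example.

```bash
# Gitea under Docker; /volume1/docker/gitea and the host ports are placeholders.
docker run -d --name gitea \
  -p 3000:3000 \
  -p 2222:22 \
  -v /volume1/docker/gitea:/data \
  gitea/gitea:latest
```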
Tag: Docker
Windows Containers on Windows 10 or 11, without Docker Desktop
When it comes to running Windows Containers, the only straightforward way to run them has been through Docker Desktop. Starting in August of 2021, the license that Docker Desktop was distributed under changed. It became “Free for personal use” only. If you were using it as part of your day-to-day job, you were going to need a subscription/license. (See Docker Subscriptions for more details.) But what if you don’t need the fancy UI, and you just want to run Windows Containers on your Windows 10 or 11 host? One option is to download and manually install the Moby binaries from GitHub. If you are looking for a more automated process, Stevedore may be for you.
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux). Over these past few years I have become addicted to using a Mac for my day-to-day work. However, starting last year, Docker’s change to the licensing terms of Docker Desktop, along with constant reminders to “upgrade to the latest version”, has forced me to look elsewhere.
Tag: Git
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.” You can think of Gitea as a self-hosted GitHub or GitLab service. Gitea is written in Go, and it can run on Windows, macOS, and Linux on both x86 and ARM platforms, making it a very versatile application. It can also run without the need for an external database server such as MySQL or Postgres by leveraging SQLite. If you are planning to deploy a large Git hosting solution, you should probably use one of those more robust database servers, but for a small home lab, SQLite will work just fine.
Signing your Git Commits with SSH Keys
In August of this year, there was a lot of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories with misleading names (typosquatting). That said, it has led to further discussion and attention around the code supply chain, and ensuring that code contributions, libraries, and releases are validated before use. One such way to do this is by signing code commits.
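Git 2.34 and later can sign commits with an existing SSH key rather than a GPG key; the key path below is an example.

```bash
# Configure Git to sign with an SSH key (key path is an example).
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true   # sign every commit by default
git commit -S -m "a signed commit"
```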
Tag: Gitea
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.” You can think of Gitea as a self-hosted GitHub or GitLab service. Gitea is written in Go, and it can run on Windows, macOS, and Linux on both x86 and ARM platforms, making it a very versatile application. It can also run without the need for an external database server such as MySQL or Postgres by leveraging SQLite. If you are planning to deploy a large Git hosting solution, you should probably use one of those more robust database servers, but for a small home lab, SQLite will work just fine.
Tag: S3
Running Gitea on Synology Arrays
I continue to find that my Synology NAS arrays are the most versatile devices in my home lab. I run many small “helper” services on my arrays through the use of the Docker service built into the 6.x and 7.x releases of the Synology DSM. What are these helper services that I am running? Things like “Grafana”, “Prometheus”, “Minio” and the topic for discussion today “Gitea”.
What is Gitea? From their website: “Gitea is a community managed lightweight code hosting solution written in Go.” You can think of Gitea as a self-hosted GitHub or GitLab service. Gitea is written in Go, and it can run on Windows, macOS, and Linux on both x86 and ARM platforms, making it a very versatile application. It can also run without the need for an external database server such as MySQL or Postgres by leveraging SQLite. If you are planning to deploy a large Git hosting solution, you should probably use one of those more robust database servers, but for a small home lab, SQLite will work just fine.
Tag: Github
Signing your Git Commits with SSH Keys
In August of this year, there was a lot of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories with misleading names (typosquatting). That said, it has led to further discussion and attention around the code supply chain, and ensuring that code contributions, libraries, and releases are validated before use. One such way to do this is by signing code commits.
Tag: Security
Signing your Git Commits with SSH Keys
In August of this year, there was a lot of panic about GitHub being compromised and 35K repos having malicious code in them. Further investigation clarified that these were GitHub repos set up for a “phishing”-style attack, using repositories with misleading names (typosquatting). That said, it has led to further discussion and attention around the code supply chain, and ensuring that code contributions, libraries, and releases are validated before use. One such way to do this is by signing code commits.
OpenShift FileIntegrity Scanning
Introduction
The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. This operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the change is applied, ensuring that the approved changes do not trigger an alert. The File Integrity Operator is based on the open source project AIDE (Advanced Intrusion Detection Environment).
Tag: MikroTik
MikroTik RouterOS and WireGuard for Road Warriors.
Introduction
As the world starts to open back up, I find that I am traveling more, but I still need access to my home network and lab equipment for demos and testing. I have tried various VPNs over the years, including OpenVPN, ZeroTier, and IPsec. These all worked well, but they required running a separate server to handle the VPN termination, and they were difficult to configure and maintain. In 2020, a new player entered the ring called WireGuard. If you don’t know what WireGuard is, here is how WireGuard describes itself:
Tag: VPN
MikroTik RouterOS and WireGuard for Road Warriors.
Introduction
As the world starts to open back up, I find that I am traveling more, but I still need access to my home network and lab equipment for demos and testing. I have tried various VPNs over the years, including OpenVPN, ZeroTier, and IPsec. These all worked well, but they required running a separate server to handle the VPN termination, and they were difficult to configure and maintain. In 2020, a new player entered the ring called WireGuard. If you don’t know what WireGuard is, here is how WireGuard describes itself:
Tag: WireGuard
MikroTik RouterOS and WireGuard for Road Warriors.
Introduction
As the world starts to open back up, I find that I am traveling more, but I still need access to my home network and lab equipment for demos and testing. I have tried various VPNs over the years, including OpenVPN, ZeroTier, and IPsec. These all worked well, but they required running a separate server to handle the VPN termination, and they were difficult to configure and maintain. In 2020, a new player entered the ring called WireGuard. If you don’t know what WireGuard is, here is how WireGuard describes itself:
Tag: Routing
Explaining OpenShift Router Configurations
Introduction
While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will use either HTTP, HTTPS, or passthrough TCP to communicate with your application. By better understanding the traffic flow and the protocol used, we were able to quickly resolve the issue and get the application up and running. So with this in mind, I figured it would make sense to share this experience so others could benefit from it.
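The termination type on the Route is what drives that behavior. As a rough sketch (the service name and ports are placeholders), an edge route has the router terminate TLS and speak plain HTTP to the pod, while a reencrypt route has it open a new HTTPS connection to the pod:

```bash
# Two example routes to the same service; "myapp" and the ports are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-edge            # router terminates TLS, plain HTTP to the pod
spec:
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080
  tls:
    termination: edge
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-reencrypt       # router re-encrypts, HTTPS to the pod
spec:
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8443
  tls:
    termination: reencrypt
EOF
```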
Tag: TLS
Explaining OpenShift Router Configurations
Introduction
While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not Available” page, even though the application pod was up and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will use either HTTP, HTTPS, or passthrough TCP to communicate with your application. By better understanding the traffic flow and the protocol used, we were able to quickly resolve the issue and get the application up and running. So with this in mind, I figured it would make sense to share this experience so others could benefit from it.
Tag: Day Two
Creating Custom Operator Hub Catalogs
Introduction
By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
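Two of the knobs involved, for the sake of illustration: the OperatorHub cluster resource can switch off the default catalog sources, and a CatalogSource can point the cluster at your own curated index image (the registry URL below is a placeholder).

```bash
# Disable the default catalog sources shipped with the cluster:
oc patch operatorhub cluster --type merge \
  -p '{"spec":{"disableAllDefaultSources":true}}'

# Add a custom catalog built from your own index image (URL is a placeholder):
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/my-index:latest
  displayName: My Curated Catalog
EOF
```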
Creating ExternalIPs in OpenShift with MetalLB
Introduction
Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. These can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS using SNI, or TLS using SNI. However, not all applications are HTTP-based, and some will use protocols other than TCP, such as UDP and even SCTP. How do you make these applications available to consumers outside of your OpenShift cluster? You might try using NodePort, which opens a port on all worker nodes for a given service and forwards that traffic on to the proper application. You can also manually configure ExternalIP and IP Failover to make an external IP available for your application in a highly available configuration; however, this is a time-consuming process.
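The gist of the MetalLB approach, once the operator is installed, is an address pool plus an advertisement; any Service of type LoadBalancer then gets an external IP from that pool. The address range below is a placeholder from a lab network.

```bash
# MetalLB layer 2 mode sketch; the address range is a lab placeholder.
cat <<'EOF' | oc apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool
EOF
```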
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction
OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional operating systems. Unlike other Kubernetes distributions, where you must manage the base operating system as well as your Kubernetes distribution, with OpenShift 4 the RHCOS operating system and the Kubernetes platform are tightly coupled; RHCOS, including any system-level configuration, is managed through MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your Control Plane and Worker nodes.
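As a small illustration of the mechanism (not a configuration you need), a MachineConfig labeled for the worker pool drops a file onto every worker node via Ignition; the Machine Config Operator rolls the change out and reboots nodes as required.

```bash
# Toy MachineConfig that writes /etc/motd on every worker node.
# The file content is URL-encoded in the Ignition data URI.
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-motd
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/motd
          mode: 0644
          contents:
            source: data:,managed%20by%20MachineConfig%0A
EOF
```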
Using Citrix Netscaler with OpenShift
Introduction
The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it is for a Kubernetes platform to solve load balancing of incoming requests for applications. OpenShift achieved this through the use of “Routes”. Upstream in Kubernetes, this need has been implemented through Ingress, and more recently the Gateway API.
Tag: Operations
Creating Custom Operator Hub Catalogs
Introduction
By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Tag: Operator Hub
Creating Custom Operator Hub Catalogs
Introduction
By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Tag: Operators
Creating Custom Operator Hub Catalogs
Introduction
By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
Tag: Externalip
Creating ExternalIPs in OpenShift with MetalLB
Introduction
Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. These can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS using SNI, or TLS using SNI. However, not all applications are HTTP-based, and some will use protocols other than TCP, such as UDP and even SCTP. How do you make these applications available to consumers outside of your OpenShift cluster? You might try using NodePort, which opens a port on all worker nodes for a given service and forwards that traffic on to the proper application. You can also manually configure ExternalIP and IP Failover to make an external IP available for your application in a highly available configuration; however, this is a time-consuming process.
Tag: Loadbalancer
Creating ExternalIPs in OpenShift with MetalLB
Introduction
Since the 3.0 release, OpenShift has shipped with what are called OpenShift Routes. These can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS using SNI, or TLS using SNI. However, not all applications are HTTP-based, and some will use protocols other than TCP, such as UDP and even SCTP. How do you make these applications available to consumers outside of your OpenShift cluster? You might try using NodePort, which opens a port on all worker nodes for a given service and forwards that traffic on to the proper application. You can also manually configure ExternalIP and IP Failover to make an external IP available for your application in a highly available configuration; however, this is a time-consuming process.
Tag: Citrix Adc
Using Citrix Netscaler with OpenShift
Introduction
The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it is for a Kubernetes platform to solve load balancing of incoming requests for applications. OpenShift achieved this through the use of “Routes”. Upstream in Kubernetes, this need has been implemented through Ingress, and more recently the Gateway API.
Tag: Ingress
Using Citrix Netscaler with OpenShift
Introduction
The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it is for a Kubernetes platform to solve load balancing of incoming requests for applications. OpenShift achieved this through the use of “Routes”. Upstream in Kubernetes, this need has been implemented through Ingress, and more recently the Gateway API.
Tag: Osx
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux). Over these past few years I have become addicted to using a Mac for my day-to-day work. However, starting last year, Docker’s change to the licensing terms of Docker Desktop, along with constant reminders to “upgrade to the latest version”, has forced me to look elsewhere.
Tag: Podman
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was “*nix”-like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or limitations on drivers and power management (Linux). Over these past few years I have become addicted to using a Mac for my day-to-day work. However, starting last year, Docker’s change to the licensing terms of Docker Desktop, along with constant reminders to “upgrade to the latest version”, has forced me to look elsewhere.
Tag: Admission Controllers
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they have the ability to modify your Kubernetes deployments, automatically injecting “sidecars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them, “Mutating Webhooks”. Mutating Webhooks are the specific class of Admission Controller that can inject changes into your Kubernetes definitions.
Tag: Cluster
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction
If you are like me, you have multiple lab clusters of OpenShift or OKD in your home or work lab. Each of these clusters takes up a significant amount of resources, so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported cloud providers, and you power the machines down to save costs when you are not using them. If you leave the cluster powered off for more than two weeks, you will find that when you power the cluster back on, you are unable to connect to the cluster or the console. Most times, this is due to one or more internal certificates expiring. There is a quick fix for this, which we will discuss below.
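For reference, the quick fix usually amounts to approving the pending certificate signing requests once the cluster is back up (run the first command a few times until no CSRs remain pending):

```bash
# Approve all pending CSRs so node certificates can be reissued.
oc get csr -o name | xargs oc adm certificate approve

# Verify the nodes come back to Ready.
oc get nodes
```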
Tag: Recovery
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction
If you are like me, you have multiple lab clusters of OpenShift or OKD in your home or work lab. Each of these clusters takes up a significant amount of resources, so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported cloud providers, and you power the machines down to save costs when you are not using them. If you leave the cluster powered off for more than two weeks, you will find that when you power the cluster back on, you are unable to connect to the cluster or the console. Most times, this is due to one or more internal certificates expiring. There is a quick fix for this, which we will discuss below.
Tag: Networking
NMState Operator and OpenShift Container Platform
UPDATE: An updated blog post on this topic has been written and is available here: Creating a storage network in OpenShift
Introduction
OpenShift Container Platform and OpenShift Data Foundation can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly using storage protocols such as NFS or iSCSI. In many cases these storage networks are served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
Tag: Tanzu
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere
Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere. There are multiple editions available, including Basic, Standard, Advanced, and Community. VMware provides a comparison of the different versions and the features they offer here: Compare VMware Tanzu Editions. This blog post will focus on deploying the Community Edition on vSphere. The Community Edition differs from the commercial offerings; the cluster deployment and management process is not the same as with the commercial editions.
Tag: Vsphere
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere
Over the past year, I have heard much about VMware Tanzu, but have yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware’s productized version of Kubernetes, designed to run on AWS, Azure, and vSphere. There are multiple editions available, including Basic, Standard, Advanced, and Community. VMware provides a comparison of the different versions and the features they offer here: Compare VMware Tanzu Editions. This blog post will focus on deploying the Community Edition on vSphere. The Community Edition differs from the commercial offerings; the cluster deployment and management process is not the same as with the commercial editions.
Tag: Windows Containers
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers with the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time preparing a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use one or more bare metal hosts, or wanted to take advantage of existing Windows servers that you already manage? Or perhaps you just wanted to try out Windows Containers without going through all the steps of setting up a Windows template to deploy a single machine?
Tag: Kata
Using Kata Containers with OpenShift Container Platform
Introduction
Containerization ushered in a new way to run workloads both on-prem and in the cloud securely and efficiently. By leveraging CGroups and Namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner. These applications share the same kernel and machine hardware. While CGroups and Namespaces are a powerful way of defining isolation between applications, faults have been found that allow breaking out of their CGroups jail. Additional measures such as SELinux can assist with keeping applications inside their container, but sometimes your application or workload needs more isolation than CGroups, Namespaces, and SELinux can provide.
Tag: Quotas
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “Thin Provisioning.” How do you go about controlling this in OpenShift? Enter the ClusterResourceQuota and project-level ResourceQuotas.
Tag: Storage
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you have. This is especially true when the storage system you are using leverages “Thin Provisioning.” How do you go about controlling this in OpenShift? Enter the ClusterResourceQuota and project-level ResourceQuotas.
Tag: AIDE
OpenShift FileIntegrity Scanning
Introduction
The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. This operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the change is applied, ensuring that the approved changes do not trigger an alert. The File Integrity Operator is based on the open source project AIDE (Advanced Intrusion Detection Environment).
Tag: Kubectl
Kubectl and OC Command Output
Introduction
After running an OpenShift or Kubernetes cluster for a little while, you find that you need to create reports on specific data about the cluster itself. Reporting on things like project owners, container images in use, and project quotas is just some of what you might be asked about. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands. For very complex reports, these tactics may be required. For simpler requests, there is another way: using the provided command-line client, “oc” or “kubectl”, and its built-in ability to specify the output format for your query.
Tag: Oc
Kubectl and OC Command Output
Introduction
After running an OpenShift or Kubernetes cluster for a little while, you find that you need to create reports on specific data about the cluster itself. Reporting on things like project owners, container images in use, and project quotas is just some of what you might be asked about. There are multiple ways to do this, such as writing your own application that queries the API, or creating a shell script that wraps a bunch of CLI commands. For very complex reports, these tactics may be required. For simpler requests, there is another way: using the provided command-line client, “oc” or “kubectl”, and its built-in ability to specify the output format for your query.
Tag: Ansible
Creating a multi-host OKD Cluster
Introduction
In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster. It will build on what we have done in the previous posts but extend the process out to make a multi-node cluster and add working SSL certificates.
Openshift, Azure, and Ansible
Intro
In my last post, I showed how to deploy an All-in-One OKD system. This is great for some initial learning, but keeping it up and running can get expensive over time. You can always shut it down and re-create it later, but this takes time, and you can end up making typos and errors if you aren’t careful. If you want to be able to create (and destroy) these All-in-One environments in a more automatic way, read on.
Tag: Azure
Creating a multi-host OKD Cluster
Introduction
In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster. It will build on what we have done in the previous posts but extend the process out to make a multi-node cluster and add working SSL certificates.
Openshift, Azure, and Ansible
Intro
In my last post, I showed how to deploy an All-in-One OKD system. This is great for some initial learning, but keeping it up and running can get expensive over time. You can always shut it down and re-create it later, but this takes time, and you can end up making typos and errors if you aren’t careful. If you want to be able to create (and destroy) these All-in-One environments in a more automatic way, read on.
OpenShift on Azure - The Manual Way
Intro
The other day I was reading an article on OpenShift All-in-One and thought it would be interesting to re-create it with OKD, the community version of OpenShift. We are going to create an All-in-One (AiO) deployment of version 3.11 of OKD/OpenShift on Azure.
This post is going to show you how to do a manual install of OKD on just one host. Why would you want to do this? This will get you a fully working instance of OpenShift and even give you cluster admin rights so you can learn how to administer it. One thing you don’t want to do is run a production load on this! While this is a great way to learn OpenShift and even Kubernetes, this is not the way to run a production application.