Below you will find pages that utilize the taxonomy term “Tutorial”
Using gMSA with Windows Containers in OCP
gMSA and OpenShift
In previous articles, we have shown how you can manage Windows Containers in OpenShift using the Windows Machine Config Operator. By configuring this feature, we are able to deploy and manage Windows Container Images just like any other Container Image with OpenShift. This gives us additional paths to application modernization, allowing app developers to move things like legacy .NET apps to OpenShift without having to rewrite large portions of code.
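As a quick preview of where gMSA fits in, a Windows pod opts into a Group Managed Service Account through its security context. A minimal sketch, assuming the WMCO's standard Windows taint and a pre-created credential spec resource named `gmsa-webapp1` (both names here are hypothetical placeholders):

```shell
# Minimal sketch of a Windows pod referencing a gMSA credential spec.
# "gmsa-webapp1" is a hypothetical GMSACredentialSpec created beforehand.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: iis-gmsa
spec:
  securityContext:
    windowsOptions:
      gmsaCredentialSpecName: gmsa-webapp1   # pod runs with this domain identity
  nodeSelector:
    kubernetes.io/os: windows                # schedule onto a Windows node
  tolerations:
  - key: os
    value: Windows
    effect: NoSchedule                       # tolerate the WMCO Windows taint
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
EOF
```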
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including workloads such as OpenShift Virtualization, or databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying Cloud Provider such as the vSphere provider. Storage providers for external storage arrays, such as the Pure, Dell, Infinidat, and Synology CSI drivers, also exist. While I do not have a Pure Storage Array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes and ReadWriteMany (RWX) persistent block volumes, as well as snapshots on both volume types.
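For a sense of what the wiring looks like, the StorageClass pointing at the driver is the key piece. A minimal sketch, assuming the upstream synology-csi driver is already installed and registers the provisioner name `csi.san.synology.com`:

```shell
# Minimal StorageClass sketch for iSCSI volumes backed by the Synology array
cat <<EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi
provisioner: csi.san.synology.com   # provisioner registered by the Synology CSI driver
parameters:
  fsType: ext4                      # filesystem used for RWO file volumes
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```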
Signing your Git Commits with SSH Keys
In August of this year, there was a wave of panic about GitHub being compromised and 35K repos containing malicious code. Further investigation clarified that these were GitHub repos set up for a phishing-style attack, created with misleading names or typosquatting. That said, the incident has led to further discussion and attention around the code supply chain, and ensuring that code contributions, libraries, and releases are validated before use. One such way to do this is by signing code commits.
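To give a flavor of what that looks like in practice, git (2.34 and later) can sign commits with an SSH key instead of GPG. A minimal sketch, assuming an existing `~/.ssh/id_ed25519` key pair:

```shell
# Tell git to use SSH rather than GPG for signatures (requires git >= 2.34)
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
# Sign all commits by default
git config --global commit.gpgsign true
# Make a signed commit and inspect the signature
# (local verification additionally needs gpg.ssh.allowedSignersFile configured)
git commit -m "some signed change"
git log --show-signature -1
```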
Explaining OpenShift Router Configurations
Introduction
While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an "Application is not Available" page, even though the application pod was up and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will use HTTP, HTTPS, or passthrough TCP to communicate with your application. By better understanding the traffic flow and the protocol used, we were able to quickly resolve the issue and get the application up and running. With this in mind, I figured it would make sense to share the experience so others could benefit from it.
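As a short illustration of those three traffic patterns, the route type is picked at creation time. A quick sketch, assuming a service named `myapp` (a hypothetical placeholder):

```shell
# Edge: the router terminates TLS and speaks plain HTTP to the pod
oc create route edge myapp-edge --service=myapp
# Re-encrypt: the router terminates TLS, then re-encrypts to the pod over HTTPS
oc create route reencrypt myapp-reencrypt --service=myapp
# Passthrough: the router forwards raw TCP; the pod must terminate TLS itself
oc create route passthrough myapp-pass --service=myapp
```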
Creating Custom Operator Hub Catalogs
Introduction
By default, every new OpenShift cluster has a fully populated Operator Hub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator in order to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. The OpenShift OperatorHub is fully configurable, and the various options for working with the Operator Catalog are the topic of this blog post.
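To preview the knobs involved, the default catalogs hang off a cluster-scoped OperatorHub resource, and custom catalogs are plain CatalogSource objects. A minimal sketch; the index image URL below is a placeholder:

```shell
# Turn off all of the Red Hat-curated default catalog sources
oc patch OperatorHub cluster --type json \
  -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

# Register a custom catalog in its place (index image is a placeholder)
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/my-catalog-index:latest
  displayName: My Custom Catalog
EOF
```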
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction
OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional Operating Systems. Unlike other Kubernetes distributions, where you must manage the base Operating System as well as your Kubernetes distribution, with OpenShift 4 the RHCOS Operating System and the Kubernetes platform are tightly coupled, and management of RHCOS, including any system-level configuration, is handled by MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your Control Plane and Worker nodes.
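To make that concrete, a MachineConfig is essentially an Ignition fragment labeled for a pool. A minimal sketch that drops a file onto every worker node; the Machine Config Operator then rolls the change out to the worker pool node by node:

```shell
# Minimal MachineConfig sketch: write a file to every node in the "worker" pool
cat <<EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-file
  labels:
    machineconfiguration.openshift.io/role: worker   # matched by the worker MachineConfigPool
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/custom-motd
        mode: 420                                    # decimal for 0644
        contents:
          source: data:,Managed%20by%20the%20MCO     # URL-encoded file contents
EOF
```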
Using Podman on Mac OSX
Over five years ago I bought an Apple MacBook Pro to learn Go and deep dive into things like containers and Kubernetes. My reasoning was simple: OSX was "*nix" like, the keyboard was amazing, and I could use Docker Desktop to run and manage containers on this machine. I could have used a Windows machine or built a Linux machine, but I wanted the ease of use of a Mac, without having to worry about the constant hassles of patching (Windows) or the limitations on drivers and power management (Linux). Over these past few years I have become addicted to using a Mac for my day-to-day work… However, starting last year, Docker's change to the licensing terms for Docker Desktop, along with its constant reminders to "upgrade to the latest version", forced me to look elsewhere.
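For the impatient, the replacement workflow on a Mac boils down to podman's managed Linux VM. A quick sketch, assuming Homebrew is installed:

```shell
# Install the podman CLI, then create and boot the Linux VM it drives
brew install podman
podman machine init
podman machine start
# Containers run inside the VM, but the CLI feels just like docker
podman run -d -p 8080:80 docker.io/library/nginx
```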
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they can modify your Kubernetes deployments, automatically injecting "side-cars" into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or special environment variables to your definitions. This magic is brought to you by Kubernetes Admission Controllers. There are multiple types of admission controllers, but today we will focus on just one of them: "Mutating Webhooks", the specific class of Admission Controller that can inject changes into your Kubernetes definitions.
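To show the shape of the object behind the magic, a MutatingWebhookConfiguration simply tells the API server which service to call for which resources. A minimal sketch; the service name, namespace, and path are placeholders, and the serving-cert caBundle is omitted:

```shell
# Minimal MutatingWebhookConfiguration sketch; service details are placeholders
cat <<EOF | oc apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector
webhooks:
- name: inject.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:                 # the API server POSTs AdmissionReview objects here
      name: sidecar-injector
      namespace: injector
      path: /mutate
    # caBundle for the webhook's serving certificate omitted in this sketch
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["deployments"]
EOF
```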
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction
If you are like me, you have multiple lab clusters of OpenShift or OKD in your home or work lab. Each of these clusters takes up a significant amount of resources, so you may shut them down to save power or compute resources. Or perhaps you are running a cluster in one of the many supported Cloud providers and power the machines down to save costs when you are not using them. If you leave the cluster powered off for more than two weeks, you will find that when you power it back on you are unable to connect to the cluster or the console. Most of the time, this is due to one or more internal certificates expiring. There is a quick fix for this, which we will discuss below.
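As a preview of the fix, the cluster generally comes back once the backlog of node certificate signing requests is approved. A minimal sketch:

```shell
# Approve every pending node CSR; repeat until none remain Pending
oc get csr -o name | xargs oc adm certificate approve
oc get csr
# Confirm the nodes rejoin and report Ready
oc get nodes
```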
NMState Operator and OpenShift Container Platform
UPDATE: An updated blog post on this topic has been written and is available here: Creating a storage network in OpenShift
Introduction
OpenShift Container Platform and OpenShift Data Foundations can supply all your data storage needs; however, sometimes you want to leverage an external storage array directly using storage protocols such as NFS or iSCSI. In many cases these storage networks are served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
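As a taste of what the operator manages, the dedicated interface is described declaratively in a NodeNetworkConfigurationPolicy. A minimal sketch, where `ens224` is a placeholder for whatever NIC carries the storage traffic (older clusters may need the `v1beta1` API version):

```shell
# Minimal NodeNetworkConfigurationPolicy sketch; "ens224" is a placeholder NIC
cat <<EOF | oc apply -f -
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-network
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # apply to worker nodes only
  desiredState:
    interfaces:
    - name: ens224
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: true                       # or static addressing for the storage segment
EOF
```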
Trying Tanzu with Tanzu Community Edition
Installing Tanzu Community Edition on vSphere
Over the past year, I have heard much about VMware Tanzu, but had yet to experience what it is or how it works. Given my infrastructure background, I am interested in how it installs and how one maintains it long term. So with those questions in mind, I decided to try installing Tanzu Community Edition.
What is Tanzu? Tanzu is VMware's productized version of Kubernetes, designed to run on AWS, Azure, and vSphere. There are multiple editions available, including Basic, Standard, Advanced, and Community. VMware provides a comparison of the different versions and the features they offer here: Compare VMware Tanzu Editions. This blog post will focus on deploying the Community Edition on vSphere. Note that the Community Edition differs from the commercial offerings; the cluster deployment and management process is not the same as with the commercial editions.
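For a sense of the workflow, deployment starts from the tanzu CLI. A quick sketch, assuming the CLI is already installed and credentials for vSphere are at hand; the workload cluster name is a placeholder:

```shell
# Launch the browser-based installer to build the management cluster on vSphere
tanzu management-cluster create --ui
# With the management cluster up, stamp out a workload cluster
tanzu cluster create my-workload-cluster --plan dev
```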
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers through the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO added support for hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use bare metal hosts, or take advantage of existing Windows servers that you already manage? Or perhaps you just wanted to try out Windows Containers without going through all the steps of setting up a Windows template to deploy a single machine?
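To hint at how simple bring-your-own-host turns out to be, the WMCO picks up instances listed in a ConfigMap named `windows-instances`. A minimal sketch; the address and username below are placeholders:

```shell
# Minimal BYOH sketch: WMCO configures any instance listed in this ConfigMap
# (address and username below are placeholders)
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  10.1.42.1: |-
    username=Administrator
EOF
```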
Using Kata Containers with OpenShift Container Platform
Introduction
Containerization ushered in a new way to run workloads securely and efficiently, both on-prem and in the cloud. By leveraging cgroups and namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner, while sharing the same kernel and machine hardware. While cgroups and namespaces are a powerful way of defining isolation between applications, faults have been found that allow processes to break out of their cgroups jail. Additional measures such as SELinux can help keep applications inside their container, but sometimes your application or workload needs more isolation than cgroups, namespaces, and SELinux can provide.
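As a preview of how a workload opts into that stronger isolation, a Kata pod just names a RuntimeClass. A minimal sketch, assuming the OpenShift sandboxed containers operator has installed the `kata` RuntimeClass:

```shell
# Minimal sketch: run a pod inside a lightweight VM via the "kata" RuntimeClass
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata    # kernel-isolated VM instead of a shared-kernel container
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
EOF
```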
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that, in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users do not oversubscribe and consume more storage than you actually have; this is especially true when the storage system you are using leverages "Thin Provisioning." How do you go about controlling this in OpenShift? Enter ClusterResourceQuota and project-level ResourceQuotas.
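To sketch what that control looks like, a ClusterResourceQuota can cap storage across every project owned by a given requester. A minimal sketch; the requester name and limits are examples:

```shell
# Minimal ClusterResourceQuota sketch: cap storage across all projects
# requested by "team-alpha" (name and limits are examples)
cat <<EOF | oc apply -f -
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: team-alpha-storage
spec:
  selector:
    annotations:
      openshift.io/requester: team-alpha   # matches projects created by this user
  quota:
    hard:
      requests.storage: 500Gi              # total PVC storage across matched projects
      persistentvolumeclaims: "20"         # total PVC count
EOF
```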
OpenShift FileIntegrity Scanning
Introduction
The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it watches a set of pre-configured locations and reports any file modifications that were not approved. The operator works in sync with MachineConfig: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are in place, so that the approved changes do not trigger an alert. The File Integrity Operator is based on the open source project AIDE (Advanced Intrusion Detection Environment).
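To round out the picture, scanning is enabled per set of nodes with a FileIntegrity resource. A minimal sketch, assuming the operator is installed in its default `openshift-file-integrity` namespace:

```shell
# Minimal FileIntegrity sketch: run AIDE scans on every worker node
cat <<EOF | oc apply -f -
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: worker-fileintegrity
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  config:
    gracePeriod: 900   # seconds to pause before starting integrity checks
EOF
# Per-node scan results land in FileIntegrityNodeStatus objects
oc get fileintegritynodestatuses -n openshift-file-integrity
```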