Below you will find pages that utilize the taxonomy term “openshift”
Post
Creating a Windows Template for use with OpenShift Windows Machine Config Operator
If you are looking to try out Windows Containers managed by Kubernetes, you are going to need at least one Windows Server to host the containers. You can follow the steps from OpenShift Windows Containers - Bring Your Own Host and manually add a Windows server to an OpenShift Cluster. You can also use the Windows Machine Config Operator (WMCO) to automatically scale Windows nodes up and down in your cluster.
Post
Using the Synology K8s CSI Driver with OpenShift
This blog post has been updated with additional details and was originally published on 03-14-2022.
Adding storage to an OpenShift cluster can greatly increase the types of workloads you can run, including OpenShift Virtualization and databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including Local Volumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere storage provider.
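As a rough sketch of the end state, the cluster gets a StorageClass backed by the Synology CSI driver and applications request volumes through ordinary PVCs. The provisioner string below follows the upstream synology-csi project, and the names, namespace, and size are placeholders.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi              # hypothetical StorageClass name
provisioner: csi.san.synology.com   # provisioner name used by the upstream synology-csi driver
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                    # hypothetical claim
  namespace: demo                   # hypothetical project
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: synology-iscsi
  resources:
    requests:
      storage: 10Gi
```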
Post
Explaining OpenShift Router Configurations
Introduction While working with OpenShift Routes recently, I came across a problem with an application deployment that was not working. OpenShift was returning an “Application is not available” page, even though the application pod was up and the service was properly configured and mapped. After some additional troubleshooting, we were able to trace the problem back to how the OpenShift router communicates with an application pod. Depending on your route type, OpenShift will use HTTP, HTTPS, or passthrough TCP to communicate with your application.
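To illustrate the route types, here is a minimal Route sketch (host, names, and port are placeholders): an edge or plain HTTP route is reached over HTTP, a reencrypt route over HTTPS, and a passthrough route has the raw TLS stream handed straight to the pod.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp                 # hypothetical application
  namespace: demo             # hypothetical project
spec:
  host: myapp.apps.example.com
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8443          # the Service port the router connects to
  tls:
    termination: reencrypt    # router re-encrypts, so the pod must serve HTTPS
```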
Post
Creating Custom Operator Hub Catalogs
Introduction By default, every new OpenShift cluster has a fully populated OperatorHub, filled with various Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. These Operators can be installed by the cluster administrator to expand the features and functions of a given cluster. While this is great in many cases, not all enterprises want these Operators made available for install. OperatorHub is fully configurable, and the options for working with the Operator catalog are the topic of this blog post.
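As a hedged sketch of the configuration involved, the cluster-scoped OperatorHub resource can disable the default catalogs, and a CatalogSource can point OLM at a custom index image; the catalog name and image URL below are placeholders.

```yaml
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true    # hide the default Red Hat and community catalogs
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-custom-catalog           # hypothetical catalog name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/custom-index:latest  # placeholder index image
  displayName: My Custom Catalog
```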
Post
Creating ExternalIPs in OpenShift with MetalLB
Introduction Since the 3.0 release, OpenShift has shipped with OpenShift Routes. A Route can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS with SNI, or TLS with SNI. However, not all applications are HTTP-based, and some use protocols other than TCP, such as UDP and even SCTP.
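A minimal MetalLB layer-2 sketch looks roughly like this (the address range is an assumption for a lab network): once the pool and advertisement exist, a Service of type LoadBalancer receives an external IP from the pool.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool                      # hypothetical pool name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.210     # placeholder range on the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool
```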
Post
Understanding OpenShift MachineConfigs and MachineConfigPools
Introduction OpenShift 4 is built upon Red Hat CoreOS (RHCOS), and RHCOS is managed differently than most traditional operating systems. Unlike other Kubernetes distributions, where you must manage the base operating system as well as the Kubernetes distribution itself, with OpenShift 4 the RHCOS operating system and the Kubernetes platform are tightly coupled. Management of RHCOS, including any system-level configuration, is handled through MachineConfigs and MachineConfigPools. These constructs allow you to manage system configuration and detect configuration drift on your control plane and worker nodes.
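For a feel of what that looks like, here is a minimal MachineConfig sketch (the file path and contents are placeholders) that writes a file onto every worker node; the role label determines which MachineConfigPool rolls it out.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file
  labels:
    machineconfiguration.openshift.io/role: worker   # picked up by the worker pool
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/example.conf                    # hypothetical file
          mode: 420                                  # 0644
          contents:
            source: data:,hello%20from%20a%20machineconfig
```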
Post
Using Citrix Netscaler with OpenShift
Introduction The OpenShift platform is a “batteries included” distribution of Kubernetes. It comes with EVERYTHING you need to run a Kubernetes platform, from a developer- and sysadmin-friendly UI to monitoring, alerting, platform configuration, and ingress networking. OpenShift was one of the first Kubernetes distributions to recognize how important it is to solve load balancing of incoming application requests, and it did so through the use of “Routes”. Upstream in Kubernetes, this need has been addressed through Ingress and, more recently, the Gateway API.
Post
Creating a Mutating Webhook in OpenShift
If you have ever used tools like Istio or OpenShift Service Mesh, you may have noticed that they have the ability to modify your Kubernetes deployments, automatically injecting “sidecars” into your application definitions. Or perhaps you have come across tools that add certificates to your deployment, or add special environment variables to your definitions. This magic is brought to you by Kubernetes admission controllers. There are multiple types of admission controllers, but today we will focus on just one of them: “Mutating Webhooks”.
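The wiring behind that magic is roughly a MutatingWebhookConfiguration like the sketch below (the service name, namespace, and path are hypothetical): the API server calls the webhook Service on every pod CREATE, and the webhook returns a patch that mutates the object before it is stored.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector
webhooks:
  - name: inject.sidecar.example.com      # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore
    clientConfig:
      service:
        name: sidecar-injector            # hypothetical Service running the webhook
        namespace: webhook-demo
        path: /mutate
      # caBundle omitted; it must contain the CA that signed the webhook's serving certificate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```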
Post
Recovering an OCP/OKD Cluster After a Long Time Powered Off
Introduction If you are like me, you have multiple OpenShift or OKD clusters in your home or work lab. Each of these clusters takes up a significant amount of resources, so you may shut them down to save power or compute capacity. Or perhaps you are running a cluster in one of the many supported cloud providers, and you power the machines down to save costs when you are not using them.
Post
NMState Operator and OpenShift Container Platform
Introduction OpenShift Container Platform and OpenShift Data Foundation can supply all of your data storage needs; however, sometimes you want to leverage an external storage array directly using storage protocols such as NFS or iSCSI. In many cases these storage networks are served from dedicated network segments or VLANs and use dedicated network ports or network cards to handle the traffic.
On traditional operating systems like RHEL, you would use tools such as nmcli and NetworkManager to configure settings such as the MTU or to create bonded connections, but on Red Hat CoreOS these tools are not directly available to you.
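With the NMState Operator installed, that kind of host networking is declared through NodeNetworkConfigurationPolicy objects. A hedged sketch, assuming a dedicated storage NIC named ens224 and a jumbo-frame storage network, might look like this:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-nic-mtu
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""    # apply to all worker nodes
  desiredState:
    interfaces:
      - name: ens224                      # hypothetical storage interface
        type: ethernet
        state: up
        mtu: 9000                         # jumbo frames for iSCSI/NFS traffic
        ipv4:
          enabled: true
          dhcp: true
```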
Post
OpenShift Windows Containers - Bring Your Own Host
OpenShift has supported Windows Containers via the Windows Machine Config Operator for the past year, starting with OCP 4.6. Initial Windows Container support required running your platform in Azure or AWS. With the release of 4.7, the WMCO also supported hosting machines in VMware. However, when deploying in a VMware environment you had to spend time configuring a base Windows image, using tools such as sysprep and VMware templates. What if you wanted to use bare metal hosts, or wanted to take advantage of existing Windows servers that you already manage?
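For Bring Your Own Host, the WMCO watches a ConfigMap describing the existing Windows machines it should configure. The sketch below follows the documented windows-instances ConfigMap; the address and user name are placeholders, and the exact format may differ between WMCO versions.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  # key: placeholder address of an existing Windows server
  # value: the account the WMCO uses to connect over SSH
  192.168.1.50: |-
    username=Administrator
```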
Post
Using Kata Containers with OpenShift Container Platform
Introduction Containerization ushered in a new way to run workloads securely and efficiently, both on-prem and in the cloud. By leveraging cgroups and namespaces in the Linux kernel, applications can run isolated from each other in a secure and controlled manner while sharing the same kernel and machine hardware. While cgroups and namespaces are a powerful way of defining isolation between applications, faults have been found that allow breaking out of that isolation.
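Kata Containers address this by giving each pod its own lightweight VM and kernel. Once Kata is enabled on the cluster (on OpenShift, via the sandboxed containers Operator), a workload opts in by naming the RuntimeClass; the sketch below assumes that RuntimeClass is called kata.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-demo
spec:
  runtimeClassName: kata      # run this pod under the Kata runtime instead of runc
  containers:
    - name: web
      image: registry.access.redhat.com/ubi9/httpd-24:latest
      ports:
        - containerPort: 8080
```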
Post
OpenShift Cluster Storage Management
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that, in the age of self-service, your consumers do not take more storage than their fair share. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you actually have. This is especially true when the storage system you are using leverages “thin provisioning”.
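The usual guard rail is a per-project ResourceQuota on storage. A minimal sketch, assuming a project named team-a and a StorageClass named thin-csi, could look like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: team-a                 # hypothetical project
spec:
  hard:
    requests.storage: 500Gi         # total storage the project may request
    persistentvolumeclaims: "10"    # maximum number of PVCs in the project
    thin-csi.storageclass.storage.k8s.io/requests.storage: 200Gi  # cap for a single StorageClass
```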
Post
OpenShift FileIntegrity Scanning
Introduction The File Integrity Operator is used to watch for changed files on any node within an OpenShift cluster. Once deployed and configured, it will watch a set of pre-configured locations and report if any files are modified in a way that was not approved. The Operator works in sync with MachineConfigs: if you update a file through a MachineConfig, the File Integrity Operator will update its database of signatures once the files are updated, ensuring the approved changes do not trigger an alert.
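Enabling it amounts to creating a FileIntegrity resource once the Operator is installed. A minimal sketch, assuming the default openshift-file-integrity namespace and a scan of the worker nodes:

```yaml
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: worker-fileintegrity
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # scan only the worker nodes
  config:
    gracePeriod: 900                     # seconds to pause before starting integrity checks
```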
Post
Kubectl and OC Command Output
Introduction After running an OpenShift or Kubernetes cluster for a little while, you will find that you need to create reports on specific data about the cluster itself. Project owners, container images in use, and project quotas are just some of the things you might be asked to report on. There are multiple ways to do this, such as writing your own application that queries the API or creating a shell script that wraps a bunch of CLI commands.
Post
Creating a multi-host OKD Cluster
Introduction In the last two posts, I have shown you how to get an OKD All-in-One cluster up and running. Since it was an “All-in-One” cluster, there was no redundancy in it, and there was no ability to scale out. OKD and Kubernetes work best in a multi-server deployment, creating redundancy and higher availability along with the ability to scale your applications horizontally on demand. This final blog post is going to outline the steps to build a multi-host cluster.
Post
OpenShift, Azure, and Ansible
Intro In my last post, I showed how to deploy an All-in-One OKD system. This is great for some initial learning, but keeping it up and running can get expensive over time. You can always shut it down and re-create it later, but that takes time, and you can end up making typos and errors if you aren’t careful. If you want to be able to create (and destroy) these All-in-One environments in a more automated way, read on.
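The automation in question is Ansible driving the Azure modules. The snippet below is not the post’s actual playbook, just a rough sketch of the idea using the azure.azcollection collection; the resource group, VM name, size, and image values are placeholders.

```yaml
- name: Create the OKD All-in-One VM
  hosts: localhost
  connection: local
  tasks:
    - name: Provision the VM in Azure
      azure.azcollection.azure_rm_virtualmachine:
        resource_group: okd-aio-rg        # hypothetical resource group
        name: okd-aio-node                # hypothetical VM name
        vm_size: Standard_D4s_v3
        admin_username: centos
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/centos/.ssh/authorized_keys
            key_data: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        image:
          publisher: OpenLogic            # placeholder CentOS image for OKD 3.11
          offer: CentOS
          sku: "7.7"
          version: latest
```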
Post
OpenShift on Azure - The Manual Way
Intro The other day I was reading an article on OpenShift All-in-One and thought it would be interesting to re-create it with OKD, the community version of OpenShift. We are going to create an All-in-One (AiO) deployment of OKD/OpenShift 3.11 on Azure.
This post is going to show you how to do a manual install of OKD on just one host. Why would you want to do this? It will get you a fully working instance of OpenShift and even give you cluster admin rights, so you can learn how to administer it.