Creating ExternalIPs in OpenShift with MetalLB
By Mark DeNeve
Introduction
Since the 3.0 release, OpenShift has shipped with OpenShift Routes, which can be thought of as a Layer 7 load balancer for TLS or HTTP applications in your cluster. This Layer 7 load balancer works great for web applications and services that use HTTP, HTTPS with SNI, or TLS with SNI. However, not all applications are HTTP-based, and some use protocols other than TCP, such as UDP and even SCTP. How do you make these applications available to consumers outside of your OpenShift cluster? You might try a NodePort service, which opens a port on every worker node and forwards that traffic on to the proper application. You can also manually configure ExternalIP and IP Failover to make an external IP available for your application in a highly available configuration; however, this is a time-consuming process.
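For reference, exposing an application via NodePort only requires setting the service type. This is a minimal sketch with hypothetical names (my-app); the chosen port is opened on every node in the cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app # hypothetical app label
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080 # must fall in the cluster's NodePort range (default 30000-32767)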
This post will discuss the use of MetalLB and the MetalLB Operator, which makes the configuration of ExternalIPs much easier to manage. MetalLB can be configured in one of two ways:
- Layer 2 Mode: This is how we will configure MetalLB in this post, and it is the simplest configuration. It uses blocks of IP addresses preassigned on the same subnet as your worker nodes.
- BGP Mode: This mode works with northbound routers to configure routes to your services. It allows you to use a different CIDR range for your external services, but it is much more complex to configure.
Network Setup
Since this post will be very network-centric, we need to discuss a little about the network setup in the lab. In the lab we have the following network configuration:
- HostSubnet: 172.16.25.0/24 - This is the network on which the Control Plane and Worker nodes exist. IPs are handed out via DHCP.
- ClusterNetwork: 10.128.0.0/14 - The IP address blocks for pods.
- MachineNetwork: 10.0.0.0/16 - The IP address blocks for machines.
- ServiceNetwork: 172.30.0.0/16 - The IP address block for services.
- ExternalIP: 172.16.25.200-220 - When using MetalLB in “layer 2 mode”, your ExternalIP address range must be on the SAME SUBNET, within the same IP CIDR, as your worker nodes’ IP addresses. The IP addresses listed here have been excluded from the DHCP scope to ensure we do not have IP address collisions.
NOTE: Caution must be taken to ensure that the HostSubnet, ClusterNetwork, MachineNetwork, ServiceNetwork, and ExternalIP ranges do not conflict with each other or other networks in your lab.
Prerequisites
- OpenShift Cluster 4.10 or later
- Cluster Admin privileges on an OpenShift Cluster
- The oc command-line client
- An iPerf client installed on a test machine
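If you do not already have an iPerf client on your test machine, it is typically available from your operating system's package manager; on Fedora, for example:

$ sudo dnf install -y iperf3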
Operator Install
The MetalLB Operator is the easiest way to get MetalLB installed in your cluster. We will navigate to OperatorHub in the OpenShift UI and follow the steps below to install the operator:
- Log in to the OpenShift Console
- Select Operators -> OperatorHub
- Search for the “MetalLB” operator
- Select MetalLB Operator
- Click Install
- Create a new Project called “metallb-system”
- Accept all defaults, click Install
Once the Operator is installed, select “View Operator” and ensure that the Status shows Succeeded before proceeding.
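If you prefer to script the installation rather than click through the console, the same result can be achieved with an OperatorGroup and a Subscription (assuming the metallb-system project already exists). This is only a sketch; the channel and catalog source names here are assumptions that can vary by cluster version, so verify them against what OperatorHub shows:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  name: metallb-operator
  channel: stable # assumption: confirm the current channel in OperatorHub
  source: redhat-operators # assumption: catalog source on a connected cluster
  sourceNamespace: openshift-marketplace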
Configure MetalLB
With the MetalLB Operator installed, we will create an instance of a MetalLB controller and then configure an address pool for it to use. The controller handles the deployment of the MetalLB components.
Create an instance of MetalLB
Start by creating a file called metallb-controller.yaml with the following contents:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
With the YAML file created, we will apply it to create our MetalLB instance:
$ oc login
$ oc project metallb-system
$ oc create -f metallb-controller.yaml
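Before moving on, you can verify that the components have been deployed. MetalLB runs a controller deployment plus a speaker pod on each node, so you should see those pods running (exact pod names and counts will vary by cluster):

$ oc get pods -n metallb-system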
We now need to give the MetalLB controller a group of IPs to work with. For this, we will create an address pool.
Create an address pool
The MetalLB Operator supports two main types of AddressPools at this time.
- Layer2 - One worker node is assigned an additional IP address from the MetalLB address pool and announces it. All traffic for a service IP address is routed through that node; if the node becomes unavailable, the IP automatically fails over to another node.
- BGP - MetalLB advertises the load balancer IP address for a service to each BGP peer. BGP peers are commonly network routers that are configured to use the BGP protocol. This is a more advanced and complex setup and requires northbound routers to allow for BGP peering.
We will be using Layer2, which is the simplest way to deploy MetalLB. It does not require any special configuration on the upstream router, but it does require reserving IPs from your machine network’s existing subnet. As discussed in the Network Setup section, we will be using IPs from the range 172.16.25.200-172.16.25.220. Create a file called addresspool.yml with the following contents:
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  namespace: metallb-system
  name: l2-addresspool
spec:
  protocol: layer2
  addresses:
    - 172.16.25.200-172.16.25.220
  autoAssign: true
NOTE: It is possible to add multiple IP address ranges to a single pool by listing multiple - <IP start>-<IP end> entries under addresses, as shown in the example below.
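For example, an addresses section with a second, purely hypothetical range (still within the same subnet as the worker nodes) might look like this:

  addresses:
    - 172.16.25.200-172.16.25.220
    - 172.16.25.230-172.16.25.240 # hypothetical second range; must also be excluded from DHCP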
We can now apply this address pool to our cluster:
$ oc create -f addresspool.yml
We can validate the configuration using the oc describe command:
$ oc describe addressPool -n metallb-system
Name:         l2-addresspool
Namespace:    metallb-system
API Version:  metallb.io/v1beta1
Kind:         AddressPool
Spec:
  Addresses:
    172.16.25.200-172.16.25.220
  Auto Assign:  true
  Protocol:     layer2
Events:         <none>
SUCCESS! We have now configured MetalLB with a Layer 2 address pool to work with, and we can now create Kubernetes services of type “LoadBalancer” in our cluster.
Deploy a test Application
We will deploy a simple application and then create a new service of type: LoadBalancer to test it out. Typically we would deploy something like a simple HTTP web application, but these types of applications can be hosted through the OpenShift Router, so let’s host a simple application that does not work through the traditional OpenShift routes. We will use iPerf to test our new MetalLB configuration. You will need to have a local copy of the iPerf client to run this test.
We will create a very simple iperf3 container image using the following Dockerfile:
FROM quay.io/fedora/fedora:latest
RUN dnf install -y iperf3 && dnf clean all
ENTRYPOINT trap : TERM INT; iperf3 -s
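If you build the image yourself, a typical build-and-push flow with podman might look like this (a sketch; substitute an image repository that your cluster can pull from for the hypothetical quay.io/<your-org> path):

$ podman build -t quay.io/<your-org>/iperf:v4 .
$ podman push quay.io/<your-org>/iperf:v4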
Alternatively, if you would rather not build your own image, you can use one I have previously published to Quay from this same Dockerfile: quay.io/xphyr/iperf:v4. Next, create a file called iperf-pod.yml with the following pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: iperf-server
  labels:
    app: iperf
spec:
  containers:
    - name: server
      image: quay.io/xphyr/iperf:v4 # <---------- REPLACE THIS if you created your own image
We can now deploy the test iPerf application into the cluster. We will create a new project called “iperf” and deploy the iPerf pod into it:
$ oc new-project iperf
$ oc create -f iperf-pod.yml
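Confirm the pod is running before exposing it; the output should look something like this (your AGE will differ):

$ oc get pods
NAME           READY   STATUS    RESTARTS   AGE
iperf-server   1/1     Running   0          30s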
Create a LoadBalancer SVC
It is time to put our new MetalLB LoadBalancer to use. To do this, we need to create a new service of type “LoadBalancer” and point it to our iPerf pod. Create a file called iperf-svc.yml and put the following configuration in that file.
apiVersion: v1
kind: Service
metadata:
  name: iperf-lb
spec:
  selector:
    app: iperf
  ports:
    - port: 5201
      targetPort: 5201
      protocol: TCP
  type: LoadBalancer
With our file created, we can now apply this service to our cluster.
$ oc create -f iperf-svc.yml
service/iperf-lb created
With the service created, we need to see what EXTERNAL-IP address was assigned to the service.
$ oc get svc
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
iperf-lb   LoadBalancer   172.30.246.14   172.16.25.201   5201:32291/TCP   37s
In the output above, the IP assigned externally is 172.16.25.201. With this knowledge, we can test our iPerf service by connecting directly to it via the iperf3 client.
$ iperf3 -c 172.16.25.201 -p 5201
Connecting to host 172.16.25.201, port 5201
[  5] local 172.16.25.22 port 54748 connected to 172.16.25.201 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   292 MBytes  2.45 Gbits/sec  227   1.31 MBytes
[  5]   1.00-2.00   sec   351 MBytes  2.95 Gbits/sec   16   1.12 MBytes
[  5]   2.00-3.00   sec   348 MBytes  2.92 Gbits/sec    0   1.32 MBytes
[  5]   3.00-4.00   sec   352 MBytes  2.96 Gbits/sec   18   1.12 MBytes
[  5]   4.00-5.00   sec   259 MBytes  2.17 Gbits/sec    0   1.26 MBytes
[  5]   5.00-6.00   sec   340 MBytes  2.85 Gbits/sec   35   1.01 MBytes
[  5]   6.00-7.00   sec   276 MBytes  2.32 Gbits/sec    0   1.18 MBytes
[  5]   7.00-8.00   sec   316 MBytes  2.65 Gbits/sec   10    981 KBytes
[  5]   8.00-9.00   sec   219 MBytes  1.83 Gbits/sec    0   1.10 MBytes
[  5]   9.00-10.00  sec   226 MBytes  1.90 Gbits/sec    0   1.23 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.91 GBytes  2.50 Gbits/sec  306             sender
[  5]   0.00-10.00  sec  2.91 GBytes  2.49 Gbits/sec                  receiver
SUCCESS! We have connected to the iPerf service and run a successful test.
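Note that the iperf-lb Service above only forwards TCP. Since non-TCP protocols were part of the motivation for using MetalLB, here is a sketch of a companion UDP Service; by default MetalLB will assign it its own ExternalIP. Keep in mind that iperf3's UDP test mode (iperf3 -c <ip> -u) still uses a TCP control connection on port 5201, so a full UDP test would require sharing one IP between the TCP and UDP Services (MetalLB supports this via its allow-shared-ip annotation), which is beyond the scope of this post.

apiVersion: v1
kind: Service
metadata:
  name: iperf-lb-udp # hypothetical companion service for UDP traffic
spec:
  selector:
    app: iperf
  ports:
    - port: 5201
      targetPort: 5201
      protocol: UDP
  type: LoadBalancer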
Caveats
As with any application hosted in a Kubernetes cluster, the application must be able to handle being placed behind a load balancer. If your application cannot handle connections statelessly, you should not run multiple instances of it behind the load balancer. Also keep in mind that in Layer 2 mode, when the worker node announcing the ExternalIP is rebooted, there will be a brief interruption in network connectivity to your application while the ExternalIP moves to another node.
Conclusion
OpenShift has always provided a simple way to host HTTP and HTTPS-based applications with the OpenShift router. It has also always been possible to host TCP applications through mechanisms like NodePorts or manual configuration of ExternalIPs. The MetalLB Operator adds a new, easier way to manage and automatically assign ExternalIPs to applications in your OpenShift bare-metal and on-prem clusters.