OpenShift on Azure - The Manual Way
By Mark DeNeve
Intro
The other day I was reading an article on OpenShift All-in-One and thought it would be interesting to re-create it with OKD, the community version of OpenShift. We are going to create an All-in-One (AiO) install of OKD/OpenShift 3.11 on Azure.
This post is going to show you how to do a manual install of OKD on just one host. Why would you want to do this? It will get you a fully working instance of OpenShift, complete with cluster admin rights, so you can learn how to administer it. One thing you don’t want to do is run a production load on this! While this is a great way to learn OpenShift, and even Kubernetes, it is not the way to run a production application.
If you are looking for a fully automated Azure OpenShift deployment, Microsoft supplies a few ways to do this. Take a look at the Deploy OKD in Azure docs if you just want to get OKD up and running in Azure.
While we are going to be using Azure for this install, you can just as easily use any x86 server that has 8 GB of RAM available.
Before you begin
The process below relies on Git, Ansible, and SSH, including the use of SSH keys. You don’t need to know everything about Git or Ansible; I will give you the commands you need for those. You WILL, however, need an SSH client and an SSH key. Try one of these links to help get you set up with SSH.
If you don’t have an Azure account you can set up a free one. Microsoft is nice enough to give you $200 in free credits for 30 days. That is more than enough to walk through this tutorial, but don’t use them all up… future posts are going to automate the whole process I outline below.
Building your first Azure Host
Something to note before we move forward. Azure charges money for running these VMs. Please don’t come to me if you forget to shut down the VM when you are done with it.
All-in-One Host
Log into the Azure Portal, select “Virtual Machines,” and then “+Add”. Most of the settings can stay at their defaults, but there are a few things we need to specify.
- All-in-One hosts need ~8 GB of RAM to get fully running, so we will use a Standard D2v3 host for the All-in-One.
- Give your VM a name, for example “okdaio”.
- For “Image” be sure to select “CentOS 7.5” (this install only works on CentOS).
- Set the username to “okdadmin” (this will be used later) and paste your SSH public key into the proper field.
- Select “Allow selected ports” and choose “SSH” from the dropdown.
and then click “Review + create”. Compare against the example image to validate you got the settings right. If everything looks good, click “Create”.
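If you prefer the command line to the portal, the Azure CLI can create an equivalent VM. This is a sketch, not a drop-in replacement: the okd-rg resource group name, the region, and the key path are assumptions you should adjust to your own environment. (On older Azure CLI releases the key flag is spelled --ssh-key-value.)
# Create a resource group to hold everything, then the AiO VM itself
az group create --name okd-rg --location eastus
az vm create \
  --resource-group okd-rg \
  --name okdaio \
  --image OpenLogic:CentOS:7.5:latest \
  --size Standard_D2_v3 \
  --admin-username okdadmin \
  --ssh-key-values ~/.ssh/id_rsa.pub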
Jump Box for install
We are also going to need a build host to run the installer from. We can do this on a much smaller host. In a future post we will see how we can eliminate this extra host with a Dockerfile, but for now, using the same process as above for creating our AiO host, create another machine (a CLI sketch follows the list). This time:
- Give it a different name, for example “jumphost”
- Set the size to “B2s”
- Still use “CentOS 7.5” for your OS
- Keep the username the same, “okdadmin”
- Be sure to enable the SSH port
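The CLI equivalent, mirroring the earlier sketch (same assumed okd-rg resource group):
az vm create \
  --resource-group okd-rg \
  --name jumphost \
  --image OpenLogic:CentOS:7.5:latest \
  --size Standard_B2s \
  --admin-username okdadmin \
  --ssh-key-values ~/.ssh/id_rsa.pub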
When both hosts are built, we need to get the IP addresses for both machines. Select “Virtual Machines” and record the “Public IP address” for each VM. Now we can move on to the next section.
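If you used the CLI, you can fetch the public IPs without clicking through the portal:
az vm list-ip-addresses --resource-group okd-rg --name okdaio \
  --query "[].virtualMachine.network.publicIpAddresses[].ipAddress" --output tsv
az vm list-ip-addresses --resource-group okd-rg --name jumphost \
  --query "[].virtualMachine.network.publicIpAddresses[].ipAddress" --output tsv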
Enable additional Network Ports
The OKD web UI and API run on port 8443, and before we start the install we need to open up a few additional ports (a CLI alternative is sketched after the list below).
- Select the “okdaio” server and then select “Networking”
- Click “Add inbound port rule”
- Set the Destination port to 8443
- Select TCP for the protocol
- Give the rule a name such as “okd_api”
- Click Add
- Repeat the steps above for the following additional ports:
- 443 - call it “okd_router_https”
- 80 - call it “okd_router_http”
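The same rules can be added with az vm open-port; each rule needs a unique priority within the network security group (okd-rg is the assumed resource group from the earlier sketches):
az vm open-port --resource-group okd-rg --name okdaio --port 8443 --priority 1010
az vm open-port --resource-group okd-rg --name okdaio --port 443 --priority 1020
az vm open-port --resource-group okd-rg --name okdaio --port 80 --priority 1030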
Host(s) Setup
So now we have two separate hosts ready and waiting for something to do. Let’s SSH into the jump box first. Using the IP you gathered when building the host, connect with the following command:
ssh okdadmin@<jumpbox-ip-address>
This will use the SSH key that you added when building these VMs. You won’t be prompted for a password, but rest assured it is still secure. Once you are in, we need to do a few things: update the base software, then install some prerequisites as well as the OpenShift installer scripts. You can copy/paste the steps below.
Jump Box
# Update the base OS packages
sudo yum -y update
# Add the EPEL repository (provides Ansible and pyOpenSSL), then leave it disabled by default
sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
# Install the prerequisites, explicitly enabling EPEL just for this transaction
sudo yum -y --enablerepo=epel install wget git ansible python-pip pyOpenSSL
# Fetch the OpenShift installer playbooks and switch to the 3.11 release branch
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.11
Congrats, you now have a jump host that is ready to deploy OKD.
OKDAIO Host
All we need to do now is prep the AiO host. From your local machine, SSH to the AiO host:
ssh okdadmin@<aio-ip-address>
and run the following commands
# Same base prep as the jump box: update and install the prerequisites
sudo yum -y update
sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
sudo yum -y --enablerepo=epel install wget git ansible python-pip pyOpenSSL
# Hand control of eth0 over to NetworkManager; the OKD installer requires it
sudo sed -i -e "s/^NM_CONTROLLED=no/NM_CONTROLLED=true/" /etc/sysconfig/network-scripts/ifcfg-eth0
At this point, you need to reboot the AiO server.
sudo shutdown -r now
We need the reboot in order to change how NetworkManager interacts with the network interface. OpenShift/OKD requires NetworkManager to be active for the Ansible playbooks we will be running to work. Wait a few minutes and then try logging back into the okdaio server to ensure it is healthy.
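If you want to confirm the change took effect after the reboot, both checks below use standard CentOS 7 tooling; if NetworkManager shows inactive here, the prerequisites playbook we run later should enable it:
systemctl is-active NetworkManager   # should print "active"
nmcli device status                  # eth0 should show as connected, not unmanaged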
The last step is to ensure you can connect to the okdaio server from your jump box with an SSH key. You can either copy your private/public key pair to your jump box, or, if you are like me and want to be a bit more careful with your private key, create a NEW key on your jump box and copy the public half to your okdaio server, adding it to the ~okdadmin/.ssh/authorized_keys file (one way to do this is sketched below). From your jump box, test that SSH key authentication is working by running
ssh okdadmin@<aio-ip-address>
If you get in successfully you are ready to deploy OpenShift/OKD.
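If you take the new-key route, here is a sketch. Since password authentication is typically disabled on Azure Linux VMs, the public key is appended from your workstation, which already has key access:
# On the jump box: generate a fresh key pair, then print the public half and copy it
ssh-keygen -t rsa -b 4096
cat ~/.ssh/id_rsa.pub
# From your workstation: append the jump box public key to the AiO host
ssh okdadmin@<aio-ip-address> "echo '<jump box public key>' >> ~/.ssh/authorized_keys"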
Installing OKD
We are finally ready to do the install. OKD leverages Ansible and an inventory file that defines how to do the install. SSH to the jump host and change into the openshift-ansible directory:
cd openshift-ansible
A Word about Name Resolution: OKD requires some working DNS names. If you have a DNS server, feel free to create your own entries for the hostnames in the inventory, but if you don’t, we can use either xip.io or nip.io. They work the same way.
Remember that IP address you recorded for your AiO server above? When you create the inventory file below, use that IP address to build a hostname of the form “<IP address>.xip.io”. For example, if the All-in-One host from Azure has an IP address of 1.2.3.4, your hostname would be “1.2.3.4.xip.io”. Want to use nip.io? Just replace xip.io in the example above with nip.io. Either will work, but be sure to be consistent.
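These wildcard DNS services simply echo back the IP address embedded in the hostname as an A record, which you can verify with dig (dig ships in the bind-utils package on CentOS):
dig +short 1.2.3.4.xip.io            # returns 1.2.3.4
dig +short anything.1.2.3.4.nip.io   # any prefix works the same way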
To get you going, we are going to create an inventory file called inventory-aio inside the openshift-ansible directory and put the following into that file
###########################################################################
### OpenShift Hosts
###########################################################################
[OSEv3:children]
nfs
masters
etcd
nodes
[OSEv3:vars]
ansible_ssh_user=okdadmin
ansible_become=true
openshift_deployment_type=origin
openshift_disable_check=memory_availability,disk_availability
openshift_master_cluster_public_hostname=<xip.io hostname goes here>
openshift_master_default_subdomain=apps.<xip.io hostname goes here>
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
[nfs]
<xip.io hostname goes here>
[masters]
<xip.io hostname goes here>
[etcd]
<xip.io hostname goes here>
[nodes]
## All-In-One with Docker
<xip.io hostname goes here> openshift_node_group_name='node-config-all-in-one' openshift_node_problem_detector_install=true
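Before kicking off the install, it is worth a quick connectivity test to confirm Ansible can reach the host through the inventory you just wrote; a green “pong” response means SSH and Python on the remote host are working:
ansible -i inventory-aio nodes -m ping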
Now that you have the inventory file and a working SSH key, the fun really starts. Let’s kick off our first playbook. From inside the openshift-ansible directory, run
ansible-playbook -i inventory-aio playbooks/prerequisites.yml
and watch for a successful run. If you get any errors, don’t worry: you can re-run the same Ansible command multiple times without issue. This step typically takes under 2 minutes.
Once the prerequisites.yml playbook completes, you can do your install. Just run
ansible-playbook -i inventory-aio playbooks/deploy_cluster.yml
and wait. This install will take a little while; YMMV, but it usually runs about 30 minutes. Just like with the prerequisites playbook, if you run into any issues, just re-run the deploy_cluster.yml playbook command.
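Before moving on, you can sanity-check the cluster from the AiO host itself. A quick sketch; this assumes the installer left root’s system:admin kubeconfig in place on the master, which it does by default:
ssh okdadmin@<IP address of aio host>
sudo oc get nodes   # the single node should report a STATUS of Ready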
Add a password
If the Ansible playbooks completed successfully, the next step is to set up a password for the UI/API. Keep in mind that this OpenShift instance is live on the Internet… be sure you select a good password to ensure someone doesn’t take advantage of your very own PaaS. We need to log into the “okdaio” server, so SSH to that host and then run the htpasswd command to set a username and password.
ssh okdadmin@<IP address of aio host>
# Create the "admin" user; you will be prompted for a password
sudo htpasswd /etc/origin/master/htpasswd admin
# Grant the new user full cluster admin rights; run via sudo, since root holds the system:admin credentials
sudo oc adm policy add-cluster-role-to-user cluster-admin admin
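You can verify the new account works before opening a browser. oc login will warn about the untrusted certificate, which is expected here:
oc login https://<xip.io hostname>:8443 -u admin   # enter the password you just set
oc whoami                                          # should print "admin"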
IT’S ALIVE
At this point, if all the previous steps went well, you should be able to log into your very own one-node OpenShift cluster. Open a web browser and browse to “https://<xip.io hostname>:8443”. You will be prompted to accept an untrusted certificate; this is because the install uses a self-signed certificate by default. Possibly in the future, we will go through replacing the cert with a proper one.
Look around, deploy an app… it is all there and all working. Keep in mind it is also all costing you money, so when you are done playing, log into the Azure console and select “Stop” for each host; this powers the VM down but leaves it in place. If you power it back on, you will get a different IP address… which unfortunately will break the cluster configuration. If you plan to save money by powering your cluster on and off, I suggest using a real DNS name for the cluster that you can update as you power it on and off.
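If you built the VMs with the CLI, note that az vm stop powers the VM off but keeps its compute allocated (and billed); az vm deallocate is the equivalent of the portal’s “Stop” and actually stops the compute charges:
az vm deallocate --resource-group okd-rg --name okdaio
az vm deallocate --resource-group okd-rg --name jumphost
az vm start --resource-group okd-rg --name okdaio   # power it back on later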
What’s Next
So that’s it for now. You are the proud owner of a one-node OKD “cluster”. You built it all manually and hopefully have some idea of the parts that make it up. In my next post, I will share a much easier and more automated way to deploy an All-in-One OKD cluster using Docker and Ansible for all the steps, including Azure provisioning! Send me feedback via Twitter: @markdeneve.