OpenShift Cluster Storage Management
By Mark DeNeve
When it comes to persistent storage in your OpenShift clusters, there is usually only so much of it to go around. As an OpenShift cluster admin, you want to ensure that in the age of self-service, your consumers do not take more than their fair share of storage. More importantly, you want to ensure that your users don’t oversubscribe and consume more storage than you actually have. This is especially true when the storage system you are using leverages “Thin Provisioning.” How do you go about controlling this in OpenShift? Enter the ClusterResourceQuota and Project-level ResourceQuotas.
A ClusterResourceQuota object allows quotas to be applied across multiple projects in OpenShift. It aggregates the resources used in all projects selected by the quota and can limit consumption across all of the selected projects. To make this an effective solution for managing storage across your entire cluster, every project needs an annotation that makes it part of the ClusterResourceQuota. By utilizing a default Project template, we can ensure that all new Projects are created with this annotation.
NOTE: Selecting more than 100 projects under a single multi-project quota can have detrimental effects on API server responsiveness in those projects!
Prerequisites
This blog post makes use of an existing OpenShift cluster that has been configured with multiple storage classes. The commands we will be using require cluster-admin permissions. You will also need the oc command available to run the commands in this blog.
Identifying Storage Classes
Before we begin, we need to identify the types of storage available within your cluster. This can be done by running oc get storageclass:
$ oc get storageclass
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate           true                   8m15s
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate           false                  8m15s
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate           true                   8m15s
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate           false                  6d23h
thin (default)                kubernetes.io/vsphere-volume            Delete          Immediate           false                  27d
As you can see in our example above, there are multiple storage classes available in this cluster including “ocs-storagecluster-cephfs”, “ocs-storagecluster-ceph-rbd” and “thin”. We will see how these different storage classes come into play with our ClusterResourceQuota later in this post.
Note: The CSI driver that handles your storage provisioning must support quotas and quota enforcement for this to work properly. Check with your CSI provider to validate that storage quotas are supported.
Creating a ClusterResourceQuota
To begin, we will create a ClusterResourceQuota to manage the maximum amount of storage that can be allocated across the cluster. In the example below, we have identified “40Gi” as the maximum amount of storage we want allocated across all Projects carrying the “clusterstoragequota: enabled” annotation. The right requests.storage value for your cluster will vary based on your specific storage constraints. Create a file called clusterresourcequota-storage.yaml and place the following data in it, being sure to update the requests.storage field for your needs.
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: totalclusterstorage
spec:
  quota:
    hard:
      requests.storage: "40Gi"
  selector:
    annotations:
      clusterstoragequota: enabled
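The selector is not limited to annotations: a ClusterResourceQuota also accepts a standard Kubernetes label selector if you would rather opt projects in with labels. A minimal sketch of that alternative selector stanza (the storage-tier label name is purely illustrative):

  selector:
    labels:
      matchLabels:
        storage-tier: standard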
Apply the ClusterResourceQuota to your cluster by running the following command:
$ oc create -f clusterresourcequota-storage.yaml
clusterresourcequota.quota.openshift.io/totalclusterstorage created
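Because no projects carry the annotation yet, the quota does not select anything at this point, but you can confirm the object exists:

$ oc get clusterresourcequota totalclusterstorage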
Create and Annotate Projects
We have now created a ClusterResourceQuota, but it does not yet select any projects to include as part of the quota. We will create two new projects and manually add an annotation to each for our testing:
$ oc new-project myproject
Now using project "myproject" on server "https://api.ocp47.example.com:6443".
$ oc new-project myproject2
Now using project "myproject2" on server "https://api.ocp47.example.com:6443".
$ oc annotate namespace myproject clusterstoragequota=enabled
namespace/myproject annotated
$ oc annotate namespace myproject2 clusterstoragequota=enabled
namespace/myproject2 annotated
By adding the annotation clusterstoragequota=enabled to both projects, they are now subject to the control of the ClusterResourceQuota.
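If you ever need to double-check that the annotation landed on a project, a quick jsonpath query does the trick; this should print enabled:

$ oc get namespace myproject -o jsonpath='{.metadata.annotations.clusterstoragequota}'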
Create a Namespace quota
The creation of a ClusterResourceQuota allows us to control the overall use of storage in the cluster, but it does not keep one Project from consuming all of that storage. To control storage at the Project level, we will create a ResourceQuota. Leveraging the test projects we created in the last step, we will create a separate quota in each test project. Create two project quotas, one on myproject and a different quota on myproject2:
$ oc create quota myprojectstoragequota --hard=requests.storage=30Gi -n myproject
resourcequota/myprojectstoragequota created
$ oc create quota myproject2storagequota --hard=requests.storage=25Gi -n myproject2
resourcequota/myproject2storagequota created
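If you prefer managing quotas declaratively, the equivalent ResourceQuota manifest for myproject would look roughly like this sketch:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: myprojectstoragequota
  namespace: myproject
spec:
  hard:
    requests.storage: 30Gi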
Checking quotas
Now that we have applied our ClusterResourceQuota as well as a quota to the individual Projects, let’s take a look at how these quotas are reflected in your cluster. Using the oc command, ensure you are in the “myproject” project we created earlier.
Now, review the ClusterResourceQuota that is assigned:
$ oc project myproject
$ oc describe AppliedClusterResourceQuota
Name: totalclusterstorage
Created: 2 days ago
Labels: <none>
Annotations: <none>
Namespace Selector: ["myproject" "myproject2"]
Label Selector:
AnnotationSelector: clusterstoragequota=enabled
Resource          Used  Hard
--------          ----  ----
requests.storage  0Gi   40Gi
Note that all of the projects that are summed up in the ClusterResourceQuota are displayed.
Check the project quota:
$ oc describe quota
Name: myprojectstoragequota
Namespace: myproject
Resource          Used  Hard
--------          ----  ----
requests.storage  0Gi   30Gi
We have validated that both the ClusterResourceQuota and the ResourceQuota are applied to our cluster. We will now see how they affect storage creation.
Exercise the quotas
With our storage quotas in place at both the cluster level and the project level, we will test them to see how they work together to control storage use. Start by creating a PersistentVolumeClaim that is smaller than the quota applied at the project level. Create a file called storageclaim1.yaml with the following contents, ensuring that you update <storageClassName> with a storage class present in your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storageclaim1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: <storageClassName>
Create the PVC in your project by running oc create -f storageclaim1.yaml, then see how the PVC you just created is reflected in both your Project and Cluster quotas:
$ oc create -f storageclaim1.yaml
persistentvolumeclaim/storageclaim1 created
$ oc describe AppliedClusterResourceQuota
Name: totalclusterstorage
Created: 2 days ago
Labels: <none>
Annotations: <none>
Namespace Selector: ["myproject" "myproject2"]
Label Selector:
AnnotationSelector: clusterstoragequota=enabled
Resource          Used  Hard
--------          ----  ----
requests.storage  5Gi   40Gi
$ oc describe quota
Name: myprojectstoragequota
Namespace: myproject
Resource          Used  Hard
--------          ----  ----
requests.storage  5Gi   30Gi
Create a second PVC file called storageclaim2.yaml, and change the storage request to 20Gi. We will apply this to our second project, myproject2, and then see how the ClusterResourceQuota reflects the change.
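For reference, storageclaim2.yaml differs from the first claim only in its name and the larger request; a sketch, with the same storage class placeholder as before:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storageclaim2
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: <storageClassName>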
$ oc create -f storageclaim2.yaml -n myproject2
persistentvolumeclaim/storageclaim2 created
$ oc describe AppliedClusterResourceQuota
Name: totalclusterstorage
Created: 2 days ago
Labels: <none>
Annotations: <none>
Namespace Selector: ["myproject" "myproject2"]
Label Selector:
AnnotationSelector: clusterstoragequota=enabled
Resource          Used  Hard
--------          ----  ----
requests.storage  25Gi  40Gi
Note that the used storage for the cluster has increased by 20Gi. To validate that the ClusterResourceQuota is enforcing our quota across multiple projects, create one more PVC file called storageclaim3.yaml (identical except for its name) with a storage request of 20Gi, and apply it to myproject. This request is within the project-level quota we have remaining, but it will exceed the maximum amount of cluster storage we want to allocate.
$ oc create -f storageclaim3.yaml
Error from server (Forbidden): error when creating "storageclaim3.yaml": persistentvolumeclaims "storageclaim3" is forbidden: exceeded quota: totalclusterstorage, requested: requests.storage=20Gi, used: requests.storage=25Gi, limited: requests.storage=40Gi
Success! We have ensured that the total storage allocated across multiple projects cannot exceed our ClusterResourceQuota limit. The only issue at this point is that we need to add an annotation to each new project as it is created. This is where Project Templates come in to help us manage this step automatically.
Creating a Project Template that includes storage annotation
Now that we have seen how you can manually apply annotations to projects, and how those annotations tie into our ClusterResourceQuota, we will make sure that all future projects are created with the annotation that adds them to the quota.
We will start by creating a default project template:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
Edit the template.yaml file we just created, updating the name of the template and adding our “clusterstoragequota: enabled” annotation to the objects.metadata.annotations section. Leave the rest of the generated template (its parameters and role bindings) intact; the relevant portion should look like this:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: <template_name>
objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      clusterstoragequota: enabled
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
Now apply the newly created template to your cluster:
$ oc create -f template.yaml -n openshift-config
template.template.openshift.io/<template_name> created
Finally, edit the cluster config to start using the new template.
$ oc edit project.config.openshift.io/cluster
In the “spec” section, add the following, ensuring you update <template_name> with the name you selected when you created your template:
spec:
  projectRequestTemplate:
    name: <template_name>
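To confirm the change took effect, you can read the value back; this should print the template name you set:

$ oc get project.config.openshift.io/cluster -o jsonpath='{.spec.projectRequestTemplate.name}'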
To validate that the project template properly applies our annotation, create a new project, myproject3, and confirm that it is part of the ClusterResourceQuota:
$ oc new-project myproject3
$ oc describe AppliedClusterResourceQuota
Name: totalclusterstorage
Created: 2 days ago
Labels: <none>
Annotations: <none>
Namespace Selector: ["myproject" "myproject2" "myproject3"]
Label Selector:
AnnotationSelector: clusterstoragequota=enabled
Resource          Used  Hard
--------          ----  ----
requests.storage  25Gi  40Gi
Note that the newly created myproject3 is automatically added to the ClusterResourceQuota.
Cluster Quotas with Multiple Storage Classes
The ClusterResourceQuota and project quotas that we have created thus far aggregate all cluster storage classes together. What if you want to set quotas on a per-class basis? This can be done by calling out the storage classes on which you want to set quotas. Let’s start with the ClusterResourceQuota we have been using thus far and add some targeted classes, adding individual lines to the hard quota in the form <storageClassName>.storageclass.storage.k8s.io/requests.storage: <value>:
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: totalclusterstorage
spec:
  quota:
    hard:
      requests.storage: "100Gi"
      thin.storageclass.storage.k8s.io/requests.storage: "40Gi"
      ocs-storagecluster-cephfs.storageclass.storage.k8s.io/requests.storage: "80Gi"
      ocs-storagecluster-ceph-rbd.storageclass.storage.k8s.io/requests.storage: "0Gi"
  selector:
    annotations:
      clusterstoragequota: enabled
The YAML above creates a cluster-level storage quota that does the following:
- Allow no more than 100Gi of storage to be assigned in your cluster (perhaps your backup service has a max capacity of 100Gi)
- Allow no more than 40Gi of storage to be assigned in your cluster from the “thin” storage class
- Allow no more than 80Gi of storage to be assigned in your cluster from the “ocs-storagecluster-cephfs” class
- Do not allow provisioning of any storage from the “ocs-storagecluster-ceph-rbd” storage class
Validate this by getting the AppliedClusterResourceQuota (the Used column will reflect whatever claims currently exist in the selected projects):
$ oc describe AppliedClusterResourceQuota
Name: totalclusterstorage
Created: 21 seconds ago
Labels: <none>
Annotations: <none>
Namespace Selector: ["myproject" "myproject3" "myproject2"]
Label Selector:
AnnotationSelector: clusterstoragequota=enabled
Resource                                                                   Used  Hard
--------                                                                   ----  ----
ocs-storagecluster-ceph-rbd.storageclass.storage.k8s.io/requests.storage  0     0Gi
ocs-storagecluster-cephfs.storageclass.storage.k8s.io/requests.storage    0     80Gi
requests.storage                                                           14Gi  100Gi
thin.storageclass.storage.k8s.io/requests.storage                         10Gi  40Gi
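The same per-storage-class resource names also work in a project-scoped ResourceQuota, if you want to cap a single project's use of one class. A sketch using the quota command from earlier (the quota name and the 10Gi value are just illustrative):

$ oc create quota myprojectthinquota --hard=thin.storageclass.storage.k8s.io/requests.storage=10Gi -n myproject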
Feel free to jump back to the Exercise the quotas section and target the individual storage classes to see how the per-class quotas behave.
Summary
By combining OpenShift Project Templates and ClusterResourceQuotas along with project-level quotas, OpenShift cluster administrators can take back control and manage the storage allocated within their cluster. Remember that you will need to retrofit all existing projects that use PVCs with the annotation, following the steps in the Create and Annotate Projects section of this document. Use caution when applying annotations to existing projects to ensure that you do not adversely affect applications deployed in those projects. While this blog post focused on quotas for persistent storage, the concepts shown here apply to any object type that can have a quota applied.