As you might already know, in Kubernetes we can use Persistent Volumes (PV) as the storage resource for Pods. A PV is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using StorageClasses.
By using a StorageClass we can provision volumes dynamically. There are several supported storage back-ends, such as AzureDisk, AWSElasticBlockStore, GCEPersistentDisk, Ceph, NFS, etc. In this blog post, I am going to show the steps to use Ceph as the storage back-end for a Kubernetes cluster using dynamic volume provisioning.
First of all, you need a working Ceph cluster. If you are looking for a tutorial to set up a Ceph cluster, take a look at my previous blog post Deploy Ceph storage cluster on Ubuntu server.
And of course, you will need a Kubernetes cluster as well. It can be a managed one from a cloud provider like AWS, Azure or GCP. It could also be a self-managed Kubernetes cluster set up with kubeadm.
Before we begin, let’s check the status of our clusters first.
kubectl get nodes
We need a dedicated Ceph pool which will be used for Kubernetes volume creation. The following command will create a pool named kube with 8 placement groups.
ceph osd pool create kube 8
If you enabled authentication in your Ceph cluster config, you have to create a user for the Kubernetes nodes to access the pool. The following command creates a client.kube user which has the necessary privileges.
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
Now get the client.kube user key
ceph auth get client.kube
We also need the admin key, which will be used by the Kubernetes Ceph provisioner. With the admin key, the provisioner is able to create volumes inside the pool. I assume that the admin user is client.admin
ceph auth get client.admin
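The Secret manifests below expect the keys to be base64-encoded. As a quick sketch, ceph auth get-key prints just the key, which you can pipe through base64

```bash
# Print only the key for each user and base64-encode it for the Secret manifests
ceph auth get-key client.kube | base64
ceph auth get-key client.admin | base64
```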
Following is the YAML file for the Secret resources which will be used for Ceph authentication from the Kubernetes cluster. The key values are the base64-encoded keys from the previous step.
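The original manifest is not reproduced here; below is a minimal sketch. The Secret names ceph-user-secret and ceph-admin-secret, and the namespaces, are assumptions — the admin Secret is placed in kube-system so the provisioner can read it.

```yaml
# ceph-user-secret.yaml (names and namespaces are assumptions)
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: default
type: kubernetes.io/rbd
data:
  key: <base64-encoded key of client.kube>
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: kubernetes.io/rbd
data:
  key: <base64-encoded key of client.admin>
```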
Create the Secret resources using the kubectl command
kubectl create -f ceph-user-secret.yaml
Following is the YAML file for the StorageClass resource which will be used by the Kubernetes Ceph provisioner to access the Ceph cluster. A sketch of the manifest follows the list below.
- monitors: the list of your ceph-mon nodes. It can be a single node or multiple nodes separated by commas
- pool: the Ceph pool name created in the previous step
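Here is a minimal sketch for the in-tree kubernetes.io/rbd provisioner, assuming the class name ceph-rbd, the Secret names from the sketch above, and an illustrative monitor address.

```yaml
# ceph-storageclass.yaml (class name and monitor address are assumptions)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.11:6789   # comma-separated list of ceph-mon addresses
  pool: kube                    # the pool created in the previous step
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
```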
Create the StorageClass resource using the kubectl command
kubectl create -f ceph-storageclass.yaml
Verify the recently added StorageClass
kubectl get storageclass
In order to access the Ceph cluster, each Kubernetes worker node must hold the ceph.client.kube.keyring key file which was generated in the previous step. Make sure you copy it to the /etc/ceph directory on each node. This file will be read by the kubelet process whenever a running Pod requires access to a Persistent Volume which maps to the Ceph StorageClass.
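For example, from the node holding the keyring (the worker hostname below is illustrative)

```bash
# Copy the keyring to a staging path, then move it into /etc/ceph with root privileges
scp ceph.client.kube.keyring worker1:/tmp/
ssh worker1 'sudo mv /tmp/ceph.client.kube.keyring /etc/ceph/'
```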
These Kubernetes worker nodes also need Ceph’s common packages to interact with the storage. On all your worker nodes, execute the following command
sudo apt install ceph-common
Now, it is time to test the creation of a Persistent Volume Claim (PVC) using the Ceph StorageClass. Following is the YAML definition for the PVC resource.
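A minimal sketch, assuming the StorageClass name ceph-rbd from the earlier sketch; the claim name and size are illustrative.

```yaml
# ceph-test-pvc.yaml (name, size and class name are assumptions)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
```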
Create the PVC resource using the kubectl command
kubectl create -f ceph-test-pvc.yaml
Verify the recently added PVC and PV
kubectl get pvc
kubectl get pv
The Persistent Volume is ready to use; we can now attach it to any Pod as usual.
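For example, a throwaway Pod mounting the claim (a sketch; the Pod name and image are illustrative)

```yaml
# A test Pod that mounts the PVC created above
apiVersion: v1
kind: Pod
metadata:
  name: ceph-test-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-test-pvc
```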
The above steps work fine for managed Kubernetes clusters. However, if you are using a self-hosted cluster, set up with kubeadm for example, you might face a problem with a missing Ceph driver in the Controller Manager.
Error: "failed to create rbd image: executable file not found in $PATH, command output:
To fix it, there is a workaround mentioned on Kubernetes' GitHub. You just need to edit the /etc/kubernetes/manifests/kube-controller-manager.yaml file to change…
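The concrete edit is elided above; purely as an illustrative sketch, the workaround amounts to pointing the static Pod at an image that ships the rbd binary. The image value below is a placeholder, not the one from the GitHub thread.

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt, illustrative)
spec:
  containers:
    - name: kube-controller-manager
      image: <an image that includes ceph-common / the rbd binary>
```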
After editing, the kube-controller-manager pod gets recreated using the new image.