Ceph is a distributed storage platform designed for resilience, performance, and high reliability. It can also be used as a block storage solution for virtual machines or, through FUSE, as a conventional filesystem. Ceph is extremely configurable, and administrators can control virtually every aspect of the system.
A Ceph deployment usually has the following components:
- Ceph OSDs (object storage daemons)
- Ceph Monitors
- Ceph Managers
I am not going to go into detail about the deployment requirements, since this is just a quick guide on how to spin up a Ceph cluster. If you are looking for a production setup, I recommend the official Ceph website, especially the hardware recommendations and OS recommendations.
In this setup, I am going to use 4 x Ubuntu Server 18.04 LTS servers: one server for the admin node and the rest for the Ceph cluster itself. All the machines are on the same network, as below:
| Hostname | IP address |
| --- | --- |
| ceph-admin | … |
| ceph1 | … |
| ceph2 | … |
| ceph3 | … |
The firewall also allows the following ports:
- 22 for SSH
- 6789 for Monitors
- 6800:7300 for OSDs, Managers
- 8080 for Dashboard
- 7480 for the RADOS Gateway
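If you manage the firewall with ufw (the Ubuntu default; adjust for your own firewall), the rules above could be opened roughly like this:

```shell
# Open the Ceph-related ports listed above (assumes ufw is in use)
sudo ufw allow 22/tcp         # SSH
sudo ufw allow 6789/tcp       # Monitors
sudo ufw allow 6800:7300/tcp  # OSDs, Managers
sudo ufw allow 8080/tcp       # Dashboard
sudo ufw allow 7480/tcp       # RADOS Gateway
sudo ufw enable
```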
On all machines, make sure the hostnames are resolvable using the /etc/hosts file.
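As a sketch, the /etc/hosts entries could look like this; the 10.0.0.x addresses are placeholders, so substitute your own:

```
# /etc/hosts — example entries (placeholder addresses)
10.0.0.10 ceph-admin
10.0.0.11 ceph1
10.0.0.12 ceph2
10.0.0.13 ceph3
```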
We also need to make sure the system clocks are not skewed so the Ceph cluster can operate properly. It is recommended to sync the time with NTP servers:
sudo apt -y install ntp
I am going to use the ceph-deploy command from the admin node (ceph-admin) to do the deployment over SSH. To simplify the deployment process, remove the password prompts for SSH and sudo:
On all Ceph nodes, enable passwordless sudo via visudo:
khanh ALL=NOPASSWD: ALL
On the admin node, copy the SSH public key to all Ceph nodes.
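A minimal sketch of that step, assuming the `khanh` user from the sudoers example above (substitute your own username):

```shell
# Generate a key pair if one does not already exist, then push it to each node
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
for host in ceph1 ceph2 ceph3; do
  ssh-copy-id khanh@"$host"
done
```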
Then install the ceph-deploy tool:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt -y install ceph-deploy
The deployment process can be done quickly using the ceph-deploy command line tool. To prepare the Ceph configuration file, we use the ceph-deploy new command followed by the node hostnames.
ceph-deploy new ceph1 ceph2 ceph3
The command above will generate a ceph.conf file which looks like this:
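The generated file is not reproduced here, but a typical ceph.conf produced by ceph-deploy new looks roughly like the following; the fsid and monitor addresses will differ in your cluster:

```ini
[global]
fsid = <generated-uuid>
mon_initial_members = ceph1, ceph2, ceph3
mon_host = <ceph1-ip>,<ceph2-ip>,<ceph3-ip>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```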
Each Ceph cluster has its own fsid identity and requires cephx key-based authentication by default. If you wish to disable authentication, change the auth_*_required settings to none.
Next, we install the required Ceph packages on the target nodes. The following command installs the Nautilus release; change it if you want to use another release.
ceph-deploy install --release nautilus ceph1 ceph2 ceph3
Now it is time to initialize the Ceph cluster. The following command will create the Monitors on all nodes. The configuration will be taken from the ceph.conf file generated above. The command will only complete successfully once all the monitors are up and in quorum.
ceph-deploy mon create-initial
In order to use the Ceph CLI on each node, we have to authenticate using a keyring file. The following command copies ceph.client.admin.keyring to the Ceph config directory on each node.
ceph-deploy admin ceph1 ceph2 ceph3
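By default the copied keyring is readable only by root; a commonly used extra step on each node, so you can run ceph commands with sudo-less tools as well, is:

```shell
# Make the admin keyring readable by non-root users (optional convenience step)
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
```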
Now let’s verify the cluster status by SSHing into a Ceph node and running the following command. If we get HEALTH_OK, it means the cluster was initialized successfully.
sudo ceph status
Starting from the Luminous release, Ceph requires a Manager daemon to take care of cluster monitoring and status. You can install it on any node in your cluster; in my case it is the ceph1 node.
ceph-deploy mgr create ceph1
Now it is time to add storage to our cluster. We expect an additional disk on each node; let’s use ceph-deploy to discover them.
ceph-deploy disk list ceph1
The above command found /dev/sdb, which is the new disk attached to the node. I am going to use it for Ceph cluster data storage. Run the following command for each node you have, updating the disk name for your setup.
ceph-deploy osd create --data /dev/sdb ceph1
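The per-node step above can be sketched as a loop, assuming every node has a spare /dev/sdb (adjust the device name per node if yours differ):

```shell
# Create one OSD per node, backed by each node's spare disk
for host in ceph1 ceph2 ceph3; do
  ceph-deploy osd create --data /dev/sdb "$host"
done
```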
The Ceph Dashboard is a built-in web-based Ceph management and monitoring application to administer various aspects and objects of the cluster. It is implemented as a Ceph Manager Daemon module.
To install the Ceph Dashboard, log in to your Ceph Manager node (ceph1 in my case) and run the following commands:
sudo apt install -y ceph-mgr-dashboard
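Installing the package alone does not enable the module or set credentials. A sketch of the remaining steps, matching the admin/changeme credentials referenced below (note that newer Nautilus point releases replace the inline password with a `-i <password-file>` argument):

```shell
# Enable the dashboard module and disable SSL so it serves plain HTTP on port 8080
sudo ceph mgr module enable dashboard
sudo ceph config set mgr mgr/dashboard/ssl false
# Create an administrator account for the dashboard login
sudo ceph dashboard ac-user-create admin changeme administrator
</imports>
```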
The Ceph Dashboard listens on port 8080 by default. Open a web browser and go to http://ceph1:8080 to access it. The credentials were set in the commands above: admin as the username and changeme as the password.