Deploy Ceph storage cluster on Ubuntu server

About Ceph

Ceph is a storage platform focused on being distributed and resilient, with good performance and high reliability. It can also be used as a block storage solution for virtual machines or, through FUSE, as a conventional filesystem. Ceph is extremely configurable, with administrators able to control virtually all aspects of the system.

A Ceph deployment usually has the following components:

  • Monitors
  • Managers
  • Ceph OSDs
  • MDSs

Requirements

I am not going to go into the details of the deployment requirements, since this is just a quick guide on how to spin up a Ceph cluster. If you are planning a production setup, I recommend taking a look at the Ceph official website, especially the hardware recommendations and OS recommendations.

In this setup, I am going to use 4 x Ubuntu Server 18.04 LTS servers: 1 server for the admin node and the rest for the Ceph cluster operation. All the machines are on the same network, as below:

Hostname       IP address
ceph-admin     172.17.30.10
ceph1          172.17.30.11
ceph2          172.17.30.12
ceph3          172.17.30.13

The firewall should also allow the following ports (an example ufw setup is sketched after this list):

  • 22 for SSH
  • 6789 for Monitors
  • 6800:7300 for OSDs, Managers
  • 8080 for Dashboard
  • 7480 for Rados gateway
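
If you happen to use ufw on Ubuntu, the rules could look like the sketch below. This is just an example under that assumption; adapt it to whatever firewall tooling you actually use.

$ sudo ufw allow 22/tcp
$ sudo ufw allow 6789/tcp
$ sudo ufw allow 6800:7300/tcp
$ sudo ufw allow 8080/tcp
$ sudo ufw allow 7480/tcp
$ sudo ufw enable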

Node preparation

On all machines, make sure the hostnames are resolvable via the /etc/hosts file:

/etc/hosts
172.17.30.10   ceph-admin
172.17.30.11   ceph1
172.17.30.12   ceph2
172.17.30.13   ceph3
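
A quick, optional sanity check that the names resolve on each machine:

$ getent hosts ceph1 ceph2 ceph3
$ ping -c 1 ceph-admin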

We also need to make sure the system clocks are not skewed so the Ceph cluster can operate properly. It is recommended to sync the time with NTP servers:

$ sudo apt -y install ntp
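
To confirm the clocks are actually in sync, you can peek at the NTP peers and the system time status (optional):

$ ntpq -p
$ timedatectl status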

I am going to use the ceph-deploy command from the admin node (ceph-admin) to do the deployment over SSH. To simplify the deployment process, we remove the password prompts for SSH and sudo:

On all Ceph nodes, enable passwordless sudo via visudo:

khanh   ALL=NOPASSWD: ALL
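
If you prefer not to touch the main sudoers file, a drop-in under /etc/sudoers.d achieves the same thing. This is a sketch assuming your deployment user is khanh; replace it with your own user:

$ echo "khanh ALL=NOPASSWD: ALL" | sudo tee /etc/sudoers.d/khanh
$ sudo chmod 0440 /etc/sudoers.d/khanh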

On the admin node, copy the SSH public key to all Ceph nodes:

$ ssh-copy-id khanh@ceph1
$ ssh-copy-id khanh@ceph2
$ ssh-copy-id khanh@ceph3
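
If the admin node does not have an SSH key pair yet, generate one before running ssh-copy-id. Optionally, an ~/.ssh/config entry lets ceph-deploy pick the right user automatically (the hostnames and the khanh user are from my setup; adjust to yours):

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

~/.ssh/config
Host ceph1 ceph2 ceph3
    User khanh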

Then install the ceph-deploy tool:

$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
$ echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt update
$ sudo apt -y install ceph-deploy
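
A quick check that the tool is available:

$ ceph-deploy --version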

Ceph deployment

Prepare the config and packages

The deployment process can be done quickly using the ceph-deploy command line tool. To prepare the Ceph configuration file, we use the ceph-deploy new command followed by the node hostnames.

$ ceph-deploy new ceph1 ceph2 ceph3

The command above will generate a ceph.conf file which looks like this:

/etc/ceph/ceph.conf
[global]
fsid = be6300b0-eb01-4619-8684-40b6d485f94f
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 172.17.30.11,172.17.30.12,172.17.30.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Each Ceph cluster has its own fsid identity and requires cephx key-based authentication by default. If you wish to disable authentication, change the auth settings to none.
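
For example, turning authentication off (only sensible in a throwaway lab) would look roughly like this in ceph.conf; this is just a sketch of the relevant options:

/etc/ceph/ceph.conf
auth_cluster_required = none
auth_service_required = none
auth_client_required = none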

Next, we install the required Ceph packages on the target nodes. The following command installs the Nautilus release; change it if you want to use another release.

$ ceph-deploy install --release nautilus ceph1 ceph2 ceph3

Initialize the cluster

Now it is time to initialize the Ceph cluster. The following command will create the Monitors on all nodes. The configuration will be taken from the ceph.conf file generated above. The command will only complete successfully if all the monitors are up and in quorum.

$ ceph-deploy mon create-initial

In order to use the Ceph CLI on each node, we have to be authenticated using a keyring file. The following command copies ceph.client.admin.keyring to the Ceph config directory on each node.

$ ceph-deploy admin ceph1 ceph2 ceph3
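
If you also want to run ceph commands on those nodes without sudo, you may need to make the keyring world-readable (a common shortcut, though a bit permissive):

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring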

Verify the cluster status

Now let’s verify the cluster status by SSHing into a Ceph node and running the following command. If the health is HEALTH_OK, the cluster was initialized successfully.

$ sudo ceph status
  cluster:
    id:     be6300b0-eb01-4619-8684-40b6d485f94f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 1m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Install the Manager daemon

Starting from the Luminous release, Ceph requires a Manager daemon to take care of cluster monitoring and status. You can install it on any node in your cluster; in my case it is the ceph1 node.

$ ceph-deploy mgr create ceph1
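
If you want standby Managers for failover, the same command accepts additional nodes; this is optional:

$ ceph-deploy mgr create ceph2 ceph3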

Add storage to the Ceph cluster

Now it is time to add storage to our cluster. Each node is expected to have an additional disk. Let’s use ceph-deploy to discover them.

$ ceph-deploy disk list ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/khanh/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff7d20c8d70>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['ceph1']
[ceph_deploy.cli][INFO ] func : <function disk at 0x7ff7d20a1050>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: sudo fdisk -l
[ceph1][INFO ] Disk /dev/sdb: 30 GiB, 32212254720 bytes, 62914560 sectors
[ceph1][INFO ] Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
[ceph1][INFO ] Disk /dev/mapper/ceph1--vg-root: 49 GiB, 52655292416 bytes, 102842368 sectors
[ceph1][INFO ] Disk /dev/mapper/ceph1--vg-swap_1: 980 MiB, 1027604480 bytes, 2007040 sectors

The above command found /dev/sdb, which is the new disk attached to the node. I am going to use it for Ceph cluster data storage. Run the following command for each node you have, remembering to update the disk name for your setup.

$ ceph-deploy osd create --data /dev/sdb ceph1
$ ceph-deploy osd create --data /dev/sdb ceph2
$ ceph-deploy osd create --data /dev/sdb ceph3
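
Afterwards, the OSDs should show up as up and in. A quick way to check from any Ceph node:

$ sudo ceph osd tree
$ sudo ceph -s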

Ceph dashboard

The Ceph Dashboard is a built-in web-based Ceph management and monitoring application to administer various aspects and objects of the cluster. It is implemented as a Ceph Manager Daemon module.

To install the Ceph Dashboard, log in to your Ceph Manager node (ceph1 in my case) and run the following commands:

$ sudo apt install -y ceph-mgr-dashboard
$ sudo ceph mgr module enable dashboard
$ sudo ceph dashboard ac-user-create admin changeme administrator

Ceph Dashboard listens on port 8080 by default. Open a web browser and go to http://ceph1:8080 to access it. The credentials were set in the command above: admin as the username and changeme as the password.
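
If the dashboard does not respond over plain HTTP in your environment (some releases default to HTTPS on port 8443), you may need to disable SSL for the module and reload it; a hedged sketch:

$ sudo ceph config set mgr mgr/dashboard/ssl false
$ sudo ceph mgr module disable dashboard
$ sudo ceph mgr module enable dashboard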

Ceph dashboard
