How to deploy Ceph Cluster on CentOS
In order to integrate Ceph with OpenStack, we need to deploy a Ceph cluster first. Here is the step-by-step guide.
Nodes
ems-sv4-centos7: 10.195.231.247 management (deploy) node for installation
ceph1: 10.195.231.201 mon, mgr and osd1
ceph2: 10.195.231.202 osd2
ceph3: 10.195.231.203 osd3
Install NTP on all nodes
Ceph needs the clocks to be in sync across all Ceph nodes, otherwise you may see an unhealthy cluster state. For this we install NTP on all nodes.
# Install NTP and start service
sudo yum install ntp ntpdate ntp-doc -y
sudo systemctl enable ntpd
sudo systemctl start ntpd

# Manual sync
sudo ntpdate -u 0.pool.ntp.org

# Change the timezone
timedatectl set-timezone America/Los_Angeles
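To verify that time is actually syncing on each node, you can optionally query the NTP peers and the clock status; this is just a sanity check, not a required step.
# Optional check: list the NTP peers and their offsets
ntpq -p
# Optional check: confirm the timezone and clock status
timedatectl status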
Create a user who deploys ceph (all nodes)
The ceph-deploy tool must log on to each Ceph node as a normal user, and that user must be able to use sudo without a password, because no password can be entered while the software and configuration files are being installed.
It is recommended that you create a dedicated user for ceph-deploy on all Ceph nodes in the cluster, but do not use the name “ceph”.
useradd -d /home/ceph -m ceph
passwd ceph
Ensure that the newly created user on each Ceph node has passwordless sudo permission:
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
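As an optional sanity check, you can switch to the new user on each node and confirm that sudo works without prompting for a password:
su - ceph
sudo whoami    # should print "root" with no password prompt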
Allow password-free SSH login (Management node)
Because ceph-deploy does not support entering passwords, you must generate an SSH key on the management node and distribute its public key to each Ceph node. (ceph-deploy will attempt to generate SSH keys for the initial monitors on its own.)
1) Generate SSH key pair
# su ceph
$ ssh-keygen
$ ssh-copy-id ceph@ceph1
$ ssh-copy-id ceph@ceph2
$ ssh-copy-id ceph@ceph3
2) Modify the ~/.ssh/config file
$ sudo vi ~/.ssh/config
Host ems-sv4-centos7
Hostname ems-sv4-centos7.es.equinix.com
User ceph
Host ceph1
Hostname ceph1.es.equinix.com
User ceph
Host ceph2
Hostname ceph2.es.equinix.com
User ceph
Host ceph3
Hostname ceph3.es.equinix.com
User ceph
Issue: If “Bad owner or permissions on ~/.ssh/config” appears, run the following command to fix the file permissions.
$ sudo chmod 644 ~/.ssh/config
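Optionally, verify from the management node that passwordless login now works to every Ceph node; each command should print the remote hostname without asking for a password.
$ ssh ceph1 hostname
$ ssh ceph2 hostname
$ ssh ceph3 hostname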
Ports that need to be opened (Ceph nodes)
Ceph Monitors communicate on port 6789 by default, while OSDs use ports in the 6800-7300 range by default. Ceph OSDs can use multiple network connections for replication and heartbeat traffic with clients, monitors, and other OSDs.
$ sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
// or turn off the firewall
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
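If you keep firewalld running instead of disabling it, the OSD port range mentioned above also needs to be opened on the OSD nodes. A possible set of commands (the 6800-7300 range matches the defaults described earlier):
$ sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
$ sudo firewall-cmd --reload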
Terminal (TTY) (Ceph node)
You may get an error when executing ceph-deploy against CentOS and RHEL hosts. If your Ceph node has requiretty set by default, execute:
$ sudo visudo
Find the Defaults requiretty option and change it to Defaults:ceph !requiretty, or comment it out entirely, so that ceph-deploy can connect with the previously created user (the user who runs the Ceph deployment).
When editing /etc/sudoers, you must use sudo visudo instead of editing the file directly with a text editor.
Turn off SELinux (Ceph node)
sudo setenforce 0
To make the SELinux change permanent (if SELinux is indeed the source of a problem), modify its configuration file /etc/selinux/config:
sudo vi /etc/selinux/config
SELINUX=disabled
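To confirm the change (optional): getenforce should report Permissive right after setenforce 0, and Disabled after a reboot once the config file has been edited.
$ getenforce
$ sestatus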
Configuring the Epel Source (deploy node)
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
Add the package source to the software Library (deploy node)
sudo vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
Update the software repositories and install ceph-deploy (deploy node)
sudo yum update && sudo yum install ceph-deploy
sudo yum install yum-plugin-priorities
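As a quick check that the tool is installed, print its version; the logs later in this guide were produced with ceph-deploy 2.0.1.
$ ceph-deploy --version
2.0.1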
Build Cluster
Perform the following steps under the deploy node :
1. Create a directory on the management node to hold the configuration files and keys that ceph-deploy generates.
$ cd ~
$ mkdir my-cluster
$ cd my-cluster
Note: If you run into trouble installing Ceph, you can use the following commands to remove the packages and clear the configuration, then start over:
// remove the installation package
$ ceph-deploy purge admin-node node1 node2 node3

// clear the configuration
$ ceph-deploy purgedata admin-node node1 node2 node3
$ ceph-deploy forgetkeys
2. Create the cluster and the monitor node
Create the cluster and initialize the monitor node. Here ceph1 is the monitor node, so execute:
ceph-deploy new ceph1
After completion, the my-cluster directory contains three more files: ceph.conf, ceph-deploy-ceph.log and ceph.mon.keyring.
Issue: If “[ceph_deploy][ERROR] RuntimeError: remote connection got closed, ensure requiretty is disabled for node1” appears, run sudo visudo and comment out Defaults requiretty.
3. Modify the configuration file
[ceph@ems-sv4-centos7 my-cluster]$ vim ceph.conf
[global]
fsid = 71e20981-ee53-4e54-a6b4-29a65b30d62c
mon_initial_members = ceph1
mon_host = 10.195.231.201
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
If you only have two OSD nodes, change the default number of replicas in the Ceph configuration file from 3 to 2, so that the cluster can reach the active + clean state with just two OSDs. Add osd pool default size = 2 to the [global] section:
$ sed -i '$a\osd pool default size = 2' ceph.conf
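You can optionally confirm that the line was appended:
$ grep "osd pool default size" ceph.conf
osd pool default size = 2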
If you have more than one network card, the public network can be declared in the [global] section of the Ceph configuration file:
public network = {ip-address}/{netmask}
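For example, for the lab network used in this guide the entry might look like the line below (this assumes a /24 netmask on the 10.195.231.0 network; adjust it to your actual subnet):
public network = 10.195.231.0/24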
4. Installing Ceph
To install Ceph on all nodes:
$ ceph-deploy install ceph1 ceph2 ceph3
Issue: [ceph_deploy][ERROR] RuntimeError: Failed to execute command: yum -y install epel-release
Workaround:
sudo yum -y remove epel-release
5. Configure the initial monitor(s) and collect all keys
$ ceph-deploy mon create-initial
These key rings should appear in the current directory after you have completed the above actions:
[ceph@ems-sv4-centos7 my-cluster]$ ls -la
-rw------- 1 ceph ceph 113 Feb 8 21:08 ceph.bootstrap-mds.keyring
-rw------- 1 ceph ceph 113 Feb 8 21:08 ceph.bootstrap-mgr.keyring
-rw------- 1 ceph ceph 113 Feb 8 21:08 ceph.bootstrap-osd.keyring
-rw------- 1 ceph ceph 113 Feb 8 21:08 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph ceph 151 Feb 8 21:08 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph ceph 197 Feb 8 20:45 ceph.conf
-rw-rw-r-- 1 ceph ceph 266889 Feb 8 21:08 ceph-deploy-ceph.log
-rw------- 1 ceph ceph 73 Feb 8 20:45 ceph.mon.keyring
Full logs
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc618ae3368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fc618d51578>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph1 ...
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.7.1908 Core
[ceph1][DEBUG ] determining if provided host has same hostname in remote
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] deploying mon to ceph1
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] remote hostname: ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][DEBUG ] create the mon path if it does not exist
[ceph1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create the monitor keyring file
[ceph1][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph1 --keyring /var/lib/ceph/tmp/ceph-ceph1.mon.keyring --setuser 1000 --setgroup 1000
[ceph1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph1][DEBUG ] create the init path if it does not exist
[ceph1][INFO ] Running command: sudo systemctl enable ceph.target
[ceph1][INFO ] Running command: sudo systemctl enable ceph-mon@ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph1.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph1][INFO ] Running command: sudo systemctl start ceph-mon@ceph1
[ceph1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][DEBUG ] ********************************************************************************
[ceph1][DEBUG ] status for monitor: mon.ceph1
[ceph1][DEBUG ] {
[ceph1][DEBUG ] "election_epoch": 3,
[ceph1][DEBUG ] "extra_probe_peers": [],
[ceph1][DEBUG ] "feature_map": {
[ceph1][DEBUG ] "mon": [
[ceph1][DEBUG ] {
[ceph1][DEBUG ] "features": "0x3ffddff8ffacfffb",
[ceph1][DEBUG ] "num": 1,
[ceph1][DEBUG ] "release": "luminous"
[ceph1][DEBUG ] }
[ceph1][DEBUG ] ]
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "features": {
[ceph1][DEBUG ] "quorum_con": "4611087854031667195",
[ceph1][DEBUG ] "quorum_mon": [
[ceph1][DEBUG ] "kraken",
[ceph1][DEBUG ] "luminous",
[ceph1][DEBUG ] "mimic",
[ceph1][DEBUG ] "osdmap-prune"
[ceph1][DEBUG ] ],
[ceph1][DEBUG ] "required_con": "144115738102218752",
[ceph1][DEBUG ] "required_mon": [
[ceph1][DEBUG ] "kraken",
[ceph1][DEBUG ] "luminous",
[ceph1][DEBUG ] "mimic",
[ceph1][DEBUG ] "osdmap-prune"
[ceph1][DEBUG ] ]
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "monmap": {
[ceph1][DEBUG ] "created": "2020-02-08 13:06:35.271147",
[ceph1][DEBUG ] "epoch": 1,
[ceph1][DEBUG ] "features": {
[ceph1][DEBUG ] "optional": [],
[ceph1][DEBUG ] "persistent": [
[ceph1][DEBUG ] "kraken",
[ceph1][DEBUG ] "luminous",
[ceph1][DEBUG ] "mimic",
[ceph1][DEBUG ] "osdmap-prune"
[ceph1][DEBUG ] ]
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "fsid": "71e20981-ee53-4e54-a6b4-29a65b30d62c",
[ceph1][DEBUG ] "modified": "2020-02-08 13:06:35.271147",
[ceph1][DEBUG ] "mons": [
[ceph1][DEBUG ] {
[ceph1][DEBUG ] "addr": "10.195.231.201:6789/0",
[ceph1][DEBUG ] "name": "ceph1",
[ceph1][DEBUG ] "public_addr": "10.195.231.201:6789/0",
[ceph1][DEBUG ] "rank": 0
[ceph1][DEBUG ] }
[ceph1][DEBUG ] ]
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "name": "ceph1",
[ceph1][DEBUG ] "outside_quorum": [],
[ceph1][DEBUG ] "quorum": [
[ceph1][DEBUG ] 0
[ceph1][DEBUG ] ],
[ceph1][DEBUG ] "rank": 0,
[ceph1][DEBUG ] "state": "leader",
[ceph1][DEBUG ] "sync_provider": []
[ceph1][DEBUG ] }
[ceph1][DEBUG ] ********************************************************************************
[ceph1][INFO ] monitor: mon.ceph1 is running
[ceph1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph1
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmp5fnSuV
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] fetch remote file
[ceph1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.admin
[ceph1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-mds
[ceph1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-mgr
[ceph1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-osd
[ceph1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmp5fnSuV
6. Add OSDs
1) Log in to each Ceph node, create a directory for the OSD daemon, and set its permissions.
[ceph@ceph1 ~]$ sudo mkdir /var/local/osd0
[ceph@ceph1 ~]$ sudo chmod 777 /var/local/osd0/

[ceph@ceph2 ~]$ sudo mkdir /var/local/osd1
[ceph@ceph2 ~]$ sudo chmod 777 /var/local/osd1/

[ceph@ceph3 ~]$ sudo mkdir /var/local/osd2
[ceph@ceph3 ~]$ sudo chmod 777 /var/local/osd2/
2) Then execute ceph-deploy from the management node to create the OSDs (two data disks per node, /dev/sdb and /dev/sdc).
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy osd create ceph1 --data /dev/sdb
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy osd create ceph1 --data /dev/sdc
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy osd create ceph2 --data /dev/sdb
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy osd create ceph2 --data /dev/sdc
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy osd create ceph3 --data /dev/sdb
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy osd create ceph3 --data /dev/sdc
Full logs, FYI
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy osd create ceph1 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph1 --data /dev/sdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7facc55ed3f8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph1
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7facc5834a28>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/sdb
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] osd keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 25fb1334-2352-4d16-a4ab-a0aad27f6329
[ceph1][WARNIN] Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-a6d32ee0-3185-4872-bf37-79defa632e40 /dev/sdb
[ceph1][WARNIN] stdout: Physical volume "/dev/sdb" successfully created.
[ceph1][WARNIN] stdout: Volume group "ceph-a6d32ee0-3185-4872-bf37-79defa632e40" successfully created
[ceph1][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-25fb1334-2352-4d16-a4ab-a0aad27f6329 ceph-a6d32ee0-3185-4872-bf37-79defa632e40
[ceph1][WARNIN] stdout: Logical volume "osd-block-25fb1334-2352-4d16-a4ab-a0aad27f6329" created.
[ceph1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph1][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph1][WARNIN] Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-0
[ceph1][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-a6d32ee0-3185-4872-bf37-79defa632e40/osd-block-25fb1334-2352-4d16-a4ab-a0aad27f6329
[ceph1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph1][WARNIN] Running command: /bin/ln -s /dev/ceph-a6d32ee0-3185-4872-bf37-79defa632e40/osd-block-25fb1334-2352-4d16-a4ab-a0aad27f6329 /var/lib/ceph/osd/ceph-0/block
[ceph1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph1][WARNIN] stderr: got monmap epoch 1
[ceph1][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQB5Jj9eBeF1OBAAdjl3PMSJ5WwdRHIx++sJtA==
[ceph1][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph1][WARNIN] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQB5Jj9eBeF1OBAAdjl3PMSJ5WwdRHIx++sJtA== with 0 caps)
[ceph1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph1][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 25fb1334-2352-4d16-a4ab-a0aad27f6329 --setuser ceph --setgroup ceph
[ceph1][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph1][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a6d32ee0-3185-4872-bf37-79defa632e40/osd-block-25fb1334-2352-4d16-a4ab-a0aad27f6329 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph1][WARNIN] Running command: /bin/ln -snf /dev/ceph-a6d32ee0-3185-4872-bf37-79defa632e40/osd-block-25fb1334-2352-4d16-a4ab-a0aad27f6329 /var/lib/ceph/osd/ceph-0/block
[ceph1][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph1][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-25fb1334-2352-4d16-a4ab-a0aad27f6329
[ceph1][WARNIN] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-25fb1334-2352-4d16-a4ab-a0aad27f6329.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph1][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph1][WARNIN] stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph1][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph1][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph1][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph1][INFO ] checking OSD status...
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph1 is now ready for osd use.
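Optionally, you can log in to an OSD node and ask ceph-volume what it created there; this lists the logical volumes, OSD IDs and fsids it manages (shown here for ceph1 as an example):
[ceph@ceph1 ~]$ sudo ceph-volume lvm list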
7. Copy the configuration file and admin key to the management node and the Ceph nodes
Push the configuration and the client.admin key to the remote hosts.
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy admin ceph1 ceph2 ceph3
Full logs, FYI
[ceph@ems-sv4-centos7 my-cluster]$ ceph-deploy admin ceph1 ceph2 ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph1 ceph2 ceph3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f261524d908>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['ceph1', 'ceph2', 'ceph3']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f2615af1398>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph1
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] connection detected need for sudo
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph3
[ceph3][DEBUG ] connection detected need for sudo
[ceph3][DEBUG ] connected to host: ceph3
[ceph3][DEBUG ] detect platform information from remote host
[ceph3][DEBUG ] detect machine type
[ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
8. Make sure you have the right permissions for ceph.client.admin.keyring on each node
[ceph@ceph1 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[ceph@ceph2 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[ceph@ceph3 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
If you check the cluster state at this point, it still reports HEALTH_WARN. The reason is that there is no mgr daemon in the cluster yet.
[ceph@ceph1 ~]$ ceph health
HEALTH_WARN no active mgr
[ceph@ceph1 ~]$ ceph -s
cluster:
id: 71e20981-ee53-4e54-a6b4-29a65b30d62c
health: HEALTH_WARN
no active mgr

services:
mon: 1 daemons, quorum ceph1
mgr: no daemons active
osd: 6 osds: 2 up, 2 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
9. Add ceph1 node as mgr
[ceph@ems-sv4-centos7 my-cluster]$ sudo ceph-deploy mgr create ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy mgr create ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('ceph1', 'ceph1')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc24199af80>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7fc24200b2a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph1:ceph1
The authenticity of host 'ceph1 (10.195.231.201)' can't be established.
ECDSA key fingerprint is SHA256:++zzEuzo+YVLp/QXW8SrrUZQ2ySS4nke/08w/AGoJ68.
ECDSA key fingerprint is MD5:ca:84:71:ab:16:49:a4:14:09:aa:98:f8:03:3b:c4:23.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph1' (ECDSA) to the list of known hosts.
Warning: the ECDSA host key for 'ceph1' differs from the key for the IP address '10.195.231.201'
Offending key for IP in /root/.ssh/known_hosts:3
Are you sure you want to continue connecting (yes/no)? yes
root@ceph1's password:
Warning: the ECDSA host key for 'ceph1' differs from the key for the IP address '10.195.231.201'
Offending key for IP in /root/.ssh/known_hosts:3
Matching host key in /root/.ssh/known_hosts:5
Are you sure you want to continue connecting (yes/no)? yes
root@ceph1's password:
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] mgr keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph1][DEBUG ] create path recursively if it doesn't exist
[ceph1][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph1/keyring
[ceph1][INFO ] Running command: systemctl enable ceph-mgr@ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph1][INFO ] Running command: systemctl start ceph-mgr@ceph1
[ceph1][INFO ] Running command: systemctl enable ceph.target
Check ceph state
[ceph@ceph1 ~]$ ceph health
HEALTH_OK
[ceph@ceph2 ~]$ ceph health
HEALTH_OK
[ceph@ceph3 ~]$ ceph health
HEALTH_OK
10. Enable dashboard
[ceph@ceph1 ~]$ ceph mgr module enable dashboard
[ceph@ceph1 ~]$ ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
[ceph@ceph1 ~]$ ceph config set mgr mgr/dashboard/server_port 7000
[ceph@ceph1 ~]$ ceph config set mgr mgr/dashboard/ssl false
[ceph@ceph1 ~]$ ceph dashboard set-login-credentials admin admin
Username and password updated
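Optionally, ask the mgr which URL the dashboard is served on; with the settings above it should be http://<mgr-host>:7000/ (the exact hostname shown in the JSON output depends on your environment):
[ceph@ceph1 ~]$ ceph mgr services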
11. Start Ceph services automatically after reboot (all nodes)
sudo systemctl enable ceph-mon.target
sudo systemctl enable ceph-osd.target
sudo systemctl enable ceph.target
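As an optional check, confirm the targets are enabled so the daemons come back after a reboot:
sudo systemctl is-enabled ceph.target ceph-mon.target ceph-osd.target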
Expansion
1. Add an OSD
Add an osd.2 on node1.
1) Create a directory:
$ ssh node1
$ sudo mkdir /var/local/osd2
$ sudo chmod 777 /var/local/osd2/
$ exit
2) Create the OSD:
$ ceph-deploy osd create ceph1 --data /dev/sdb
2. Add Monitors
Add monitor nodes on node2 and node3.
1) Modify the mon_initial_members, mon_host and public network configuration:
[global]
fsid = a3dd419e-5c99-4387-b251-58d4eb582995
mon_initial_members = node1,node2,node3
mon_host = 192.168.0.131,192.168.0.132,192.168.0.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 192.168.0.120/24
2) Push the configuration to the other nodes:
$ ceph-deploy --overwrite-conf config push node1 node2 node3
3) Add the monitor nodes:
$ ceph-deploy mon add node2 node3
4) View the monitor nodes:
$ ceph -s
cluster a3dd419e-5c99-4387-b251-58d4eb582995
health HEALTH_OK
monmap e3: 3 mons at {node1=192.168.0.131:6789/0,node2=192.168.0.132:6789/0,node3=192.168.0.133:6789/0}
election epoch 8, quorum 0,1,2 node1,node2,node3
osdmap e25: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v3919: 64 pgs, 1 pools, 0 bytes data, 0 objects
19494 MB used, 32687 MB / 52182 MB avail
64 active+clean
Q&A
Q: OSDs stuck in the DOWN state because time is out of sync
[ceph@ceph2 ~]$ ceph -s; ceph osd tree
cluster:
id: 71e20981-ee53-4e54-a6b4-29a65b30d62c
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph1
mgr: ceph1(active)
osd: 6 osds: 6 up, 6 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 6.0 GiB used, 58 GiB / 64 GiB avail
pgs:

ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.01268 root default
-3 0.01268 host ceph1
0 ssd 0.00879 osd.0 up 1.00000 1.00000
1 ssd 0.00389 osd.1 up 1.00000 1.00000
2 0 osd.2 down 0 1.00000
3 0 osd.3 down 0 1.00000
4 0 osd.4 down 0 1.00000
5 0 osd.5 down 0 1.00000
After NTP was configured and the clocks were in sync, all OSDs came up.
[ceph@ceph2 ~]$ ceph -s; ceph osd tree
cluster:
id: 71e20981-ee53-4e54-a6b4-29a65b30d62c
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph1
mgr: ceph1(active)
osd: 6 osds: 6 up, 6 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 6.0 GiB used, 58 GiB / 64 GiB avail
pgs:

ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.06253 root default
-3 0.01268 host ceph1
0 ssd 0.00879 osd.0 up 1.00000 1.00000
1 ssd 0.00389 osd.1 up 1.00000 1.00000
-7 0.02248 host ceph2
2 ssd 0.01859 osd.2 up 1.00000 1.00000
3 ssd 0.00389 osd.3 up 1.00000 1.00000
-5 0.02737 host ceph3
4 ssd 0.01859 osd.4 up 1.00000 1.00000
5 ssd 0.00879 osd.5 up 1.00000 1.00000