Ceph Preflight
July 15, 2017 | Ceph Storage Provisioning

In this post, we describe how we prepared the nodes in the Pulpos cluster for Ceph installation.
Chrony
The clocks on Ceph nodes (especially Ceph Monitor nodes) should be synchronized to prevent issues arising from clock drift. Chrony is an implementation of the Network Time Protocol (NTP) and is the preferred NTP daemon on RHEL/CentOS 7; hence we run chronyd rather than ntpd on the Pulpos nodes.
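To double-check that chronyd (and not ntpd) is the active time daemon, and that the clock is actually synchronized, something like the following can be run on each node (a sketch, assuming a standard systemd-based CentOS 7 host):

```shell
# Make sure chronyd is enabled and running, and ntpd is not
systemctl enable --now chronyd
systemctl disable --now ntpd 2>/dev/null || true

# Inspect synchronization status: reference server, stratum, and offset
chronyc tracking

# List the NTP sources chronyd is using
chronyc sources -v
```

If `chronyc tracking` reports a small offset and a valid reference ID, the node's clock is in good shape for running Ceph Monitors.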
The chrony suite has been installed on every Pulpos node during the auto installation of CentOS 7.
Password-less SSH access
The SSH public key for root has been copied to every Pulpos node during the auto installation of CentOS 7.
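Had the keys not been distributed at install time, the same result could be achieved manually from the admin node. The commands below are a sketch (the hostnames are the Pulpos node names mentioned later in this post):

```shell
# Generate a key pair for root, if one doesn't already exist
test -f /root/.ssh/id_rsa || ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa

# Copy the public key to every node in the cluster
for host in pulpo-admin pulpo-mon01 pulpo-mds01 \
            pulpo-osd01 pulpo-osd02 pulpo-osd03; do
  ssh-copy-id root@${host}
done

# Verify password-less access
ssh root@pulpo-mon01 hostname
```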
Firewall
1) FirewallD is installed by default with CentOS 7 on every Pulpos node;
2) We’ve used a simple Ansible playbook to bind all 1GbE and 40GbE interfaces on every node to the trusted zone of FirewallD;
3) We’ll run mon daemons on pulpo-mon01, pulpo-mds01 & pulpo-admin; osd daemons on pulpo-osd01, pulpo-osd02 & pulpo-osd03; and the mds daemon on pulpo-mds01. Let’s open the required ports:
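FirewallD on CentOS 7 ships predefined ceph-mon (6789/tcp, the Monitor port) and ceph (6800-7300/tcp, the range used by OSD and MDS daemons) services, which can be used to open the required ports. A sketch, assuming the public-facing interfaces sit in the public zone:

```shell
# On the Monitor nodes (pulpo-mon01, pulpo-mds01 & pulpo-admin):
# the ceph-mon service opens 6789/tcp
firewall-cmd --zone=public --add-service=ceph-mon --permanent

# On the OSD and MDS nodes: the ceph service opens 6800-7300/tcp
firewall-cmd --zone=public --add-service=ceph --permanent

# Reload to apply the permanent rules to the running configuration
firewall-cmd --reload
firewall-cmd --zone=public --list-services
```

If an older FirewallD lacks these predefined services, the equivalent ports can be opened directly with, e.g., `firewall-cmd --zone=public --add-port=6789/tcp --permanent`.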
SELinux
SELinux is doubly disabled, with both a kernel parameter selinux=0 and the configuration file /etc/selinux/config.
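Both settings can be verified from a shell on each node (a quick sanity check; the expected values reflect the disabled configuration described above):

```shell
# Runtime status: "Disabled" when SELinux is off
getenforce

# Persistent setting in the configuration file
grep '^SELINUX=' /etc/selinux/config

# Confirm the kernel was booted with selinux=0
grep -o 'selinux=0' /proc/cmdline
```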
IPv6
Ceph doesn’t support dual-stack yet! So we’ve only configured IPv4 addresses on the network interfaces, even though we do have both IPv4 and IPv6 addresses available for all 10G public interfaces.
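The IPv4-only choice can also be made explicit in ceph.conf. This is a hypothetical fragment rather than our deployed configuration; ms_bind_ipv6 defaults to false, so the line is redundant but documents the intent:

```ini
[global]
# Bind Ceph daemons to IPv4 only (the default); do not use IPv6 addresses
ms bind ipv6 = false
```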