In this post I describe how we set up a home-brew environment for automated installation of CentOS 7 on the nodes of our Ceph storage cluster, Pulpos. For larger deployments, one might want to look into a more sophisticated provisioning system such as Cobbler.
4) Copy initrd.img and vmlinuz from the CentOS 7 local mirror:
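Something along these lines, assuming the local mirror is served from /var/www/html/centos and the boot files go into a centos7 subdirectory of the TFTP tree (both paths are assumptions; adjust to the actual layout):

```bash
# Copy the PXE boot kernel and initrd from the local CentOS 7 mirror into the TFTP tree
mkdir -p /tftpboot/centos7
cp /var/www/html/centos/7/os/x86_64/images/pxeboot/vmlinuz    /tftpboot/centos7/
cp /var/www/html/centos/7/os/x86_64/images/pxeboot/initrd.img /tftpboot/centos7/
```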
5) Edit /usr/lib/systemd/system/tftp.service so that we’ll use /tftpboot (instead of the default /var/lib/tftpboot) as the TFTP directory:
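On CentOS 7 the stock unit file launches in.tftpd with `-s /var/lib/tftpboot`, so only the ExecStart line needs to change, roughly:

```
[Service]
# changed from: ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
ExecStart=/usr/sbin/in.tftpd -s /tftpboot
StandardInput=socket
```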
6) Enable and start TFTP (Note: because we’ve made changes to the unit file for tftp.service, we need to reload systemd manager configuration with systemctl daemon-reload):
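For example:

```bash
systemctl daemon-reload
systemctl enable tftp
systemctl start tftp
```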
7) Test TFTP locally:
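One simple way to test is to install the tftp client and fetch one of the boot files back from the local server (the centos7/ path below matches the subdirectory assumed earlier):

```bash
yum -y install tftp
cd /tmp
tftp localhost -c get centos7/vmlinuz vmlinuz
ls -l vmlinuz
```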
So it works!
PXELINUX
1) Create the directory /tftpboot/pxelinux.cfg where PXELINUX configuration files reside:
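For example:

```bash
mkdir -p /tftpboot/pxelinux.cfg
```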
2) Create a local boot configuration file (/tftpboot/pxelinux.cfg/localboot) with the following content:
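A minimal localboot configuration typically looks like this (LOCALBOOT 0 tells PXELINUX to boot from the local disk):

```
default local
prompt 0
timeout 0

label local
  localboot 0
```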
3) Make localboot the default boot configuration:
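One way to do this is to symlink (or copy) the file to `default`, which PXELINUX falls back to when no host- or IP-specific configuration file matches:

```bash
cd /tftpboot/pxelinux.cfg
ln -sf localboot default
```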
4) Create a PXELINUX configuration file for each IP address in the control subnet 192.168.1.0/24. For example, the private IP address of pulpo-dtn is 192.168.1.2, so its PXELINUX configuration filename is C0A80102 (the uppercase hexadecimal representation of the IP address); and the content of /tftpboot/pxelinux.cfg/C0A80102 is (see RHEL7 Boot Options):
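The hexadecimal filename can be computed with `gethostip -x 192.168.1.2` (gethostip ships with the syslinux package). A sketch of what such a configuration might contain, assuming the mirror and kickstart files are served over HTTP from 192.168.1.1 (the server address, kernel/initrd paths, and timeout are assumptions):

```
default CentOS-7
prompt 0
timeout 5

label CentOS-7
  kernel centos7/vmlinuz
  append initrd=centos7/initrd.img ip=dhcp inst.repo=http://192.168.1.1/centos/7/os/x86_64 inst.ks=http://192.168.1.1/centos/ks/dtn.cfg
```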
Kickstart Installation
Lastly, we create a kickstart configuration file for each node. These configuration files are served by the Apache HTTP server. For example, the kickstart configuration file for pulpo-dtn is /var/www/html/centos/ks/dtn.cfg, with the following content:
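The actual file is site-specific; a minimal sketch of what a CentOS 7 kickstart for pulpo-dtn might look like (the mirror URL, network device, timezone, password hash, and disk name are all assumptions):

```
install
url --url=http://192.168.1.1/centos/7/os/x86_64
text
lang en_US.UTF-8
keyboard us
timezone America/Los_Angeles --utc
rootpw --iscrypted <password-hash>
network --bootproto=dhcp --device=em1 --onboot=yes
zerombr
clearpart --all --initlabel --drives=sda
autopart
reboot

%packages
@core
%end
```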
NOTES
1) pulpo-dtn has a single SSD. On dual-SSD nodes such as pulpo-mon01 and pulpo-mds01, we create software RAID 1 partitions on the 2 SSDs:
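The corresponding kickstart partitioning stanza might look roughly like this (device names, partition sizes, and mount points are assumptions):

```
zerombr
clearpart --all --initlabel --drives=sda,sdb
part raid.01 --size=1024  --ondisk=sda --asprimary
part raid.02 --size=1024  --ondisk=sdb --asprimary
part raid.11 --size=32768 --ondisk=sda
part raid.12 --size=32768 --ondisk=sdb
part raid.21 --size=1 --grow --ondisk=sda
part raid.22 --size=1 --grow --ondisk=sdb
raid /boot --level=1 --device=md0 --fstype=xfs  raid.01 raid.02
raid swap  --level=1 --device=md1 --fstype=swap raid.11 raid.12
raid /     --level=1 --device=md2 --fstype=xfs  raid.21 raid.22
```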
2) Each of the 3 OSD nodes has two 1TB SATA SSDs (which are the boot drives), twelve 8TB SATA HDDs, and two 1.2TB PCIe SSDs. Most of the time, the twelve 8TB SATA HDDs show up as sda - sdl, and the two 1TB SATA SSDs as sdm & sdn, respectively. However, device names do change from time to time! To avoid accidentally installing the OS onto some of the HDDs, one can either pull out all the HDD drive bays during network installation, or leave the disk partitioning information blank in the kickstart configuration file and manually configure the partitions during installation.
I took the latter approach. I didn't physically go to the Data Center to configure the partitions; instead, I used Supermicro's GUI tool IPMIView to redirect the KVM console to my laptop and did the configuration there.
3) Somehow, during the network installation, the Anaconda installer would send a BPDU packet to the top-of-the-rack Brocade switch, causing the ports to be err-disabled! Our network engineers had to disable BPDU guard on those ports!