In this post, we document how we aggregated two 1GbE network interfaces into a single logical bonded interface on Hydra, our 4-GPU workstation. We use mode 6 (balance-alb) of the Linux Ethernet bonding driver, because UCSC does not support Link Aggregation for servers hosted outside the data center.
0) The two 1GbE network interfaces are eno1 and eno2. Before changing anything, it is prudent to back up the old configurations: /etc/sysconfig/network-scripts/ifcfg-eno1 and /etc/sysconfig/network-scripts/ifcfg-eno2.
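For example (the backup file names and location are our choice):

```shell
# Copy the existing per-interface configs somewhere safe before editing them
cp /etc/sysconfig/network-scripts/ifcfg-eno1 ~/ifcfg-eno1.bak
cp /etc/sysconfig/network-scripts/ifcfg-eno2 ~/ifcfg-eno2.bak
```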
1) The RHEL 7 Networking Guide says the bonding module is not loaded by default in RHEL/CentOS 7 and needs to be loaded manually:
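Something along these lines (run as root):

```shell
# Load the bonding kernel module
modprobe bonding
# Verify that it is now loaded
lsmod | grep bonding
```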
However, I found this unnecessary: I did not do it, and everything worked fine (the module appears to be loaded automatically when the bond interface comes up).
2) Create configuration file /etc/sysconfig/network-scripts/ifcfg-bond0 for the Channel Bonding Interface bond0:
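A minimal configuration along these lines should work (the IP address, prefix, gateway, and DNS values below are placeholders for Hydra's actual static settings):

```
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
# Placeholder addressing -- substitute the host's real static settings
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
DNS1=192.0.2.1
# mode 6 = balance-alb; miimon sets the link-monitoring interval in ms
BONDING_OPTS="mode=balance-alb miimon=100"
```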
In this case we use mode 6 (balance-alb), which does not require any specific configuration of the switch.
3a) Edit configuration file /etc/sysconfig/network-scripts/ifcfg-eno1 for SLAVE interface eno1:
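A minimal slave configuration looks like this (all IP settings move to bond0, so the slave carries none):

```
DEVICE=eno1
NAME=eno1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
# Enslave this NIC to the bond; no IP configuration here
MASTER=bond0
SLAVE=yes
```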
3b) Edit configuration file /etc/sysconfig/network-scripts/ifcfg-eno2 for SLAVE interface eno2:
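The file for eno2 is identical apart from the device name:

```
DEVICE=eno2
NAME=eno2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```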
4) Activate the Channel Bond:
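One way to do this in a single chained command is sketched below (the exact chain is an assumption; on RHEL/CentOS 7, ifup on the bond also brings up its enslaved interfaces):

```shell
# Tear down the physical interfaces and bring up the bond in one shot,
# then print the bond state to confirm (run as root)
ifdown eno1 ; ifdown eno2 ; ifup bond0 ; cat /proc/net/bonding/bond0
```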
Note that we were operating in-band, via an SSH session over eno1, so we chained the commands together. And it worked! My SSH connection was not even dropped!