There are many situations in which you might want to configure network bonding (channel bonding) on a server. In most cases it is done for network card redundancy: you bond two network cards together in an “active/backup” configuration, and if one of the cards fails, all of the traffic is instantly switched to the other, still-working network card.
You can also bond two network cards together to get more network throughput. For that you configure the bonding in “load-balancing” mode, which keeps both network cards active at the same time and can double the throughput to and from your server. But to successfully run a “load-balancing” bonding configuration, your networking infrastructure (routers, switches, …) MUST ALSO be configured for port load balancing.
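Before you begin, it can be useful to confirm that the bonding kernel module is actually available on your system. This is just an optional sanity check, not part of the configuration itself:
[root@foo ~]# modprobe bonding
[root@foo ~]# modinfo bonding | grep ^version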
Let’s start our Configure Network Bonding on CentOS guide!
1. Create bonding configuration file
Create a new file called bonding.conf in the /etc/modprobe.d directory (full path – /etc/modprobe.d/bonding.conf) and insert the following lines.
- Active backup configuration (one active network card)
alias bond0 bonding
options bond0 miimon=80 mode=1
- Balance-tlb configuration (adaptive transmit load balancing; see the mode reference below)
alias bond0 bonding
options bond0 miimon=80 mode=5
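For reference, the mode= number selects the bonding policy. These are the mode numbers defined by the Linux bonding driver:
mode=0  balance-rr (round-robin)
mode=1  active-backup
mode=2  balance-xor
mode=3  broadcast
mode=4  802.3ad (LACP, needs switch support)
mode=5  balance-tlb (adaptive transmit load balancing)
mode=6  balance-alb (adaptive load balancing)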
2. Create new network interface
Create bond0 configuration file (full path – /etc/sysconfig/network-scripts/ifcfg-bond0) and insert the following lines.
DEVICE=bond0
IPADDR=your_ip_address
NETMASK=network_mask
GATEWAY=gateway_address
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
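As an illustration only, here is the same file filled in with placeholder addresses. On CentOS 6 and newer the network scripts also accept a BONDING_OPTS line in this file, which lets you keep the driver options together with the interface configuration instead of (or in addition to) the /etc/modprobe.d file. The values below are examples, not settings to copy verbatim:
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=80"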
3. Edit network configuration
Change the contents of the existing ifcfg-eth0 and ifcfg-eth1 configuration files (full path – /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/sysconfig/network-scripts/ifcfg-eth1) and insert the following lines.
- /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes
- /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes
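Optionally, you can also pin each slave configuration to the physical card's MAC address with a HWADDR line, so the kernel cannot swap eth0 and eth1 between the cards after a reboot. Use the MAC addresses reported for your own hardware (the value below is just the sample address from the step 4 output):
HWADDR=08:00:27:6b:06:7e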
4. Restart network
Restart networking and check the status of bonding.
[root@foo ~]# service network restart
[root@foo ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 80
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:6b:06:7e
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:57:b2:a9
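You can also verify that the IP address is now configured on the bond interface itself rather than on the individual slaves:
[root@foo ~]# ip addr show bond0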
5. Test bonding
You can test the bonding by taking the active network card down (ifdown eth0) and checking that the network connection keeps working; a quick check sequence is sketched below.
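For example, a simple failover check (assuming eth0 is the currently active slave, as in the output above) is to take it down, confirm that the "Currently Active Slave" reported in /proc/net/bonding/bond0 has switched to eth1 while a ping to your gateway keeps running, and then bring eth0 back up:
[root@foo ~]# ifdown eth0
[root@foo ~]# grep "Currently Active Slave" /proc/net/bonding/bond0
[root@foo ~]# ifup eth0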