CentOS 6 - NIC redundancy with bonding

To provide NIC redundancy, I tried bonding in active-backup mode.

Environment
  • CentOS 6.3 x86_64 on VMware
    • Two NICs connected to the same segment, prepared in advance
Procedure

1. Preparation

(1) Stop NetworkManager
# service NetworkManager stop
Stopping NetworkManager daemon:                            [  OK  ]
(2) Disable NetworkManager
# chkconfig NetworkManager off
# chkconfig --list | grep NetworkManager
NetworkManager  0:off 1:off 2:off 3:off 4:off 5:off 6:off
* Confirm that all run levels show off
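
An extra check worth doing here (not shown above): since NetworkManager is disabled, bond0 will be brought up by the legacy network service, so that service must be enabled for run levels 2-5; if it is not, enable it with the second command.
# chkconfig --list network
# chkconfig network on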

2. Define the NIC bonding

(1) Create /etc/modprobe.d/bonding.conf
# cat /etc/modprobe.d/bonding.conf 
alias bond0 bonding
options bonding mode=1 miimon=100
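
Here mode=1 selects active-backup and miimon=100 tells the driver to check link status every 100 ms via MII. As a side note (not part of the procedure), the parameters the bonding module accepts can be listed with modinfo:
# modinfo bonding | grep parm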
(2) Edit /etc/sysconfig/network-scripts/ifcfg-eth0
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="none"
ONBOOT=no
HWADDR=00:0C:29:87:23:5D
USERCTL=no
MASTER=bond0
SLAVE=yes
(3) Create /etc/sysconfig/network-scripts/ifcfg-eth1
# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="none"
ONBOOT=no
HWADDR=00:0C:29:87:23:67
USERCTL=no
MASTER=bond0
SLAVE=yes
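
The bond interface itself also needs a configuration file, which is not shown here. Based on the address that appears in the verification output below (192.168.11.6/24), a minimal sketch of ifcfg-bond0 would look roughly like this (static address assumed, no gateway shown):
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.11.6
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
On RHEL/CentOS 6 the bonding options can alternatively be placed in this file as BONDING_OPTS="mode=1 miimon=100" instead of in /etc/modprobe.d.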

3. Enable the configuration

# service network restart
...
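
Restarting the network service loads the bonding module and brings up bond0 with eth0 and eth1 enslaved. If it does not come up, two quick things to check (not shown in the output above) are whether the module got loaded and whether bond0 exists:
# lsmod | grep bonding
# ip link show bond0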

4. Verification

(1) Check the state with the ip command
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:87:23:5d brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:87:23:5d brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:0c:29:87:23:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.6/24 brd 192.168.11.255 scope global bond0
    inet6 fe80::20c:29ff:fe87:235d/64 scope link 
       valid_lft forever preferred_lft forever
Note that eth0, eth1, and bond0 all report the same MAC address (00:0c:29:87:23:5d): in active-backup mode the bonding driver presents a single MAC on every slave, while the permanent hardware addresses remain visible in /proc/net/bonding.
(2) Check the state via /proc/net/bonding
# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:23:5d
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:23:67
Slave queue ID: 0
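
The same information is also exposed through sysfs, which can be handier for scripting. As an extra check (not part of the steps above), the mode, the currently active slave, and the slave list can be read directly:
# cat /sys/class/net/bond0/bonding/mode
# cat /sys/class/net/bond0/bonding/active_slave
# cat /sys/class/net/bond0/bonding/slaves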

(3) Ping an external host
# ping 192.168.11.1
PING 192.168.11.1 (192.168.11.1) 56(84) bytes of data.
64 bytes from 192.168.11.1: icmp_seq=1 ttl=64 time=5.97 ms
64 bytes from 192.168.11.1: icmp_seq=2 ttl=64 time=1.29 ms
64 bytes from 192.168.11.1: icmp_seq=3 ttl=64 time=1.79 ms
64 bytes from 192.168.11.1: icmp_seq=4 ttl=64 time=1.04 ms
64 bytes from 192.168.11.1: icmp_seq=5 ttl=64 time=0.909 ms
64 bytes from 192.168.11.1: icmp_seq=6 ttl=64 time=0.979 ms
^C
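
Finally, a rough sketch of a failover test (not performed above, so the exact behavior is an assumption): while the ping is running, take the active slave's link down, for example by disconnecting the vNIC in the VMware settings or by downing the interface, and the ping should continue with eth1 taking over as the active slave.
# ip link set eth0 down
# grep "Currently Active Slave" /proc/net/bonding/bond0
# ip link set eth0 up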

