teaming

Bonding and teaming are two implementations of the same concept with a number of differences:

teaming is a more recent implementation that uses a small kernel driver to manage the packet flow and performs most tasks in user space. The communication between user-space applications and the kernel is done through APIs. This modular design makes teaming easily extensible and gives full runtime control to user space.

• teaming works in a lockless manner (unlike bonding, which uses rwlocks), which gives it lower overhead and thus higher performance.

• teaming allows NS/NA (Neighbour Solicitation / Neighbour Advertisement) IPv6 link monitoring, monitoring of multiple links at once, as well as setting separate port priorities and stickiness.
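Port priorities and stickiness are set per port through TEAM_PORT_CONFIG. As a quick sketch (prio and sticky are documented teamd.conf keys; the values here are made up), we can build and validate such a fragment before pasting it into an ifcfg file:

```shell
# Hypothetical per-port settings: prio and sticky are documented teamd.conf
# keys (a higher prio makes the port preferred; sticky keeps the active port
# selected even when a higher-priority port comes back up). Values made up.
cat > /tmp/team_port_config.json <<'EOF'
{ "prio" : 10, "sticky" : true }
EOF
# Sanity-check the fragment before pasting it into TEAM_PORT_CONFIG:
python3 -m json.tool /tmp/team_port_config.json
```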

The teaming packages might not be installed by default, so we may have to install them with dnf:

# dnf -y install teamd

In teaming, the logic and work are split between the team daemon, teamd, and a set of separate units of code called runners. These runners implement the logic specific to the various load-sharing and backup methods.

There are 4 runners whose concept is the same as in bonding:

• round-robin (aka balance-rr in bonding)

• active-backup

• broadcast

• lacp (aka 802.3ad in bonding)

And there's a 5th one unique to teaming:

• loadbalance (Tx load balancing based on BPF info)

In addition to the 5 runners above, we can choose one of the following 3 link monitoring tools:

• ethtool → this is the default tool to monitor link state changes

• arp_ping → can be used to monitor MAC addresses in the local network using ARP packets

• nsna_ping → can be used to monitor IPv6 neighbour interfaces using NS/NA packets
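As a sketch of how a runner and a link watcher combine, here is a hypothetical TEAM_CONFIG value pairing activebackup with arp_ping (interval, source_host and target_host are standard teamd.conf options; the addresses are placeholders):

```shell
# Sketch of a TEAM_CONFIG value pairing the activebackup runner with the
# arp_ping link watcher. interval, source_host and target_host are standard
# teamd.conf options; the addresses below are placeholders for illustration.
cat > /tmp/team_config.json <<'EOF'
{
  "runner"     : { "name" : "activebackup" },
  "link_watch" : { "name" : "arp_ping",
                   "interval" : 100,
                   "source_host" : "192.168.122.30",
                   "target_host" : "192.168.122.1" }
}
EOF
# Validate the JSON before pasting it into an ifcfg file:
python3 -m json.tool /tmp/team_config.json
```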

As we said before, teaming is a more modern implementation of the bonding concept, so we might find ourselves needing to migrate from bonded to teamed channels. That migration is fairly easy with the bond2team tool:

# /usr/bin/bond2team --master bond0

The command above will convert the relevant /etc/sysconfig/network-scripts/ifcfg-* files from bond to team format. However, the new team will retain the name bond0! We can (and probably should) avoid that by adding the --rename option:

# /usr/bin/bond2team --master bond0 --rename team0
.
Resulted files:
.  /tmp/bond2team.PrLfcz/ifcfg-team0
.  /tmp/bond2team.PrLfcz/ifcfg-ens1
.  /tmp/bond2team.PrLfcz/ifcfg-ens2
.
# cat /tmp/bond2team.PrLfcz/ifcfg-team0
NAME=bond0
DEVICE=team0
TYPE=Bond
DEVICETYPE=Team

IPADDR=192.168.122.30
PREFIX=24
GATEWAY=192.168.122.1
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
TEAM_CONFIG='{ "runner" : { "name" : "roundrobin" }, "link_watch" : { "name" : "ethtool" } }'

.
# cat /tmp/bond2team.PrLfcz/ifcfg-ens1
DEVICE=ens1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
DEVICETYPE="TeamPort"
TEAM_MASTER="team0"
TEAM_PORT_CONFIG='{ "prio" : -100 }'
.
# cat /tmp/bond2team.PrLfcz/ifcfg-ens2
DEVICE=ens2
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
DEVICETYPE="TeamPort"
TEAM_MASTER="team0"
TEAM_PORT_CONFIG='{ "prio" : -100 }'

The files resulting from the execution of bond2team are ready to use as they are, but I would suggest some cosmetic changes...

• The parameter TYPE in ifcfg-team0 is ignored (DEVICETYPE is used instead), so we might as well remove it for clarity's sake.

• The files ifcfg-ens[12] lack the NAME parameter; it would be best to add it (e.g. NAME=team0-slave).
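Both tweaks can be scripted. The sketch below recreates (abridged) the converted files shown above in /tmp and applies the changes with sed; the paths and the extra NAME alignment on the master are just for illustration:

```shell
# Recreate (abridged) the files produced by bond2team above, in /tmp for the demo:
cat > /tmp/ifcfg-team0 <<'EOF'
NAME=bond0
DEVICE=team0
TYPE=Bond
DEVICETYPE=Team
EOF
cat > /tmp/ifcfg-ens1 <<'EOF'
DEVICE=ens1
TYPE=Ethernet
DEVICETYPE="TeamPort"
TEAM_MASTER="team0"
EOF
# 1) Drop the ignored TYPE line from the master (DEVICETYPE is what counts)
#    and, while we are at it, align NAME with the new device name:
sed -i -e '/^TYPE=/d' -e 's/^NAME=.*/NAME=team0/' /tmp/ifcfg-team0
# 2) Add a NAME parameter to the slave file if it lacks one:
grep -q '^NAME=' /tmp/ifcfg-ens1 || echo 'NAME=team0-slave' >> /tmp/ifcfg-ens1
cat /tmp/ifcfg-team0 /tmp/ifcfg-ens1
```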

We can use bond2team to translate BONDING_OPTS into the format used in teaming. For instance...

# bond2team --bonding_opts "mode=0 miimon=100"

... would give us this sample file ...

# cat /tmp/bond2team.AFg7pS/ifcfg
DEVICETYPE="Team"
TEAM_CONFIG='{ "runner" : { "name" : "roundrobin" }, "link_watch" : { "name" : "ethtool" } }'

... or ...

# bond2team --bonding_opts "mode=1 miimon=200"

... would give us ...

# cat /tmp/bond2team.sET1LT/ifcfg
DEVICETYPE="Team"
TEAM_CONFIG='{ "runner" : { "name" : "activebackup" }, "link_watch" : { "name" : "ethtool" } }'

We can use the same 4 methods to create teamed channels as with bonding: nmtui, nmcli, configuration files, and the NetworkManager GUI. However, we shall again focus on configuration files plus the CLI, for the same reasons as before.

Creating the configuration files for teamed channels is at first a bit more cumbersome than doing so for bonding, but to climb the learning curve quickly we can examine the copious sample files in:

# ls -l /usr/share/doc/teamd/
total 8
drwxr-xr-x. 2 root root 4096 Nov 15 22:29 example_configs
drwxr-xr-x. 5 root root 4096 Nov 15 22:29 example_ifcfgs
.
# ls -l /usr/share/doc/teamd/example_configs/
total 64
-rw-r--r--. 1 root root 305 May 20 2016 activebackup_arp_ping_1.conf
-rw-r--r--. 1 root root 465 May 20 2016 activebackup_arp_ping_2.conf
-rw-r--r--. 1 root root 194 May 20 2016 activebackup_ethtool_1.conf
-rw-r--r--. 1 root root 212 May 20 2016 activebackup_ethtool_2.conf
-rw-r--r--. 1 root root 241 May 20 2016 activebackup_ethtool_3.conf
-rw-r--r--. 1 root root 447 May 20 2016 activebackup_multi_lw_1.conf
-rw-r--r--. 1 root root 285 May 20 2016 activebackup_nsna_ping_1.conf
-rw-r--r--. 1 root root 318 May 20 2016 activebackup_tipc.conf
-rw-r--r--. 1 root root  96 May 20 2016 broadcast.conf
-rw-r--r--. 1 root root 209 May 20 2016 lacp_1.conf
-rw-r--r--. 1 root root  98 May 20 2016 loadbalance_1.conf
-rw-r--r--. 1 root root 140 May 20 2016 loadbalance_2.conf
-rw-r--r--. 1 root root 183 May 20 2016 loadbalance_3.conf
-rw-r--r--. 1 root root  93 May 20 2016 random.conf
-rw-r--r--. 1 root root 244 May 20 2016 roundrobin_2.conf
-rw-r--r--. 1 root root  97 May 20 2016 roundrobin.conf
.
# ls -l /usr/share/doc/teamd/example_ifcfgs/
total 12
drwxr-xr-x. 2 root root 4096 Nov 15 22:29 1
drwxr-xr-x. 2 root root 4096 Nov 15 22:29 2
drwxr-xr-x. 2 root root 4096 Nov 15 22:29 3
.
# ls -l /usr/share/doc/teamd/example_ifcfgs/*
/usr/share/doc/teamd/example_ifcfgs/1:
total 12
-rw-r--r--. 1 root root  73 May 20 2016 ifcfg-eth1
-rw-r--r--. 1 root root  73 May 20 2016 ifcfg-eth2
-rw-r--r--. 1 root root 157 May 20 2016 ifcfg-team_test0
.
/usr/share/doc/teamd/example_ifcfgs/2:
total 12
-rw-r--r--. 1 root root  74 May 20 2016 ifcfg-eth1
-rw-r--r--. 1 root root  74 May 20 2016 ifcfg-eth2
-rw-r--r--. 1 root root 257 May 20 2016 ifcfg-team_test0
.
/usr/share/doc/teamd/example_ifcfgs/3:
total 12
-rw-r--r--. 1 root root 123 May 20 2016 ifcfg-eth1
-rw-r--r--. 1 root root 107 May 20 2016 ifcfg-eth2
-rw-r--r--. 1 root root 293 May 20 2016 ifcfg-team_test0

In the example_configs directory we can see a number of JSON-formatted examples of different configurations: loadbalance, activebackup, random, broadcast, etc. The settings apply to the team master and (save for the "device" parameter) are meant to be included in the TEAM_CONFIG parameter.

We should be familiar with most of the parameters shown in those files, but we can always run man teamd.conf to get a full list with descriptions.

In the example_ifcfgs directories we find another set of sample files for masters and slaves. Most of the time it will suffice to pick the configuration closest to the desired one, copy it, and tweak a few obvious parameters.
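For instance, the "copy, paste and tweak" workflow can look like the sketch below; since the teamd package may not be at hand, it recreates a stand-in for the packaged roundrobin.conf instead of copying the real file:

```shell
# Stand-in for /usr/share/doc/teamd/example_configs/roundrobin.conf (the real
# file ships with the teamd package; this copy is only for illustration):
cat > /tmp/roundrobin.conf <<'EOF'
{
  "device" : "team0",
  "runner" : { "name" : "roundrobin" }
}
EOF
# Reuse the sample for a second team device by tweaking the obvious parameter:
sed 's/team0/team1/' /tmp/roundrobin.conf > /tmp/team1.conf
# Validate the result before handing it to teamd:
python3 -m json.tool /tmp/team1.conf
```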

Once we have created the master and slave configuration files according to our preferences, we can use the teamdctl command to monitor the status of the teamed channel.

# cd /etc/sysconfig/network-scripts
.
# ls -l ifcfg-*
-rw-r--r--. 1 root root 152 Feb 28 15:26 ifcfg-ens1
-rw-r--r--. 1 root root 152 Feb 28 15:26 ifcfg-ens2
-rw-r--r--. 1 root root 254 Aug 30 13:56 ifcfg-lo
-rw-r--r--. 1 root root 345 Feb 28 16:39 ifcfg-team0
.
# ifdown team0                   → if master is brought down, slaves also come down
.
# teamdctl team0 state
Device "team0" does not exist
.
# ifup team0                     →  if master comes up, slaves follow
.
# teamdctl team0 state
setup:
. runner: roundrobin
ports:
. ens1
.   link watches:
.     link summary: up
.     instance[link_watch_0]:
.       name: ethtool
.       link: up
.       down count: 0
. ens2
.   link watches:
.     link summary: up
.     instance[link_watch_0]:
.       name: ethtool
.       link: up
.       down count: 0
.
# ifdown ens2                     →  we  bring down one slave...
.
# teamdctl team0 state            → and the teamed channel keeps working...
setup:
. runner: roundrobin
ports:
. ens1
.   link watches:
.     link summary: up
.     instance[link_watch_0]:
.       name: ethtool
.       link: up
.       down count: 0
.
# ifdown ens1                      → we bring down the second and last slave...
.
# teamdctl team0 state             → and the master is up but with no slaves it cannot work
setup:
. runner: roundrobin

With the teamdctl command we can do other things such as...

# teamdctl team0 state dump               → dump the state of the whole team in JSON format
# teamdctl team0 config dump              → dump the configuration of the team master in JSON format
# teamdctl team0 port config dump ens1    → dump the configuration of port ens1 in JSON format
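Since the config dump returns the same JSON we put in TEAM_CONFIG, its output is easy to consume from scripts. The sketch below feeds a sample of that JSON (taken from the team built above) through python3 instead of querying a live team:

```shell
# Sample of what `teamdctl team0 config dump` returns for the team built above;
# on a live system we would pipe the real command output instead of this string.
CONFIG='{ "runner" : { "name" : "roundrobin" }, "link_watch" : { "name" : "ethtool" } }'
# Extract the runner name, e.g. to branch on it in a monitoring script:
echo "$CONFIG" | python3 -c 'import json, sys; print(json.load(sys.stdin)["runner"]["name"])'
```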

If we have the configuration file ready for a port, we can add it on the fly with:

# teamdctl team0 port add ens3
# teamdctl team0 state
setup:
. runner: roundrobin
ports:
. ens1
.   link watches:
.     link summary: up
.     instance[link_watch_0]:
.       name: ethtool
.       link: up
.       down count: 0
. ens2
.   link watches:
.     link summary: up
.     instance[link_watch_0]:
.       name: ethtool
.       link: up
.       down count: 0
. ens3
.   link watches:
.     link summary: up
.     instance[link_watch_0]:
.       name: ethtool
.       link: up
.       down count: 0

And we can also remove it:

# teamdctl team0 port remove ens3

There is also the teamnl binary, which we can use to:

# teamnl team0 ports                       → list ports in a network team
. 4: ens5: up 100Mbit FD
. 3: ens4: up 100Mbit FD
.
# teamnl team0 options                     → list team's option values
. queue_id (port:ens5) 0
. priority (port:ens5) -100
. user_linkup_enabled (port:ens5) false
. user_linkup (port:ens5) true
. enabled (port:ens5) true
. queue_id (port:ens4) 0
. priority (port:ens4) -100
. user_linkup_enabled (port:ens4) false
. user_linkup (port:ens4) true
. enabled (port:ens4) true
. mcast_rejoin_interval 0
. mcast_rejoin_count 0
. notify_peers_interval 0
. notify_peers_count 0
. mode roundrobin
.
# teamnl team0 getoption mode                       →  get value of specific option
roundrobin
.
# teamnl team0 setoption notify_peers_count 10      →  change value of specific option
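The output of teamnl team0 options is line-oriented and easy to filter. The sketch below replays an excerpt of the listing above through a heredoc, so the pipeline can be tried without a live team:

```shell
# Replay an excerpt of the `teamnl team0 options` listing shown above so the
# pipeline can be tried without a live team:
cat > /tmp/teamnl-options.txt <<'EOF'
priority (port:ens5) -100
priority (port:ens4) -100
mode roundrobin
EOF
# Extract the priority of each port:
awk '$1 == "priority" { print $2, $3 }' /tmp/teamnl-options.txt
```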

We can add multiple IP addresses the usual way in the ifcfg files (IPADDR0, IPADDR1, etc.), but nothing stops us from adding IPs on the fly to teamed channels...

# ip addr add 192.168.122.90/24 dev team0

... but obviously those won't survive a reboot. We can also use ip instead of ifup / ifdown to bring interfaces up and down:

# ip link set dev ens1 down
# teamdctl team0 state
setup:
. runner: roundrobin
ports:
. ens1
.   link watches:
.     link summary: down
.     instance[link_watch_0]:
.       name: ethtool
.       link: down
.       down count: 1
. ens2
.   link watches:
.     link summary: up
.     instance[link_watch_0]:
.       name: ethtool
.       link: up
.       down count: 0
# ip link set dev ens1 up

Finally, to close this up, we can add ports to the channel with ip, in a similar fashion to what we did before with teamdctl:

# ip link set dev ens3 master team0
