Docker network performance

Here are some results of testing the performance of different Docker inter-container communication (ICC) techniques.

Techniques I tested:

  1. iptables routing settings that give containers an externally visible IP address, as described here: blog.codeaholics.org/2013/giving-dockerlxc-containers-a-routable-ip-address/
  2. pipework (github.com/jpetazzo/pipework) – a tool for assigning external IP addresses to containers. It has two modes: macvlan interfaces and veth pairs. I used the second one – veth pairs.
  3. Docker link – a Docker feature for inter-container communication (ICC). It doesn't assign containers external IPs. docs.docker.com/userguide/dockerlinks/
  4. Open vSwitch (openvswitch.org). I used version 2.1.0 with kernel support.

Tested on a Dell PowerEdge D320 server with a Xeon 1.8 GHz CPU, 4 GB (1333 MHz) RDIMM, and a 7200 RPM SATA HDD.
OS: Ubuntu Server 12.04.4 LTS
Kernel: 3.8.0-39
Docker version: 0.11.

Network performance was tested with iperf, using the following client command:

  iperf -c $ServerIP -P 1 -i 1 -p 5001 -f g -t 5
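The post does not show the server-side command or how the clients are launched. A dry-run sketch of a driver script, under the assumption that each container simply runs a single iperf process (everything here is illustrative; commands are logged, not executed):

```shell
#!/bin/sh
# Dry-run sketch of a benchmark driver (hypothetical; the post does not
# publish its script). Commands are appended to a log instead of being
# executed, so the sequence can be inspected without iperf installed.
ServerIP=${ServerIP:-10.0.0.2}   # assumed address of the server container
NCLIENTS=${NCLIENTS:-4}
LOG=$(mktemp)

run() { echo "$@" >>"$LOG"; }    # swap the body for "$@" to actually run

# Server side, run once inside the server container:
run iperf -s -p 5001

# Client side: NCLIENTS parallel clients, one per container in setup 1.
i=1
while [ "$i" -le "$NCLIENTS" ]; do
    run iperf -c "$ServerIP" -P 1 -i 1 -p 5001 -f g -t 5 &
    i=$((i + 1))
done
wait

cat "$LOG"
```

Setting `DRY_RUN`-style logging aside, the client flags match the command above: one stream (`-P 1`), 1-second reports, port 5001, Gbit units, 5-second runs.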

Performance in Gbits/s, average for one container.

The following 3 setups were tested:

  1. One server container and multiple client containers with 1 iperf process in each container.
  2. Multiple servers and multiple clients. 1 iperf process in each container.
  3. One server container with one iperf server, multiple containers with multiple iperf clients in each container.

One server and multiple clients

ICC performance between one container with one iperf server and multiple containers with one iperf client each. Average performance per container in Gbits/s.

client containers    1     2     4     8    16    20
iptables          10.0   8.0   4.2   2.1   1.0   0.8
pipework          11.6   8.3   5.1   2.7   1.4   1.0
link              12.0   8.1   4.9   2.5   1.2   1.0
ovs               13.2  11.0   6.6   3.3   1.8   1.4

[Chart: 1 server – multiple clients]
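Since the table reports the per-container average, the aggregate bandwidth the host sustains can be recovered by multiplying by the number of clients. A quick check on the ovs row shows the total saturates in the high twenties of Gbits/s rather than growing with client count:

```python
# Aggregate throughput = per-container average * number of clients,
# computed from the ovs row of the table above.
clients = [1, 2, 4, 8, 16, 20]
ovs_per_container = [13.2, 11.0, 6.6, 3.3, 1.8, 1.4]  # Gbits/s

aggregate = [n * g for n, g in zip(clients, ovs_per_container)]
for n, total in zip(clients, aggregate):
    print(f"{n:2d} clients -> {total:.1f} Gbits/s aggregate")
```

The aggregate climbs from 13.2 to roughly 26–29 Gbits/s and then stays flat, which is why the per-container numbers fall almost inversely with the client count.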

Multiple servers and multiple clients

ICC performance between multiple containers, each running one iperf server, and multiple containers, each running one iperf client. Client number i connects to server number i mod n, where n is the number of servers.
Below are the average performance results in Gbits/s.
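The pairing rule can be pinned down as a one-liner (a hypothetical helper, just to make the indexing explicit):

```python
def server_for(client_id: int, n_servers: int) -> int:
    """Client i connects to server i mod n, as described above."""
    return client_id % n_servers

# e.g. 4 clients spread over 2 servers: clients 0 and 2 share server 0,
# clients 1 and 3 share server 1.
pairs = [(c, server_for(c, 2)) for c in range(4)]
print(pairs)  # [(0, 0), (1, 1), (2, 0), (3, 1)]
```

With equal numbers of clients and servers, as in the table below, the mapping is one-to-one.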

servers      1      2      4     20
clients      1      2      4     20
iptables  10.0    8.3    4.3    0.8
pipework  11.6   10.0    5.5    1.3
link      12.0    9.1    5.1    1.1
ovs       13.2   10.8    5.6    1.4

[Chart: multiple servers and multiple clients with 1 iperf process inside]

One server and multiple containers with multiple iperf clients inside

Average performance per container in Gbits/s.
Columns: number of containers x number of iperf clients in one container.

containers x clients   1x1    1x2    1x4   1x16    2x1    4x1   16x1    2x2    4x4  16x16
pipework             11.16   9.81   3.38   1.07   9.33   4.99   1.52   4.63   1.20   0.04
link                 12.10   7.70   3.30   0.80   8.20   4.90   1.20   4.30   1.10   0.00
ovs                  13.06  11.14   6.36   1.54  11.24   6.63   1.99   6.40   1.50   0.06

[Chart: one server, multiple containers with multiple iperf clients inside]

Running iperf clients in separate containers gives better performance than running the same number of clients in a single container (compare 1 x 4 with 4 x 1).
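That observation can be read straight off the pipework row of the table above (per-container averages, in Gbits/s):

```python
# Per-container averages from the pipework row of the table above.
one_container_four_clients = 3.38   # 1 x 4: all four clients share a container
four_containers_one_client = 4.99   # 4 x 1: one client per container

speedup = four_containers_one_client / one_container_four_clients
print(f"4 x 1 is about {speedup:.2f}x faster per container than 1 x 4")
# prints "4 x 1 is about 1.48x faster per container than 1 x 4"
```

The link and ovs rows show the same pattern (4.90 vs 3.30 and 6.63 vs 6.36 respectively).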

4 thoughts on “Docker network performance”

  1. The performance tests are quite interesting. I have recently started working with Docker and wanted to do some performance tests, particularly measuring jitter, for which I am using iperf, and I am having problems connecting the iperf server and client on two different machines. When I start the iperf server container I bind port 5001 of the host to port 5001 in the container. My iperf client on another host sends traffic to machine1's IP address at port 5001, but I get an error that it cannot connect to the server.
    Any help would be really appreciated.

    1. The most probable reason is a firewall on the machine with the server container blocking port 5001. Troubleshooting network problems is never easy, you know. If simple solutions don’t work, try using tcpdump or wireshark.

  2. On 29 Jun, 2014, at 10:09, Jérôme Petazzoni wrote:

    Hi Peter,
    This is an interesting benchmark!
    If I understand correctly, when you benchmarked “docker links”, you end up using the internal IP address + port, right?
    i.e. the client connects to e.g. 172.17.0.2:5000…?

    However I don’t understand exactly how you ran the other methods; could you elaborate a bit on that?

    Thanks a lot.

    1. Hi, Jerome,

      I am glad you find my benchmarking interesting. Let me explain how I connect the containers.
      In general, for every setup I have a script that starts some iperf server containers and assigns them fixed IPs, then starts some client containers with some number of iperf clients running inside and assigns these containers fixed IPs as well.
      A particular iperf server container is assigned to every client container, and the server’s IP is passed as a parameter when the client container starts. Thus, every iperf client has the IP of the iperf server it is going to connect to.

      The above setup is common to all four ICC methods. What differs is the way I assign IP addresses to the containers.

      As for the Docker link, you are right. A client is linked to a server at container start:
      docker run ... --link serverN:iserv ...,
      then a client connects to IP=$ISERV_PORT_5001_TCP_ADDR. serverN is the name of the container with iperf server number N.
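Spelled out end to end, the link-based pairing for server N might look like the sketch below. The image name, container names, and client wrapper are assumptions, not taken from the post; the commands are collected into a variable and printed rather than run:

```shell
#!/bin/sh
# Hedged sketch of the --link pairing described above; "iperf-image" and
# the container names are illustrative. Commands are printed, not executed.
N=1
cmds=$(cat <<EOF
docker run -d --name server$N iperf-image iperf -s -p 5001
docker run --link server$N:iserv iperf-image sh -c 'iperf -c \$ISERV_PORT_5001_TCP_ADDR -P 1 -i 1 -p 5001 -f g -t 5'
EOF
)
printf '%s\n' "$cmds"
```

The `--link serverN:iserv` option is what makes Docker inject the `ISERV_PORT_5001_TCP_ADDR` environment variable into the client container.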

      The iptables setup involves creating a macvlan bridge for every container, assigning it an IP with ip addr add $IP dev $br_name, and adding rules to iptables -t nat. I followed these setup instructions for fixed IP addresses.
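A sketch of what those steps might look like. The interface names, both addresses, and the exact NAT rules are all assumed here, not taken from the linked instructions; the commands are collected into a variable and printed rather than run:

```shell
#!/bin/sh
# Hedged sketch of the macvlan + iptables NAT steps described above.
# br_name, HOST_IF and both addresses are illustrative placeholders.
br_name=br-cont1
HOST_IF=eth0
EXT_IP=192.168.1.50      # externally visible address for the container
CONT_IP=172.17.0.2       # container's internal Docker address

cmds=$(cat <<EOF
ip link add $br_name link $HOST_IF type macvlan mode bridge
ip addr add $EXT_IP dev $br_name
ip link set $br_name up
iptables -t nat -A PREROUTING -d $EXT_IP -j DNAT --to-destination $CONT_IP
iptables -t nat -A POSTROUTING -s $CONT_IP -j SNAT --to-source $EXT_IP
EOF
)
printf '%s\n' "$cmds"
```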

      Pipework setup is easy. Thank you for a wonderful tool! To assign IP addresses to containers I use:
      pipework.sh br1 containerID IP
      Again, IPs are fixed.

      The OVS setup involves creating one OVS bridge on the host machine, adding one interface per container to the bridge, moving each interface into its container’s namespace, and assigning it a fixed IP address.
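A dry-run sketch of that OVS wiring for a single container. The bridge name, veth names, PID, and address are assumptions; the commands are collected into a variable and printed rather than run:

```shell
#!/bin/sh
# Hedged sketch of the OVS wiring described above; ovsbr0, the veth names,
# the container PID and the address are illustrative placeholders.
BR=ovsbr0
PID=12345                 # PID of the container's init process
CONT_IP=10.0.0.2/24

cmds=$(cat <<EOF
ovs-vsctl add-br $BR
ip link add veth0 type veth peer name veth1
ovs-vsctl add-port $BR veth0
ip link set veth0 up
ip link set veth1 netns $PID
nsenter -t $PID -n ip addr add $CONT_IP dev veth1
nsenter -t $PID -n ip link set veth1 up
EOF
)
printf '%s\n' "$cmds"
```

One end of each veth pair stays on the bridge; the other end becomes the container's network interface.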

      If you have any further questions, don’t hesitate to ask.

      Kind regards,
      Peter

Comments are closed.