
Docker ecosystem

This chart organises Docker-related tools by their functionality. The Docker ecosystem is ever-changing, and so is this chart. I plan to update it more or less regularly. Any suggestions on how to improve it are welcome.

[Chart: docker-ecosystem-8-6-1]
Link to a large PDF file.

Last update 2016/10/14

About classification

Service discovery

Tools for registering and looking up information about the services provided by applications running in containers (including multi-host applications).

Orchestration

Tools whose main purpose is managing multi-host, multi-container applications. They usually help manage multiple containers and the network connections between them.

Automation

Tools that help:
a. make containers easier to use,
b. give containers new features,
c. build a service powered by containers.

Monitoring

Tools for monitoring the resources used by containers, container health checks, and monitoring the in-container environment.

OS

Lightweight operating systems for running containers.

Networking

Tools for organising inter-container and host-container communications.

Data and File Systems

Tools for managing data in containers and tools that include or control Docker file system plugins.

Note: the tools' features presented on the chart are based on what is advertised on each tool's web site or on information provided by the tool's developers.

Connect VirtualBox VMs

Here is a way to set up networking in VirtualBox VMs so that the VMs can see each other and also the Internet.

For this experiment I used VirtualBox 4.3 on Mac OS X 10.9 and created two VMs with Ubuntu 14.04.

You will need two network adapters – one for communication between the VMs, and another for communication with the outside world. Set similar network settings on both VMs:

Adapter 1

Attached to: NAT

[Screenshot: Adapter 1 – Attached to NAT]

You can also set up Port Forwarding here to be able to access the VM from the host. Here I have opened port 22 for SSH access, so I will be able to connect with “ssh -p 2200 user@localhost” from the host.

[Screenshot: Port Forwarding rule]
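The same rule can also be added from the host’s command line with VBoxManage (a sketch; the VM name “ubuntu-vm-1” is a placeholder):

 # the VM must be powered off when changing its settings
 VBoxManage modifyvm "ubuntu-vm-1" --natpf1 "guestssh,tcp,,2200,,22"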

Adapter 2

Attached to: Internal Network

Name: network-name

The name can be anything, but it must be the same on both machines.

[Screenshot: Adapter 2 – Internal Network setting]
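Adapter 2 can likewise be configured with VBoxManage (again, the VM name is a placeholder):

 VBoxManage modifyvm "ubuntu-vm-1" --nic2 intnet --intnet2 "network-name"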

Now start your VMs and open a Terminal window. Check with the “ip addr s” command that you have both network interfaces: eth0 and eth1.

Assign a static IP to the interface of Adapter 2 (type “Internal Network”). For me it is eth1. The other interface should already have an IP address, so the “ip addr s” command will tell you which one you need.

I used the addresses 10.0.1.3 and 10.0.1.5. For the subnet mask I used 255.255.255.0 (CIDR /24), which means that all addresses 10.0.1.X belong to my virtual network.

Set the IP address in a terminal window (a minimal sketch using iproute2 – the address and the eth1 interface are the ones chosen above):
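 sudo ip addr add 10.0.1.3/24 dev eth1
 sudo ip link set eth1 up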

and on the other machine:
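 sudo ip addr add 10.0.1.5/24 dev eth1
 sudo ip link set eth1 up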

Test that your first machine is visible from the second:
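 ping -c 3 10.0.1.3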

Docker network performance

Here are some results from testing the performance of different Docker inter-container communication (ICC) techniques.

Techniques I tested:

iptables – routing settings that give containers an externally visible IP address, as described here: blog.codeaholics.org/2013/giving-dockerlxc-containers-a-routable-ip-address/
pipework – github.com/jpetazzo/pipework – a tool for assigning external IP addresses to containers. It has two modes: using a macvlan interface and using veth pairs. I used the second one, veth pairs (see the sketch after this list).
Docker link – a Docker feature for inter-container communication (ICC). It does not assign containers external IPs. docs.docker.com/userguide/dockerlinks/
Open vSwitch – openvswitch.org. I used version 2.1.0 with kernel support.
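For reference, rough usage sketches of two of these techniques (the bridge name br1, the $CONTAINER_ID variable, the 10.2.1.5/24 address and the iperf-image name are all placeholders, not taken from the original post):

 # pipework in veth mode: connect the container to bridge br1
 # through a veth pair and assign it the given address
 sudo pipework br1 $CONTAINER_ID 10.2.1.5/24

 # Docker link: the client reads the server address from the
 # environment variables generated for the "server" alias
 docker run -d --name iperf-server --expose 5001 iperf-image iperf -s -p 5001
 docker run --rm --link iperf-server:server iperf-image \
   sh -c 'iperf -c $SERVER_PORT_5001_TCP_ADDR -p 5001 -f g -t 5'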

Tested on a Dell PowerEdge D320 server with a Xeon 1.8 GHz CPU, 4 GB (1333 MHz) RDIMM and a 7200 RPM SATA HDD.
OS: Ubuntu Server 12.04.4 LTS
Kernel: 3.8.0-39
Docker version 0.11.

Network performance was tested with iperf with the following client command:
 iperf -c $ServerIP -P 1 -i 1 -p 5001 -f g -t 5
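Each server container presumably ran the matching iperf server command (an assumption – the post only shows the client side):
 iperf -s -p 5001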

Performance in Gbits/s, average for one container.

The following 3 setups were tested:

  1. One server container and multiple client containers with 1 iperf process in each container.
  2. Multiple servers and multiple clients. 1 iperf process in each container.
  3. One server container with one iperf server, multiple containers with multiple iperf clients in each container.

One server and multiple clients

ICC performance between one container with one iperf server and multiple containers with one iperf client each. Average performance per container in Gbits/s.

client containers    1     2     4     8    16    20
iptables           10.0   8.0   4.2   2.1   1.0   0.8
pipework           11.6   8.3   5.1   2.7   1.4   1.0
link               12.0   8.1   4.9   2.5   1.2   1.0
ovs                13.2  11.0   6.6   3.3   1.8   1.4

[Chart: 1 server – multiple clients]

Multiple servers and multiple clients

ICC between multiple containers, each with one iperf server inside, and multiple containers with one iperf client each. Client number i connects to server number i (mod n), where n is the number of servers.
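As an illustration, a minimal shell sketch of this client-to-server mapping (assuming the servers’ addresses are collected in a SERVER_IPS array and m is the number of clients; neither name comes from the original post):

 n=${#SERVER_IPS[@]}
 for i in $(seq 0 $((m - 1))); do
   # client i talks to server i mod n
   iperf -c ${SERVER_IPS[$((i % n))]} -P 1 -i 1 -p 5001 -f g -t 5 &
 done
 wait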
Below are the average performance results in Gbits/s.

servers      1     2     4    20
clients      1     2     4    20
iptables   10.0   8.3   4.3   0.8
pipework   11.6  10.0   5.5   1.3
link       12.0   9.1   5.1   1.1
ovs        13.2  10.8   5.6   1.4
[Chart: multiple servers and multiple clients, 1 iperf process in each container]


One server and multiple containers with multiple iperf clients inside

Average performance per container in Gbits/s.
Columns are labelled (number of containers) x (number of iperf clients in one container).

             1x1    1x2    1x4   1x16    2x1    4x1   16x1    2x2    4x4  16x16
pipework   11.16   9.81   3.38   1.07   9.33   4.99   1.52   4.63   1.20   0.04
link       12.10   7.70   3.30   0.80   8.20   4.90   1.20   4.30   1.10   0.00
ovs        13.06  11.14   6.36   1.54  11.24   6.63   1.99   6.40   1.50   0.06

[Chart: one server, multiple containers with multiple iperf clients inside]

Running iperf clients in different containers gives better performance than running the same number of clients in one container (compare the 1x4 and 4x1 columns).