All posts by Peter Bryzgalov

Cloud GPU providers comparison – more graphs

Continued from the previous post.

With the graphs below you can compare calculation time and cost for a fixed amount of calculations in Floating Point Operations (FLOPs***). Use the buttons above the graphs to set the amount of calculations and the number of virtual or bare-metal machines (called “nodes”) used for the calculations.

Important notice: we assume that a task can be run on multiple computers WITHOUT any slowdown. This means that on N machines the task will finish N times faster. This could be true, for instance, in the case of a hyperparameter search, where you have multiple independent tasks.
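Under this perfect-scaling assumption, the relationship between time, cost and the number of nodes can be sketched as follows. All numbers and the formulas here are illustrative, not the actual model behind the graphs:

```shell
# Illustrative cost model under perfect scaling (all values are made up):
#   time = total_flops / (nodes * flops_per_node)
#   cost = hours * price_per_node_hour * nodes
awk 'BEGIN {
  total_flops = 1.0e18     # total work: 1 exaFLOP (illustrative)
  node_flops  = 1.0e13     # per-node speed: 10 TFLOPS (illustrative)
  nodes       = 4
  price       = 0.9        # USD per node-hour (illustrative)
  t    = total_flops / (node_flops * nodes)   # seconds
  cost = (t / 3600) * price * nodes
  printf "time: %.0f s, cost: %.2f USD\n", t, cost
}'
# prints: time: 25000 s, cost: 25.00 USD
```

Note that under this model adding nodes shortens the time but leaves the cost unchanged: the total number of node-hours stays the same.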

Graphs are not rescaled when parameters change, so that point movements remain clearly visible. To scale a graph manually, use the “Autoscale” and “Reset axes” buttons that appear in its top right corner when you move the mouse cursor over it (on tablet devices, tap the graph).

Continue reading Cloud GPU providers comparison – more graphs

Cloud GPU providers comparison

There are plenty of cloud GPU offers from many providers. This post is here to help you compare offers in terms of cost*, GPU and CPU performance**, memory, etc. It has some interactive graphs for comparing offers and a table with details for each offer.

Please see my article on our STAIR laboratory web site about why I created these graphs and how to use them.

The “filter” charts below provide statistical information about the distribution of offers by various parameters, such as how many offers each provider has. These charts can also be used for filtering offers. Click on a value in any chart to filter out offers with different values. You can select multiple values on one or several charts. All graphs and the table below will then show data only for the selected offers.

Please note that only offers with GPUs are listed on this page. Some providers, like Google and Amazon, have too many offers to show them all here, so I picked only some representative ones.

Continue reading Cloud GPU providers comparison

Bash: Expanding variables and commands in text

Say you have a text file with variables or commands in it:

Store text file contents in a variable and expand variables and commands in the text with:

That’s it! You will see something like:

Note that without “echo EOF” bash will use the first line of the text as the limit string for the heredoc.
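The original code snippets did not survive in this post, so here is a minimal sketch of the technique; the file name and its contents are illustrative:

```shell
# Create a sample text file containing an unexpanded variable and an
# unexpanded arithmetic expression (file name and contents are illustrative):
cat > template.txt << 'TPL'
Hello, $NAME!
Two plus two is $((2 + 2)).
TPL

NAME=world

# Store the file contents in a variable, expanding variables and commands:
# the file contents are pasted into a heredoc, and eval expands them.
text=$(eval "cat << EOF
$(cat template.txt)
EOF
")

echo "$text"
# prints:
#   Hello, world!
#   Two plus two is 4.
```

The closing EOF line after the file contents is what terminates the heredoc; without it, bash would take the first line of the text as the limit string.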

Docker ecosystem

This chart depicts the structure of Docker-related tools in terms of their functionality. The Docker ecosystem is ever-changing, and so is this chart. I plan to update it more or less regularly. Any suggestions on how to improve it are welcome.

Link to a large PDF file.

Last update 2016/10/14

About classification

Service discovery

Tools for registering and searching information about services provided by applications running in containers (including multi-host applications).


Orchestration

Tools whose main purpose is managing multi-host, multi-container applications. They usually help manage multiple containers and the network connections between them.


Tools that help:
a. make containers easier to use,
b. give containers new features,
c. build a service powered by containers.


Monitoring

Tools for monitoring resources used by containers, container health checks, and monitoring the in-container environment.


Operating systems

Lightweight OSes for running containers.


Networking

Tools for organising inter-container and host-container communications.

Data and File Systems

Tools for managing data in containers and tools that include or control Docker file system plugins.

Note: Tools’ features presented on the chart are based on what is advertised on the tool web site or on information provided by the tool developers.

Connect VirtualBox VMs

Here is a way to set up networking in VirtualBox VMs so that the VMs can see each other and also the Internet.

For this experiment I used VirtualBox 4.3 on Mac OS X 10.9 and created two VMs with Ubuntu 14.04.

You will need two network adapters – one for communication between the VMs, and another for communication with the outside world. Set the same network settings on both VMs:

Adapter 1

Attached to : NAT


You can also set up Port Forwarding here to be able to access the VM from the host. Here I’ve opened port 22 for SSH access by forwarding host port 2200 to guest port 22, so I’ll be able to connect with “ssh -p 2200 user@localhost” from the host.
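The same forwarding rule can also be added from the host’s command line with VBoxManage; the VM name “VM1” here is illustrative:

```shell
# Forward host port 2200 to guest port 22 on the NAT adapter (adapter 1).
# Rule format: name,protocol,hostip,hostport,guestip,guestport
# (empty host/guest IP fields mean "any").
VBoxManage modifyvm "VM1" --natpf1 "ssh,tcp,,2200,,22"
```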


Adapter 2

Attached to: Internal Network

Name: network-name

The name can be anything, but must be the same on both machines.


Now start your VMs and open a terminal window. Check with the “ip addr s” command that you have these network interfaces: eth0 and eth1.

Assign a static IP to the interface of Adapter 2 (type “Internal Network”). For me it is eth1. The other interface should already have an IP address, so you can tell which one you need from the “ip addr s” output.

I used addresses from the 10.0.1.X range. For the subnet mask I used (CIDR 24), which means that all addresses 10.0.1.X belong to my virtual network.

Set IP address in terminal window:

and on the other machine:

Test that your first machine is visible from the second:
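The commands were lost from this post; here is a sketch of what they would look like, with addresses taken (illustratively) from the 10.0.1.X range mentioned above:

```shell
# On the first VM: assign a static address to the internal-network interface
sudo ip addr add dev eth1
sudo ip link set eth1 up

# On the second VM: same, with a different address in the same subnet
sudo ip addr add dev eth1
sudo ip link set eth1 up

# From the second VM, test that the first one is reachable:
ping -c 3
```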

Docker network performance

Here are some results of testing the performance of different Docker inter-container communication (ICC) techniques.

Techniques I tested:

iptables – routing settings that give containers an externally visible IP address, as described here:
pipework – a tool for assigning external IP addresses to containers. It has two modes: using a macvlan interface and using veth pairs. I used the second one – veth pairs.
Docker link – a Docker feature for inter-container communication (ICC). It doesn’t assign external IPs to containers.
Open vSwitch – used version 2.1.0 with kernel support.

Tested on a Dell Poweredge D320 server with a Xeon 1.8 GHz CPU, 4 GB (1333 MHz) RDIMM memory and a 7200 RPM SATA HDD.
OS: Ubuntu Server 12.04.4 LTS, kernel 3.8.0-39.
Docker version 0.11.

Network performance was tested with iperf using the following client command:
 iperf -c $ServerIP -P 1 -i 1 -p 5001 -f g -t 5
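The post only shows the client command; on the server side, iperf would presumably have been started with something like:

```shell
# Inside the server container: run iperf in server mode,
# listening on the same port 5001 the clients connect to
iperf -s -p 5001
```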

Performance in Gbit/s, average for one container.

The following 3 setups were tested:

  1. One server container and multiple client containers with 1 iperf process in each container.
  2. Multiple servers and multiple clients. 1 iperf process in each container.
  3. One server container with one iperf server, multiple containers with multiple iperf clients in each container.

One server and multiple clients

ICC performance between one container with one iperf server and multiple containers with one iperf client each. Average performance per container in Gbit/s.

client containers     1      2      4      8     16     20
iptables            10.0    8.0    4.2    2.1    1.0    0.8
pipework            11.6    8.3    5.1    2.7    1.4    1.0
link                12.0    8.1    4.9    2.5    1.2    1.0
ovs                 13.2   11.0    6.6    3.3    1.8    1.4


1 server – multiple clients

Multiple servers and multiple clients

ICC between multiple containers each with one iperf server inside, and multiple containers with one iperf client each. Client number i connects to the server number i (mod n), where n is the number of servers.
Below are the average performance results in Gbit/s.

servers      1      2      4     20
clients      1      2      4     20
iptables   10.0    8.3    4.3    0.8
pipework   11.6   10.0    5.5    1.3
link       12.0    9.1    5.1    1.1
ovs        13.2   10.8    5.6    1.4
Multiple servers and multiple clients with 1 iperf process inside


One server and multiple containers with multiple iperf clients inside

Average performance per container in Gbit/s.
Number of containers x number of iperf clients in one container

            1×1    1×2    1×4    1×16   2×1    4×1    16×1   2×2    4×4    16×16
pipework   11.16   9.81   3.38   1.07   9.33   4.99   1.52   4.63   1.20   0.04
link       12.10   7.70   3.30   0.80   8.20   4.90   1.20   4.30   1.10   0.00
ovs        13.06  11.14   6.36   1.54  11.24   6.63   1.99   6.40   1.50   0.06


Running iperf clients in different containers gives better performance than running the same number of clients in one container (compare 1×4 and 4×1).