Docker enables you to create containers for your program with all the libraries already installed.
This avoids reinstalling all the libraries (say mpich, fftw, …) for every user and on every new system.
The user just needs to pull the container image from a repository, for example nonlinearxwaves/base.
I write C++ scientific computing programs with mpich, fftw-mpi, and random number libraries (such as sprng5), which I need to run on both Windows and Linux systems. Docker greatly simplifies not only the deployment but also the development of the code.
nonlinearxwaves/base is an image with all of these libraries installed.
After installing Docker you run
docker login
Then you pull the docker image
docker pull nonlinearxwaves/base:0.1
You list the available images with
docker images -a
You identify the image id (in this example it is ec56f7250d5a)
REPOSITORY             TAG    IMAGE ID       CREATED        SIZE
nonlinearxwaves/base   0.1    ec56f7250d5a   42 hours ago   1.13GB
You run the image with (replace the image id with your own image id)
docker run -i -t ec56f7250d5a
You are then in a shell with all the libraries installed, and you may compile and run your MPI application in the usual way. In this image you will be the user “user”
user@2ff281ad4621:~$
The number 2ff281ad4621 is the id of the container that is now running (similar to a virtual machine)
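For example, a minimal MPI test program (here called hello.cpp, a file name chosen just for this illustration) can be compiled and launched with the usual mpich wrappers:

// hello.cpp - minimal MPI test program
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);               // start the MPI runtime
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); // id of this process
    MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes
    std::printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       // shut down the MPI runtime
    return 0;
}

Inside the container you compile and run it as

mpicxx hello.cpp -o hello
mpirun -np 4 ./hello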
This works on Windows and Linux (and also on Mac, but I have not tested it)
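For development it is also convenient to mount a host directory into the container, so that you edit the sources on the host and compile inside. A sketch, assuming your code lives in ~/code on the host (a hypothetical path):

docker run -i -t -v ~/code:/home/user/code ec56f7250d5a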
You may also create your own images with a Dockerfile
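As a minimal sketch (this is not the actual recipe of nonlinearxwaves/base, just an illustration of the idea), a Dockerfile for a similar image could look like:

# Dockerfile - minimal sketch of a scientific computing image
FROM ubuntu:20.04
# avoid interactive prompts during package installation
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential mpich libmpich-dev libfftw3-dev libfftw3-mpi-dev \
    && rm -rf /var/lib/apt/lists/*
# create the non-root user "user"
RUN useradd -m user
USER user
WORKDIR /home/user
CMD ["/bin/bash"]

You then build and tag it with, e.g., docker build -t myimage:0.1 . (myimage is a hypothetical name).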
Is Docker fast, or is it better not to use a container? We will test …
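A simple first check is to run the same program both natively and inside the container and compare the wall times, for example with the hello example from above:

time mpirun -np 4 ./hello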