Docker vs Singularity vs Shifter vs UGE Container Edition


Here is a comparison of HPCS Singularity vs NERSC Shifter. They both bring the benefits of containers to the HPC world and, as such, provide very similar features. The subtleties are in their implementation approaches; MPI may be the area with the biggest difference.
This is a work in progress and I update it frequently as I work with them and gain a better understanding. Yes, THEM, because I can't decide which tool is better for my problem at hand just from reading the docs. Please comment in the blogger comment section to help improve this doc. Many thanks!
(The working version of this file on github is here.)

The comparison below covers one criterion at a time, with one entry per tool: Docker, Singularity, Shifter, and UGE Container Edition.
Main problem being addressed
  • Docker: DevOps, microservices, enterprise applications.
  • Singularity: Application portability (single image file containing all dependencies); reproducibility; running cross-platform; support for legacy OS and apps.
  • Shifter: Utilize the large number of existing docker apps. Provides a way to run them in HPC after a conversion process. It also strips out all the requirements of root so that they are runnable as user processes.
  • UGE Container Ed.: Running docker containers in HPC, with UGE managing the docker daemon process (?).
Interaction w/ Docker
  • Singularity: Works completely independently of Docker. It does have the ability to import docker images, convert them to singularity images, or run docker containers directly. (See the sketch after this list.)
  • Shifter: The primary workflow is to pull a docker image and convert it into a shifter image. docker pull is used minimally, only in the Image Gateway.
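A minimal sketch of that docker interplay, assuming singularity 2.2's docker:// URI support:

    singularity create ubuntu.img
    singularity import ubuntu.img docker://ubuntu:latest   # convert a docker image into the singularity image
    singularity shell  docker://ubuntu:latest              # or run the docker image directly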
Possible Use Case
  • Docker: Microservices: do one small thing and do it well, often a daemon listening on a TCP port 24/7.
  • Singularity: A large-ish container with all dependencies built in, but targeting only one very specific application (e.g. a custom python program needing a specific version of Anaconda and a set of python libraries). Or a "gigantic" image containing the whole OS of a dying computer, running a mission-critical legacy app.
  • Shifter: Potentially docker-like, with many small containers strung together via command chaining (not sure if there is a way to configure dependencies/versions in the definition file).

User group of primary focus
  • Docker: Developers/DevOps.
  • Singularity: Scientific application users.
  • Shifter: Slurm admins, docker users.
  • UGE Container Ed.: UGE users.
Portability
  • Docker: Docker Hub.
  • Singularity: Single image file; containers purpose-built for singularity. Can import docker images. Working to accept Dockerfile or Rocket definition syntax as Singularity definitions. A hub of singularity repositories is provided by utilizing google drive file sharing.
  • Shifter: Image Gateway pulls from Docker Hub and automatically converts to a shifter image.
Version
  • Docker: 1.15-1.
  • Singularity: 2.2. Documentation is decent, with many community articles popping up as well.
  • Shifter: 16.08.3 (pre-release). Documentation is work needing progress :).
  • UGE Container Ed.: 8.
Host OS Requirement
  • Docker: Linux kernel 3.10+, e.g.: RHEL 7, Debian 8.0, Ubuntu 12.04, SUSE Linux Enterprise 12, others.
  • Singularity: RHEL 6, Ubuntu, ...
  • Shifter: Cray OS; Linux with kernel 2.6.25+, e.g. RHEL 6, ...
  • UGE Container Ed.: RHEL 7 or another newer version of Linux that supports Docker/cgroups.
Run a different OS inside the container?
  • Docker: Yes. docker pull centos/ubuntu/etc.
  • Singularity: Yes. Utilizes commands like debootstrap and febootstrap to bootstrap a new image.
  • Shifter: Yes.
Guiding principle
  • Singularity: Make the container as transparent as possible to the user. Run as much as possible as a normal user process, changing the environment only enough to provide portability of the application.
Scheduler
  • Docker: Docker Swarm.
  • Singularity: Agnostic; run singularity as a job.
  • Shifter: Can work as a stand-alone container environment. For scalability, scheduler integration currently exists for SLURM, via a SPANK plugin. (Should work with other schedulers.)
  • UGE Container Ed.: qsub -l to request a docker boolean resource. UGE will be able to assign different priorities to different containers.
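A hypothetical UGE submission against such a boolean resource (the complex name "docker" is an assumption; actual names are site-specific):

    qsub -l docker=1 myjob.sh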
Run on multiple nodes?
  • Docker: Swarm tool provided by Docker.
  • Singularity: It is the scheduler's job to spin up multiple instances on multiple nodes.
  • Shifter: Yes. salloc -N4 ... then srun ...
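A sketch of that multi-node shifter pattern, assuming the SLURM SPANK integration is installed and the image is already pulled:

    salloc -N 4 --image=docker:python:3.5
    srun shifter python -c 'print("hello world")'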
Process isolation via kernel namespaces?
  • Docker: Yes. PID 1 is the process executed by the container; it cannot see the host's processes.
  • Singularity: Yes, but the PID namespace is disabled by default. While inside a singularity shell, ps -ef shows all processes of the host, and kill on a host PID will work.
  • Shifter: Yes.
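A quick check of the PID namespace difference described above (image names are just examples; assumes ps is present inside each image):

    docker run centos:6 ps -ef               # PID 1 is the container's own process; host PIDs are not visible
    singularity exec centos7.img ps -ef      # host processes are visible (PID namespace off by default)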
Resource restriction via cgroups?
  • Docker: docker run -c ... -m ... --cpuset ...
  • Singularity: Not touched by Singularity; the scheduler/resource manager controls what is available to it.
  • Shifter: Lets SLURM manage cgroup restrictions.
  • UGE Container Ed.: cgroups_params in qconf -mconf.
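A small sketch of the docker-side flags above: 512 CPU shares, a 2 GB memory cap, and pinning to cores 0-3 (flag spellings vary across docker versions):

    docker run -c 512 -m 2g --cpuset 0-3 centos:6 hostname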
MPI
  • Docker: MP-what? :-)
  • Singularity: Built-in support for MPI (OpenMPI, MPICH, IntelMPI, MVAPICH). Once the app is built with the MPI libraries, execute mpirun as normal, replacing the usual binary with the single-file singularity app. While the app is running inside the singularity container, Process Management Interface (PMI) calls pass through the singularity launcher on to ORTED.
  • Shifter: Relies on the MPICH Application Binary Interface (ABI). Apps that use vanilla MPI compatible with MPICH should work. Site-specific MPI libraries need to be copied into the container at run time, and LD_LIBRARY_PATH needs to include, for example, /opt/udiImage/...
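A sketch of the singularity MPI workflow just described, where mpi_app.img is a hypothetical image containing an app built against a host-compatible MPI:

    mpirun -np 64 ./mpi_app.img                                     # the image's %runscript launches the app
    mpirun -np 64 singularity exec mpi_app.img /usr/bin/mpi_app     # or name the binary explicitly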
InfiniBand
  • Docker: Likely OK for IPoIB. Probably no support for RDS.
  • Singularity: Native access.
GPU
  • Docker: Your app should run headless; why are you trying to access the video card? :-)
  • Singularity: Native access, native speed. See NIH's page for setup.
  • Shifter: In progress. Reportedly getting near-native performance.
Network access
  • Docker: NAT (default).
  • Singularity: Transparent; accesses the network like any user process would.
  • Shifter: Transparent?
Host's device access
  • Singularity: /dev, /sys and /proc are bind-mounted into the container by default, so the host's /dev, /sys and /proc show up inside the container.
Host's file system access
  • Docker: docker run -v hostpath:containerpath bind-mounts to make the host's FS accessible to the container. The container can be modified and saved; root access is possible.
  • Singularity: A singularity app runs as a user process, so it has access to all of the host's file systems and devices that any user process has access to, including specially optimized FS like Lustre and GPFS. Singularity performs a bind mount between the host's mount point and the inside of the container: -B /opt:/mnt will bind-mount the host's /opt onto the container's /mnt. The container is writable if started with -w; root access is possible if singularity is run by root. (See the sketches after this list.)
  • Shifter: Start shifter with a volume mapping, done using a loop device.
  • UGE Container Ed.: qsub -xdv to map the host's FS into the container.
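Bind-mount sketches for the behaviors described above (paths and image names are arbitrary examples):

    docker run -v /data:/data centos:6 ls /data           # docker: explicit -v mapping
    singularity exec -B /opt:/mnt centos7.img ls /mnt     # singularity: host /opt appears at container /mnt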
Deployment
  • Docker: Single package install. Daemon service with root privilege. Use in a batch processing environment is left as an exercise to the reader.
  • Singularity: root to install singularity; sexec-setuid needs setuid root to work correctly. Single .rpm/.deb per host. No scheduler modification needed.
  • Shifter: Shifter needs to be installed on all HPC nodes; Docker is NOT needed on the HPC nodes. Needs an Image Gateway.
  • UGE Container Ed.: Upgrade to a new version of UGE. Head and compute nodes need to run a newer version of Linux that supports Docker (e.g. RHEL 7).
Container activation
  • Docker: Docker daemon running on each host.
  • Singularity: Loopback mount of the singularity image file to utilize the container. No daemon process.
  • Shifter: No docker daemon on the compute nodes; srun ... performs a loopback mount.
  • UGE Container Ed.: May employ a set of nodes with a running docker daemon; such nodes are flagged as providing a boolean resource for jobs to request. (?)
Container build-up
  • Docker:
    1. Docker spec file (Dockerfile).
    2. Interactively build the container, then use docker commit to save it as an image.
    3. Pull an existing image from a repository such as Docker Hub, Docker Store, or a private repository.
  • Singularity:
    1. Create a singularity .def file; package dependencies are automatically analyzed and incorporated into the image.
    2. Interactively expand an existing image by leveraging (sudo) singularity shell -w.
    3. Import/convert a docker image into a singularity image with singularity import.
    4. Share a pre-built singularity image file (a single sparse file, very compressible). Singularity Hub (coming soon? already exists?).
  • Shifter:
    1. Import/convert a docker container into a shifter container.
    2. Not sure if there is any provision to build a container from scratch, or to modify an existing container.
Security
  • Docker: Users running docker commands need to be in a special docker group, which grants elevated system access.
  • Singularity: No change in security paradigm. Users run singularity images/apps without special privileges.
  • Shifter: root to install the shifter binary. Needs special integration into the scheduler (or only on Cray, cuz they are special?). Users run shifter images/apps without special privileges.
Env
  • Docker: No host FS mounted/mapped by default. The container is writable, but content is not saved unless docker commit is invoked from outside the container (see the sketch after this list).
  • Singularity: The container inherits the env of the user invoking the singularity cmd. /bin/sudo, if installed, will have the setuid flag, but it is blocked inside a singularity container; root can be had via "sudo singularity shell centos7.img". /home, /var/tmp, /tmp, /root are bind-mounted from the host and fully writable; / is a read-only mount and the container is typically immutable (unless started with the -w flag).
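Persisting docker container changes from the outside, as noted above (the container id placeholder and image tag are hypothetical):

    docker ps -l                                 # find the id of the last-run container
    docker commit <container_id> myimage:snap    # save its current state as a new image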
User management
  • Docker: ?
  • Shifter: Yes; the container has sanitized passwd and group files.
Sample User Commands
  • Singularity:
    sudo singularity create ubuntu.img
    sudo singularity bootstrap ubuntu.img ubuntu.def
    singularity exec ubuntu.img bash
    ./ubuntu.img    # the %runscript section of the container will automatically be invoked
  • Shifter:
    module load shifter
    shifterimg pull docker:python:3.5                  # pull docker image
    shifterimg images                                  # list images
    shifter --image=python:3.5 python                  # execute a container app
    module load slurm
    sbatch --image=docker:python:3.5 shifter python    # run as job
Unix pipes
  • Docker: Output of the container is captured by a pipe, e.g.:
    docker run centos:6 ls /etc | wc
    However, docker won't take standard input, at least not as of 1.5-1. The following does not work as one might expect:
    echo 'cat("Hello world\n")' | docker run r-base:latest R --no-save
  • Singularity: Can daisy-chain singularity apps like regular unix commands, e.g.:
    echo "echo hello world" | singularity exec centos7.img bash
  • Shifter: Shifter will capture standard input as well, even inside an sbatch script:
    echo 'cat("Hello world\n")' | shifter R --no-save
Misc
  • Singularity: A singularity app can run outside HPC, without any job scheduler, so it can serve as a container for app portability outside the HPC world; and since a single file encompasses the whole container, this may be advantageous for sharing, portability and archival. Singularity recommends compiling all apps into the singularity container, so spec files need to be written; I hear they are looking to adopt the Docker and/or rkt spec file formats to minimize this burden. Docker files can be imported or even run directly, though given that docker operates on a different paradigm, the hit-and-miss outcome of such conversions is understandable. In future development, maybe NERSC's shifter could run singularity apps as well.
  • Shifter: A shifter container/app can run stand-alone, outside the HPC scheduler. This is certainly useful for testing and development; however, if one isn't trying to utilize containers in HPC, there may not be much point in converting docker images to shifter images. Shifter works by pulling from the large pool of existing docker containers. Docker isn't needed inside the HPC; a gateway location is used for Shifter to pull docker images and store the converted images in a location accessible cluster-wide, e.g. /scratch. See the shifter workflow below for more details.
Feel
  • Docker: High :)
  • Singularity: Building a singularity 2.2 image feels a lot like building a VM using a kickstart file.
Adopters
  • Docker: Spreading like wildfire among internet companies.
  • Singularity: UC Berkeley, Stanford, TACC, SDSC, GSI, HPC-UGent, Purdue, UFL, NIH, etc. The non-Cray community seems to be gravitating toward Singularity. Integrated into Qlustar and Bright.
  • Shifter: NERSC, VLSCI, CSCS, CERN, etc. The big physics number crunchers seem to be gravitating toward shifter, cuz they use Cray?


    Shifter Workflow

    As per CSCS/ETH slide share, based on Shifter 15.12.0.




    Example Commands in Docker

# get a container with anaconda python and jupyter notebook

docker run -i -t -p 8888:8888 continuumio/anaconda /bin/bash -c \
  "/opt/conda/bin/conda install jupyter -y --quiet && \
   mkdir /opt/notebooks && \
   /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser"
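A variant of the above (a sketch, not from the original): bind-mounting a host directory over /opt/notebooks, using the -v mapping described in the comparison, so the notebooks survive the container; the host path is arbitrary:

    docker run -i -t -p 8888:8888 -v /home/$USER/notebooks:/opt/notebooks \
      continuumio/anaconda /bin/bash -c \
      "/opt/conda/bin/conda install jupyter -y --quiet && \
       /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser"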
    

    Example Commands in Singularity

     
sudo  singularity create --size 2048 /opt/singularity_repo/ubuntu16.img
# Create an image file to host the content of the container.
# Think of it like creating the virtual hard drive for a VM.
# The image created should be a sparse file (ie with holes);
# however, while ext4/ubuntu seems to result in a compact file,
# on RHEL6/ext3 the img takes up its full nominal size.
    
    sudo -E singularity bootstrap /opt/singularity_repo/ubuntu16.img /opt/src/singularity/examples/ubuntu.def
    # "kickstart" build the container
    # If behind a proxy, sudo -E will preserve http_proxy env var 
    
singularity shell ubuntu.img
singularity exec  ubuntu.img bash  # slightly different from shell: PS1 is not changed, and .bash_profile is sourced here
    singularity exec  centos.img  rpm -qa
    # Note that uname and uptime will always report info of the host, not the container.
    
    singularity  exec  -w  centos.img  zsh     # -w for writable container, FS ACL still applies
                                               # centos.img needs to be writable by the user
    sudo singularity shell -w centos.img       # root can edit all contents inside container
    
    
# best to build up the container using a kickstart mentality.
# ie, to add more packages to the image, vi /opt/src/singularity/examples/ubuntu.def
# and re-run the bootstrap command.
# bootstrap on an existing image will build on top of it, not overwrite it/restart from scratch.
# a singularity .def file is kinda like a kickstart file.
# unix commands can be run in it, but if there is any error, the bootstrap process ends.
    
    
# pipes and redirects work as input and output for singularity container executions:
    
    echo "echo hello world" | singularity exec centos7.img bash
    singularity exec centos7.img echo 'echo hello world' | singularity exec centos7.img bash
    singularity exec centos7.img rpm -qa | singularity exec centos7.img wc -l
    
    singularity exec centos7.img python < ./myscript.py
    
    
    

    Example Singularity container definition file

    CentOS:
    # Copyright (c) 2015-2016, Gregory M. Kurtzer. All rights reserved.
    #
    # "Singularity" Copyright (c) 2016, The Regents of the University of California,
    # through Lawrence Berkeley National Laboratory (subject to receipt of any
    # required approvals from the U.S. Dept. of Energy).  All rights reserved.
    
    
    BootStrap: yum
    OSVersion: 7
    MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
    Include: yum
    
    # If you want the updates (available at the bootstrap date) to be installed
    # inside the container during the bootstrap instead of the General Availability
    # point release (7.x) then uncomment the following line
    #UpdateURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/updates/$basearch/
    
    
    %runscript
        echo "This is what happens when you run the container..."
    
    
    %post
        echo "Hello from inside the container"
        yum -y install vim-minimal
    
        # adding a number of rather useful packages
        yum -y install bash
        yum -y install zsh
        yum -y install environment-modules
        yum -y install which
        yum -y install less
        yum -y install sudo         # binary has setuid flag, but it is not honored inside singularity
        yum -y install wget
        yum -y install coreutils    # provide yes
        yum -y install bzip2        # anaconda extract
        yum -y install tar          # anaconda extract
        
        # bootstrap will terminate on first error, so be careful!
        test -d /etc/singularity || mkdir /etc/singularity
        touch                          /etc/singularity/singularity_bootstart.log
        echo '*** env ***'          >> /etc/singularity/singularity_bootstart.log
        env                         >> /etc/singularity/singularity_bootstart.log
    
# install anaconda python by downloading and executing the installer script.
# the test conditions are there so that subsequent singularity bootstraps (to expand the image) don't re-install anaconda
cd /opt
[[ -f Anaconda3-4.2.0-Linux-x86_64.sh ]] || wget https://repo.continuum.io/archive/Anaconda3-4.2.0-Linux-x86_64.sh
[[ -d /opt/anaconda3 ]] || bash Anaconda3-4.2.0-Linux-x86_64.sh -p /opt/anaconda3 -b     # -b = batch mode, accept license w/o user input
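    # (a sketch, not in the original def: put anaconda on PATH for container users;
    #  assumes singularity 2.x sources the image's /environment file at runtime)
    echo 'export PATH=/opt/anaconda3/bin:$PATH' >> /environment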
    
    
Ubuntu snippet.
I found that --force-yes is needed.
The debootstrap rpm must be installed on the CentOS build machine for this bootstrap to work (see the note after the snippet).
    %post
        echo "Hello from inside the container"
        sed -i 's/$/ universe/' /etc/apt/sources.list
        # when building from ubuntu, should avoid using --force-yes in apt-get
        # but when building from RHEL, had to use --force-yes for it to work (at least in RHEL6)
        apt-get -y --force-yes install vim
        apt-get -y --force-yes install ncurses-term
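To get debootstrap onto the CentOS build host (not inside the container), a sketch assuming it is available via EPEL:

    sudo yum -y install epel-release
    sudo yum -y install debootstrap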
    

    Additional singularity .def examples

    Readings for Containers in HPC

    Readings for Singularity

    Readings for Shifter

    Readings for UGE
