Creating and running software containers with Singularity

How to use Singularity!

This is an introductory workshop on Singularity. It was originally taught by Dave Godlove at the NIH HPC, but the content has since been adapted to a general audience. For more information about the topics covered here, see the following:

Hour 1 (Introduction and Installation)

What IS a software container anyway? (And what’s it good for?)

A container allows you to stick an application and all of its dependencies into a single package. This makes your application portable, shareable, and reproducible.

Containers foster portability and reproducibility because they package ALL of an application's dependencies… including its own tiny operating system!

This means your application won’t break when you port it to a new environment. Your app brings its environment with it.

Here are some examples of things you can do with containers:

- Package an analysis pipeline so that it runs on your laptop, in the cloud, and in an HPC environment, producing the same result everywhere.
- Publish a paper and include a link to a container with all of the data and software you used, so that others can easily reproduce your results.
- Install and run an application that requires a complicated stack of dependencies with a few keystrokes.
- Archive the exact software environment you used for a project so you can return to it later.

How do containers differ from virtual machines (VMs)?

Containers and VMs are both types of virtualization. But it’s important to understand the differences between the two and know when to use each.

Virtual Machines install every last bit of an operating system (OS) right down to the core software that allows the OS to control the hardware (called the kernel). This means that VMs:

- can run an OS (and kernel) completely different from the host's
- are heavyweight: they consume more disk space and are slower to start
- are strongly isolated from the host system

Containers share a kernel with the host OS. This means that containers:

- must be compatible with the host's kernel (you can't run a Windows container on a Linux host)
- are lightweight: small on disk and nearly instant to start
- are less isolated from the host than VMs

Because of their differences, VMs and containers serve different purposes and should be favored under different circumstances.


Docker is currently the most widely used container software. It has several strengths and weaknesses that make it a good choice for some projects but not for others.


Docker is built for running multiple containers on a single system and it allows containers to share common software features for efficiency. It also seeks to fully isolate each container from all other containers and from the host system.

Docker assumes that you will be a root user, or that it will be OK for you to elevate your privileges if you are not.



Docker shines for DevOps teams providing cloud-hosted micro-services to users.


Singularity is a relatively new container software originally developed by Greg Kurtzer while at Lawrence Berkeley National Laboratory. It was developed with security, scientific software, and HPC systems in mind.


Singularity assumes that each application will have its own container. It does not seek to fully isolate containers from one another or the host system. Singularity assumes that you will have a build system where you are the root user, but that you will also have a production system where you may or may not be the root user.



Singularity shines for scientific software running in an HPC environment. We will use it for the remainder of the class.


Here we will install the latest tagged release from GitHub. If you prefer to install a different version or to install Singularity in a different location, see these Singularity docs.

We’re going to compile Singularity from source code. First we’ll need to make sure we have some development tools installed so that we can do that. On Ubuntu, run these commands to make sure you have all the necessary packages installed.

$ sudo apt-get update

$ sudo apt-get -y install python dh-autoreconf build-essential debootstrap libarchive-dev

On CentOS, these commands should get you up to speed.

$ sudo yum update 

$ sudo yum groupinstall 'Development Tools'

$ sudo yum install wget epel-release libarchive-devel

$ sudo yum install debootstrap.noarch

Next we’ll download a compressed archive of the source code (using the wget command). Then we’ll extract the source code from the archive (with the tar command).

$ wget

$ tar xzvf singularity-2.5.2.tar.gz

Finally it’s time to build and install!

$ cd singularity-2.5.2

$ ./configure --prefix=/usr/local

$ make 

$ sudo make install

If you want support for tab completion of Singularity commands, you need to source the appropriate file and add it to the bash completion directory in /etc so that it will be sourced automatically when you start another shell.

$ . etc/bash_completion.d/singularity

$ sudo cp etc/bash_completion.d/singularity /etc/bash_completion.d/
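A completion file like the one above is ordinary shell code: it defines a function that fills the COMPREPLY array with candidates and registers it with the complete builtin. A toy version for a hypothetical mytool command (not part of Singularity) shows the mechanism:

```shell
# Minimal bash completion, structurally similar to what a real
# completion file provides. Bash calls _mytool with the word being
# completed in COMP_WORDS[COMP_CWORD]; candidates go in COMPREPLY.
_mytool() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "build shell exec run help" -- "$cur") )
}

# Tell bash to use _mytool when completing arguments to `mytool`.
complete -F _mytool mytool
```

Sourcing the real file does exactly this for the singularity command, which is why it must be sourced in every new shell (hence the copy into /etc/bash_completion.d/).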

If everything went according to plan, you now have a working installation of Singularity. You can test your installation like so:

$ singularity run docker://godlovedc/lolcow

You should see something like the following.

Docker image path:
Cache folder set to /home/ubuntu/.singularity/docker
[6/6] |===================================| 100.0%
Creating container runtime...
/ Excellent day for putting Slinkies on \
\ an escalator.                         /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Your cow will likely say something different (and be more colorful), but as long as you see a cow your installation is working properly.

This command downloads and runs a container from Docker Hub. During the next hour we will learn how to build a similar container from scratch.

Hour 2 (Building and Running Containers)

In the second hour we will build the preceding container from scratch.

Simply typing singularity will give you a summary of all the commands you can use. Typing singularity help <command> will give you more detailed information about running an individual command.

Building a basic container

To build a Singularity container, you must use the build command. The build command installs an OS, sets up your container’s environment, and installs the apps you need. To use the build command, we need a recipe file (also called a definition file). A Singularity recipe file is a set of instructions telling Singularity what software to install in the container.

The Singularity source code contains several example definition files in the /examples subdirectory. Let’s copy the ubuntu example and inspect it.

Note: You need to build containers on a file system where the sudo command can write files as root. This may not work in an HPC cluster setting if your home directory resides on a shared file server. If that’s the case you may have to cd to a local hard disk such as /tmp.

$ cd ~ # just in case you were not there already

$ mkdir lolcow

$ cp singularity-2.5.2/examples/ubuntu/Singularity lolcow/

$ cd lolcow

$ nano Singularity

It should look something like this:

BootStrap: debootstrap
OSVersion: trusty
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%runscript
    echo "This is what happens when you run the container..."

%post
    echo "Hello from inside the container"
    sed -i 's/$/ universe/' /etc/apt/sources.list
    apt-get update
    apt-get -y install vim
    apt-get clean

See the Singularity docs for an explanation of each of these sections.

Now let’s use this recipe file as a starting point to build our lolcow container. Note that the build command requires sudo privileges when used in combination with a recipe file.

$ sudo singularity build --sandbox lolcow Singularity

The --sandbox option in the command above tells Singularity that we want to build a special type of container for development purposes.

Singularity can build containers in several different file formats. The default is to build a squashfs image. The squashfs format is compressed and immutable, making it a good choice for reproducible, production-grade containers.

But if you want to shell into a container and tinker with it (like we will do here), you should build a sandbox (which is really just a directory). This is great when you are still developing your container and don’t yet know what should be included in the recipe file.

When your build finishes, you will have a basic Ubuntu container saved in a local directory called lolcow.

Using shell to explore and modify containers

Now let’s enter our new container and look around.

$ singularity shell lolcow

Depending on the environment on your host system you may see your prompt change. Let’s look at what OS is running inside the container.

Singularity lolcow:~> cat /etc/os-release
VERSION="14.04, Trusty Tahr"
PRETTY_NAME="Ubuntu 14.04 LTS"

No matter what OS is running on your host, your container is running Ubuntu 14.04!

Let’s try a few more commands:

Singularity lolcow:~> whoami

Singularity lolcow:~> hostname

This is one of the core features of Singularity that makes it so attractive from a security standpoint. The user remains the same inside and outside of the container.

Let’s try installing some software. I used the programs fortune, cowsay, and lolcat to produce the container that we saw in the first demo.

Singularity lolcow:~> sudo apt-get update && sudo apt-get -y install fortune cowsay lolcat
bash: sudo: command not found


The shell complains that it can’t find the sudo command. But even if you try to install sudo or change to root using su, you will find it impossible to elevate your privileges within the container.

Once again, this is an important concept in Singularity. If you enter a container without root privileges, you are unable to obtain root privileges within the container. This insurance against privilege escalation is the reason that you will find Singularity installed in so many HPC environments.

Let’s exit the container and re-enter as root.

Singularity lolcow:~> exit

$ sudo singularity shell --writable lolcow

Now we are the root user inside the container. Note also the addition of the --writable option. This option allows us to modify the container. The changes will actually be saved into the container and will persist across uses.

Let’s try installing some software again.

Singularity lolcow:~> apt-get update && apt-get -y install fortune cowsay lolcat

Now you should see the programs successfully installed. Let’s try running the demo in this new container.

Singularity lolcow:~> fortune | cowsay | lolcat
bash: lolcat: command not found
bash: cowsay: command not found
bash: fortune: command not found

Drat! It looks like the programs were not added to our $PATH. Let’s add them and try again.

Singularity lolcow:~> export PATH=/usr/games:$PATH

Singularity lolcow:~> fortune | cowsay | lolcat
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
/ Keep emotionally active. Cater to your \
\ favorite neurosis.                     /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

We’re making progress, but you may receive a warning from perl like the one above. Before we tackle that, let’s think some more about the $PATH variable.

We changed our path in this session, but those changes will disappear as soon as we exit the container just like they will when you exit any other shell. To make the changes permanent we should add them to the definition file and re-bootstrap the container. We’ll do that in a minute.
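For reference, persistent environment variables belong in the %environment section of a Singularity 2.x recipe; the contents of that section are sourced at runtime before the container process starts. A minimal fragment:

```
%environment
    export PATH=/usr/games:$PATH
    export LC_ALL=C
```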

Now back to our perl warning. Perl is complaining that the locale is not set properly. Basically, perl wants to know where you are and what sort of language encoding it should use. Should you encounter this warning you can probably fix it with the locale-gen command or by setting LC_ALL=C. Here we’ll just set the environment variable.

Singularity lolcow:~> export LC_ALL=C

Singularity lolcow:~> fortune | cowsay | lolcat
| GREAT ANSWERS: #19 A: To be or not to   |
\ be. Q: What is the square root of 4b^2? /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Great! Things are working properly now.

Although it is fine to shell into your Singularity container and make changes while you are debugging, you ultimately want all of those changes reflected in your recipe file. Otherwise, if you need to reproduce the container from scratch, you will have forgotten the changes you made.

Let’s update our definition file with the changes we made to this container.

Singularity lolcow:~> exit

$ nano Singularity

Here is what our updated definition file should look like.

BootStrap: debootstrap
OSVersion: trusty
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%runscript
    echo "This is what happens when you run the container..."

%post
    echo "Hello from inside the container"
    sed -i 's/$/ universe/' /etc/apt/sources.list
    apt-get update
    apt-get -y install vim fortune cowsay lolcat

%environment
    export PATH=/usr/games:$PATH
    export LC_ALL=C

Let’s rebuild the container with the new definition file.

$ sudo singularity build lolcow.simg Singularity

Note that we changed the name of the container. By omitting the --sandbox option, we are building our container in the standard Singularity squashfs file format. We are denoting the file format with the (optional) .simg extension. A squashfs file is compressed and immutable, making it a good choice for a production environment.

Singularity stores a lot of useful metadata. For instance, if you want to see the recipe file that was used to create the container you can use the inspect command like so:

$ singularity inspect --deffile lolcow.simg
BootStrap: debootstrap
OSVersion: trusty
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%runscript
    echo "This is what happens when you run the container..."

%post
    echo "Hello from inside the container"
    sed -i 's/$/ universe/' /etc/apt/sources.list
    apt-get update
    apt-get -y install vim fortune cowsay lolcat

%environment
    export PATH=/usr/games:$PATH
    export LC_ALL=C

Blurring the line between the container and the host system

Singularity does not try to isolate your container completely from the host system. Unlike some other container runtimes, Singularity favors integration over isolation. This allows you to do some interesting things.

Using the exec command, we can run commands within the container from the host system.

$ singularity exec lolcow.simg cowsay 'How did you get out of the container?'
< How did you get out of the container? >
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

In this example, Singularity entered the container, ran the cowsay command, displayed the standard output on our host system terminal, and then exited.

You can also use pipes and redirection to blur the lines between the container and the host system.

$ singularity exec lolcow.simg cowsay moo > cowsaid

$ cat cowsaid
< moo >
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

We created a file called cowsaid in the current working directory with the output of a command that was executed within the container.

We can also pipe things into the container.

$ cat cowsaid | singularity exec lolcow.simg cowsay -n
/  _____                       \
| < moo >                      |
|  -----                       |
|         \   ^__^             |
|          \  (oo)\_______     |
|             (__)\       )\/\ |
|                 ||----w |    |
\                 ||     ||    /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

We’ve created a meta-cow (a cow that talks about cows). 😜

Hour 3 (Advanced Singularity Usage)

Making containerized apps behave more like normal apps

In the third hour we are going to consider an extended example describing a containerized application that takes a file as input, analyzes the data in the file, and produces another file as output. This is obviously a very common situation.

Let’s imagine that we want to use the cowsay program in our lolcow.simg to “analyze data”. We should give our container an input file, it should reformat it (in the form of a cow speaking), and it should dump the output into another file.

Here’s an example. First I’ll make some “data”.

$ echo "The grass is always greener over the septic tank" > input

Now I’ll “analyze” the “data”.

$ cat input | singularity exec lolcow.simg cowsay > output

The “analyzed data” is saved in a file called output.

$ cat output
/ The grass is always greener over the \
\ septic tank                          /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

This works… but the syntax is ugly and difficult to remember.

Singularity supports a neat trick for making a container function as though it were an executable. We need to create a runscript inside the container. It turns out that our Singularity recipe file already contains a runscript. It causes our container to print a helpful message.

$ ./lolcow.simg
This is what happens when you run the container...

Let’s rewrite this runscript in the definition file and rebuild our container so that it does something more useful.

BootStrap: debootstrap
OSVersion: trusty
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%runscript
    if [ $# -ne 2 ]; then
        echo "Please provide an input and an output file."
        exit 1
    fi

    cat $1 | cowsay > $2

%post
    echo "Hello from inside the container"
    sed -i 's/$/ universe/' /etc/apt/sources.list
    apt-get update
    apt-get -y install vim fortune cowsay lolcat

%environment
    export PATH=/usr/games:$PATH
    export LC_ALL=C
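The argument check in the runscript above is plain Bourne shell, so you can sanity-check its logic on the host before rebuilding. Here is a stand-alone sketch in which cat stands in for cowsay (which may not be installed on your host):

```shell
# Stand-in for the runscript: same two-argument check, but plain `cat`
# replaces `cowsay` so it runs anywhere.
run() {
    if [ $# -ne 2 ]; then
        echo "Please provide an input and an output file."
        return 1
    fi
    cat "$1" > "$2"
}

echo moo > input.txt
run input.txt output.txt      # copies input.txt to output.txt
run || true                   # no arguments: prints the usage message
```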

Now we must rebuild our container to install the new runscript.

$ sudo singularity build --force lolcow.simg Singularity

Note the --force option which ensures our previous container is completely overwritten.

After rebuilding our container, we can call lolcow.simg as though it were an executable and simply give it two arguments: one for input and one for output.

$ ./lolcow.simg
Please provide an input and an output file.

$ ./lolcow.simg input output2

$ cat output2
/ The grass is always greener over the \
\ septic tank                          /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Bind mounting host system directories into a container

It’s possible to create and modify files on the host system from within the container. In fact, that’s exactly what we did in the previous example when we created output files in our home directory.

Let’s be more explicit. Consider this example.

$ singularity shell lolcow.simg

Singularity lolcow.simg:~> echo wutini > ~/jawa.sez

Singularity lolcow.simg:~> cat ~/jawa.sez

Singularity lolcow.simg:~> exit

$ cat ~/jawa.sez

Here we shelled into a container and created a file with some text in our home directory. Even after we exited the container, the file still existed. How did this work?

There are several special directories that Singularity bind mounts into your container by default. These include your home directory, the current working directory, /tmp, /proc, /sys, and /dev.

You can specify other directories to bind using the --bind option or the environment variable $SINGULARITY_BINDPATH.
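$SINGULARITY_BINDPATH uses the same src:dest syntax as --bind, and multiple pairs can be separated by commas (a bare path with no colon is mounted at the same location inside the container). The parsing below is our own illustration in plain bash of how such a spec maps paths, not Singularity code:

```shell
# Hypothetical bind spec: one src:dest pair plus one bare path.
spec="/data:/mnt,/var/tmp"

# Split on commas, then on the first colon of each pair.
IFS=',' read -ra pairs <<< "$spec"
for p in "${pairs[@]}"; do
    src=${p%%:*}
    dest=${p#*:}
    [ "$dest" = "$p" ] && dest=$src   # no destination given: same path inside
    echo "host $src -> container $dest"
done
```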

Let’s say we want to use our lolcow.simg container to “analyze data” and save results in a different directory. For this example, we first need to create a new directory with some data on our host system.

$ sudo mkdir /data

$ sudo chown $USER:$USER /data

$ echo 'I am your father' > /data/vader.sez

We also need a directory within our container where we can bind mount the host system /data directory. We could create another directory in the %post section of our recipe file and rebuild the container, but our container already has a directory called /mnt that we can use for this example.

Now let’s see how bind mounts work. First, let’s list the contents of /mnt within the container without bind mounting /data to it.

$ singularity exec lolcow.simg ls -l /mnt
total 0

The /mnt directory within the container is empty. Now let’s repeat the same command but using the --bind option to bind mount /data into the container.

$ singularity exec --bind /data:/mnt lolcow.simg ls -l /mnt
total 4
-rw-rw-r-- 1 ubuntu ubuntu 17 Jun  7 20:57 vader.sez

Now the /mnt directory in the container is bind mounted to the /data directory on the host system and we can see its contents.

Now what about our earlier example in which we used a runscript to run our container as though it were an executable? The singularity run command accepts the --bind option and can execute our runscript like so.

$ singularity run --bind /data:/mnt lolcow.simg /mnt/vader.sez /mnt/output3

$ cat /data/output3
< I am your father >
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

But that’s a cumbersome command. Instead, we could set the variable $SINGULARITY_BINDPATH and then use our container as before.

$ export SINGULARITY_BINDPATH=/data:/mnt

$ ./lolcow.simg /mnt/output3 /mnt/metacow2

$ ls -l /data/
total 12
-rw-rw-r-- 1 ubuntu ubuntu 809 Jun  7 21:07 metacow2
-rw-rw-r-- 1 ubuntu ubuntu 184 Jun  7 21:06 output3
-rw-rw-r-- 1 ubuntu ubuntu  17 Jun  7 20:57 vader.sez

$ cat /data/metacow2
/  __________________ < I am your father \
| >                                      |
|                                        |
| ------------------                     |
|                                        |
| \ ^__^                                 |
|                                        |
| \ (oo)\_______                         |
|                                        |
| (__)\ )\/\                             |
|                                        |
| ||----w |                              |
|                                        |
\ || ||                                  /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

For a lot more info on how to bind mount host directories to your container, check out the NIH HPC Binding external directories section.

Singularity Instances

Up to now all of our examples have run Singularity containers in the foreground. But what if you want to run a service like a web server or a database in a Singularity container in the background?

lolcow (useless) example

In Singularity v2.4+, you can use the instance command group to start and control container instances that run in the background. To demonstrate, let’s start an instance of our lolcow.simg container running in the background.

$ singularity instance start lolcow.simg cow1

We can use the instance list command to show the instances that are currently running.

$ singularity instance list
DAEMON NAME      PID      CONTAINER IMAGE
cow1             10794    /home/dave/lolcow.simg

We can connect to running instances using the instance:// URI like so:

$ singularity shell instance://cow1
Singularity: Invoking an interactive shell within container...

Singularity lolcow.simg:~> ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
dave         1     0  0 19:05 ?        00:00:00 singularity-instance: dave [cow1]
dave         3     0  0 19:06 pts/0    00:00:00 /bin/bash --norc
dave         4     3  0 19:06 pts/0    00:00:00 ps -ef

Singularity lolcow.simg:~> exit

Note that we’ve entered a new PID namespace, so that the singularity-instance process has PID number 1.

You can start multiple instances running in the background, as long as you give them unique names.

$ singularity instance start lolcow.simg cow2

$ singularity instance start lolcow.simg cow3

$ singularity instance list
DAEMON NAME      PID      CONTAINER IMAGE
cow1             10794    /home/dave/lolcow.simg
cow2             10855    /home/dave/lolcow.simg
cow3             10885    /home/dave/lolcow.simg

You can stop individual instances using their unique names or stop all instances with the --all option.

$ singularity instance stop cow1
Stopping cow1 instance of /home/dave/lolcow.simg (PID=10794)

$ singularity instance stop --all
Stopping cow2 instance of /home/dave/lolcow.simg (PID=10855)
Stopping cow3 instance of /home/dave/lolcow.simg (PID=10885)

nginx (useful) example

These examples are not very useful because lolcow.simg doesn’t run any services. Let’s extend the example to something useful by running a local nginx web server in the background. This command will download the official nginx image from Docker Hub and start it in a background instance called “web”. (The commands need to be executed as root so that nginx can run with the privileges it needs.)

$ sudo singularity instance start docker://nginx web
Docker image path:
Cache folder set to /root/.singularity/docker
[3/3] |===================================| 100.0%
Creating container runtime...

$ sudo singularity instance list
DAEMON NAME      PID      CONTAINER IMAGE
web              15379    /tmp/.singularity-runtime.MBzI4Hus/nginx

Now to start nginx running in the instance called web.

$ sudo singularity exec instance://web nginx

Now we have an nginx web server running on our localhost. We can verify that it is running with curl.

$ curl localhost
- - [02/Nov/2017:19:20:39 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" "-"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

When finished, don’t forget to stop all running instances like so:

$ sudo singularity instance stop --all

Singularity Hub and Docker Hub

We’ve spent a lot of time on building and using your own containers so that you understand how Singularity works. But there’s an easier way! Docker Hub hosts over 100,000 pre-built, ready-to-use containers, and Singularity makes it easy to use them.

When we first installed Singularity we tested the installation by running a container from Docker Hub like so.

$ singularity run docker://godlovedc/lolcow

Instead of running this container from Docker Hub, we could also just copy it to our local system with the build command.

$ sudo singularity build lolcow-from-docker.simg docker://godlovedc/lolcow

You can build and host your own images on Docker Hub (using Docker), or you can download and run images that others have built.

$ singularity shell docker://tensorflow/tensorflow

Singularity tensorflow:~> python

>>> import tensorflow as tf

>>> quit()

Singularity tensorflow:~> exit

You can also build, pull, and exec containers from Singularity Hub.

$ singularity exec shub://GodloveD/lolcow cowsay moo

You can even use images on Docker Hub and Singularity Hub as a starting point for your own images. Singularity recipe files allow you to specify a Docker Hub or Singularity Hub registry to bootstrap from, and you can use the %post section to modify the container to your liking.

For example, to start from a Docker Hub image of Ubuntu in your recipe, you could do something like this:

BootStrap: docker
From: ubuntu

%runscript
    echo "This is what happens when you run the container..."

%post
    echo "Hello from inside the container"
    echo "Install additional software here"

Or to start from a Singularity Hub version of BusyBox you could do something like this:

BootStrap: shub
From: GodloveD/busybox

%runscript
    echo "This is what happens when you run the container..."

%post
    echo "Hello from inside the container"
    echo "Install additional software here"

Both Docker Hub and Singularity Hub link to your GitHub account. New container builds are automatically triggered every time you push changes to a Docker file or a Singularity recipe file in a linked repository.

Returning to our previous example of the lolcow container, it’s actually somewhat easier to build this container using Docker Hub as a starting point.

BootStrap: docker
From: ubuntu:16.04

%post
    apt-get -y update
    apt-get -y install fortune cowsay lolcat

%environment
    export LC_ALL=C
    export PATH=/usr/games:$PATH

%runscript
    fortune | cowsay | lolcat

Since we are using a pre-built container from Docker Hub, we don’t have to worry about configuring apt to work with the universe repositories.

Miscellaneous Topics

Details of the Singularity security model

Disclaimer: Many of the things in this section are remarkably bad practices. They are just here to illustrate examples. Don’t try them outside of this context. Maybe don’t actually try them inside of this context either.

Let’s say that you want to run a command as root inside of a Singularity container.

$ singularity exec docker://ubuntu sudo whoami
/.singularity.d/actions/exec: 9: exec: sudo: not found

Whoops! The sudo program is not installed. Let’s build the container as a sandbox and install it. Obviously, we must do so as root:

$ sudo singularity build --sandbox backdoor.simg docker://ubuntu

$ sudo singularity shell --writable backdoor.simg

Singularity backdoor.simg:~> apt-get update && apt-get install -y sudo

Singularity backdoor.simg:~> which sudo

And we’ll also make it so that our normal user can run all commands via sudo without a password. (Note this renders the container non-portable but pretty much all of this is a terrible idea anyway.)

Singularity backdoor.simg:~> echo "student ALL=(ALL) NOPASSWD:ALL" >>/etc/sudoers

Singularity backdoor.simg:~> exit

Now we should be able to use sudo to escalate privs inside of a container as the student user.


$ singularity exec backdoor.simg sudo whoami
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

Wrong. :-(

The error should give us a hint. But let’s try a few more things before we give up.

An easier solution might just be to create a root password. By default, Ubuntu does not have a root password, but once we add one we should be able to become root with su.

$ sudo singularity shell --writable backdoor.simg

Singularity backdoor.simg:~> passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Singularity backdoor.simg:~> exit

I just set it to password so it would be easy to remember. Now I’m ready to shell in and use su to become root.

$ singularity shell backdoor.simg
Singularity: Invoking an interactive shell within container...

Singularity backdoor.simg:~> su -
su: Authentication failure

Singularity backdoor.simg:~> whoami

Singularity backdoor.simg:~> exit

I entered the correct password, but I still am not root!

One more try before we give up. Let’s write our own sudo program! (Disclaimer: This is a terrible idea.)

For this to work, we need to write a little C.

/* If this program is compiled, chowned to root, and the set-user-id bit is set,
   it will take a single string as input and execute the string as a command
   with root privs. :-O */

#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv) {

    if ( argc == 1 || argc > 2 ) {
        printf("ERROR: %s requires exactly 1 argument\n", argv[0]);
        return 1;
    }

    if ( setuid(0) != 0 ) {
        printf("ERROR: failed to set effective user id to 0 (root)\n");
        return 1;
    }

    if ( system(argv[1]) != 0 ) {
        printf("ERROR: failed to execute command '%s'\n", argv[1]);
        return 1;
    }

    return 0;
}

Save this text into a file called my_sudo.c.

Now if you compile the program, chown the resulting binary to root, and add the user SUID bit, you will have a program that takes a single string argument, elevates the user’s privileges to root, and executes the string as a command.

Now just to state the obvious, this is a dangerous program to have around. You probably do NOT want to actually try this just in case you make a mistake and this ugly program ends up sitting around on your host system somewhere. Just read the tutorial and take my word for it.

I’ll install this program within the container being careful to compile it within the container (even though in this case the container is just a bare directory so we will still need to remove it later).

$ mv my_sudo.c /tmp/

$ sudo singularity shell --writable backdoor.simg/

Singularity backdoor.simg:~> apt-get install -y gcc

Singularity backdoor.simg:~> mv /tmp/my_sudo.c /bin/

Singularity backdoor.simg:~> cc /bin/my_sudo.c -o /bin/my_sudo

Singularity backdoor.simg:~> chmod u+s /bin/my_sudo

Singularity backdoor.simg:~> ls -l /bin/my_sudo
-rwsr-xr-x 1 root root 8424 Jul 25 02:28 /bin/my_sudo

Singularity backdoor.simg:~> exit

The dirty little program is now set up and ready to go inside of the container at /bin/my_sudo. I should be able to use it to elevate my privs.

$ singularity exec backdoor.simg my_sudo whoami
ERROR: failed to set effective user id to 0 (root)

Nope! Even trying explicitly to create an exploit in this way does not work.

Note that in this case we built a sandbox, meaning backdoor.simg is just a bare directory. my_sudo exists as an SUID program in that directory. If we were to call it outside of the context of the Singularity runtime, it would quite happily elevate our privileges.

$ backdoor.simg/bin/my_sudo whoami
root

So in this case, the Singularity runtime actually provides an additional layer of security over and above what is provided on the host system.

Before proceeding any further, stop and delete this program from your host system if you actually followed this portion of the tutorial.

$ sudo rm backdoor.simg/bin/my_sudo*

These examples should serve to illustrate the guiding security principle of the Singularity runtime: Singularity allows untrusted users to run untrusted containers safely.

How does Singularity block privilege escalation inside of the container? When Singularity runs, it mounts a directory or image containing a file system to a specific location within the root file system and then makes it appear as though this mount is the actual root file system. Critically, when the container file system is mounted, the MS_NOSUID flag is passed to the mount program. This prevents set-user-ID (SUID) and set-group-ID (SGID) bits or file capabilities from being honored when executing programs from the container file system.

At this point, even if you had never heard of SUID programs before, you should have an idea of how they work. Setting the SUID bit allows anyone to run a program as though they were the user who owns the program. That’s how we wrote our own my_sudo program above, and it turns out that’s how programs like sudo and su work as well. But you may be surprised to learn that other programs can have the SUID bit set. For instance, on many systems ping must have the SUID bit set because it needs to open a raw socket, which is a privileged operation. This means that ping will not work in a Singularity container when the container is run without privileges!
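If you want to see what an SUID bit looks like without writing any C, here is a quick sketch: set mode 4755 on a scratch file and note the s that replaces the owner's x in the listing. (The file is just a placeholder; it is not a real SUID binary.)

```shell
# The leading 4 in mode 4755 is the SUID bit; ls shows it as 's'
# in the owner-execute position.
touch /tmp/suid_demo
chmod 4755 /tmp/suid_demo
ls -l /tmp/suid_demo        # -rwsr-xr-x ...
stat -c '%a' /tmp/suid_demo # 4755
rm /tmp/suid_demo
```

This is also a handy way to double-check that you really did remove the SUID bit from anything you experimented with.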

On some newer systems, ping is controlled via Linux file capabilities instead. Capabilities allow developers to give finer grained control over the privileged processes that are carried out by a program. In the case of ping this allows the program to create raw sockets without granting it additional privileges. The MS_NOSUID flag used by Singularity to mount containers blocks this type of escalation pathway as well.
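You can check whether your own system's ping uses file capabilities with the getcap tool from libcap, if it happens to be installed; on capability-based systems it typically reports something like cap_net_raw. A hedged sketch:

```shell
# Print any file capabilities attached to the ping binary.
# Requires the getcap utility (often packaged as libcap or libcap2-bin).
if command -v getcap >/dev/null 2>&1 && command -v ping >/dev/null 2>&1; then
    getcap "$(command -v ping)"
else
    echo "getcap or ping not available on this system"
fi
```

If getcap prints nothing, ping on your system likely relies on the SUID bit instead.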

You may wonder how Singularity allows you to be the same user inside the container as outside of the container. Let’s continue to examine the backdoor.simg image we created above. Since it’s a sandbox (a bare directory), we can just look inside of it. Let’s have a look at the /etc/passwd and /etc/group files, which determine what users and groups exist on the system, respectively.

$ cat backdoor.simg/etc/passwd
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin

$ cat backdoor.simg/etc/group

These are all just generic users and groups that appear in any Ubuntu installation and are necessary to make things work properly. Now let’s shell into the container and look again.

$ singularity shell backdoor.simg

Singularity backdoor.simg:~> tail -n 3 /etc/passwd

Singularity backdoor.simg:~> tail -n 3 /etc/group


Singularity dynamically appends user and group information to these files at runtime. That is how we are able to be the same user with the same groups seamlessly inside and outside of the container.
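A rough sketch of the mechanism (the scratch file name is hypothetical; this is not Singularity's actual code path): take a copy of a container's passwd file and append a passwd-style line for the calling user, built from id and $HOME.

```shell
# Build a passwd-style entry for the current user and append it
# to a scratch copy of a passwd file, as the runtime does for the
# container's /etc/passwd.
cp /etc/passwd /tmp/container_passwd
printf '%s:x:%s:%s::%s:/bin/sh\n' \
    "$(id -un)" "$(id -u)" "$(id -g)" "$HOME" >> /tmp/container_passwd
tail -n 1 /tmp/container_passwd
rm /tmp/container_passwd
```

The appended line is what makes tools like whoami resolve your host identity inside the container.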

These are the basic security principles of code running within a Singularity container. But what about the security of the Singularity program itself running on the host system?

Several privileged operations need to be carried out by the Singularity runtime. For instance, Singularity must mount the file system contained in an image or directory on behalf of the user. Other container runtimes handle this privilege escalation by means of a root-owned daemon process that carries out operations on behalf of the user. This strategy has advantages and disadvantages. In particular, several aspects of this design render it unattractive in an HPC environment. Most notably, in the context of a batch scheduler such as Slurm or PBS, the root-owned daemon model allows containerized processes to escape control of the resource manager and go rogue on compute nodes.

Singularity uses a program with the SUID bit set to elevate privileges for operations such as container mounting. This method also has advantages and disadvantages. The actual Singularity program only runs for tens of milliseconds while it sets up new namespaces, mounts the container file system, and then uses the exec system call to run a payload within the container. exec replaces the calling program in memory with the program that is called. In this way, Singularity effectively execs itself out of existence, and once the containerized program begins to run, no traces of the Singularity program remain.
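The exec behavior described above is easy to see with plain shells: a process that calls exec keeps its PID but is replaced in memory, so the two lines below print the same number.

```shell
# The outer shell prints its PID, then execs a new shell, which
# prints its PID; both lines show the same number because exec
# replaces the process image without creating a new process.
sh -c 'echo "before exec: $$"; exec sh -c "echo \"after exec: \$\$\""'
```

This is why no trace of the launcher remains once the payload is running.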

SUID bit programs must be designed with great care. Singularity minimizes the amount of code that is executed with elevated privileges by adopting a model allowing privs to be increased and dropped as needed. Singularity’s execution has been made as transparent as possible so that the code can be audited with ease. If you execute a Singularity command with the --debug option, you will see exactly which operations are carried out with elevated privs.

pipes and redirection

As we demonstrated earlier, pipes and redirects work as expected between a container and the host system. If you need to pipe the output of one command in your container to another command in your container, things may be more complicated.

$ singularity exec lolcow.img fortune | singularity exec lolcow.img cowsay
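Conceptually, the pipe itself lives on the host: each side of the | is a separate process launch. Here two plain sh invocations stand in for the two singularity exec calls above, since the data flows the same way.

```shell
# Output of the first process travels through the host's pipe
# into the second process.
sh -c 'echo moo' | sh -c 'tr a-z A-Z'
# MOO
```

So the fortune output above actually leaves the first container, passes through the host, and enters the second container.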

X11 and OpenGL

You can use Singularity containers to display graphics through common protocols. To do this, you need to install the proper graphics stack within the Singularity container. For instance, if you want to display X11 graphics, you must install xorg within your container. In an Ubuntu container the command would look like this:

$ apt-get install xorg

GPU computing

In Singularity v2.3+ the experimental --nv option will look for NVIDIA libraries on the host system and automatically bind mount them to the container so that GPUs work seamlessly.

Using the network on the host system

Network ports on the host system are accessible from within the container and work seamlessly. For example, you could install ipython within a container, start a jupyter notebook instance, and then connect to that instance using a browser running outside of the container on the host system or from another host.

a note on SUID programs and daemons

Some programs need root privileges to run. These often include services or daemons that start via the init.d or systemd systems and run in the background. For instance, sshd, the ssh daemon that listens on port 22 and allows other users to connect to your computer, requires root privileges. You will not be able to run it in a container unless you start the container as root.

Other programs may set the SUID bit or capabilities to run as root or with elevated privileges without your knowledge. For instance, the well-known ping program actually runs with elevated privileges (and needs to since it sets up a raw network socket). This program will not run in a container unless you are root in the container.

Practical examples

Train a TensorFlow model on the classic MNIST data set

In this example we will train a TensorFlow model to categorize handwritten digits using the classic MNIST data set.

First, let’s download the models and scripts.

$ git clone

The actual script that we want to run is here:

$ ll models/tutorials/image/mnist/
-rw-rw-r-- 1 vagrant 14K Jul 25 16:16 models/tutorials/image/mnist/

Now we will just use the exec command with a TensorFlow container from DockerHub to run the script and start TensorFlow.

$ singularity exec docker://tensorflow/tensorflow:latest \
    python models/tutorials/image/mnist/

If your host system happens to have a GPU that you want to use, the following command will suffice:

$ singularity exec --nv docker://tensorflow/tensorflow:latest-gpu \
    python models/tutorials/image/mnist/

The --nv option tells Singularity to search for NVIDIA driver-related libraries and binaries and automatically mount them into the container so that the GPU hardware is exposed and usable inside of the container. Selecting the latest-gpu tag of TensorFlow ensures you get a version of TensorFlow compiled with GPU support.

If you really want to build TensorFlow from scratch, here is an example definition file that you can use to get started. This file will just compile tensorflow for CPU usage (with some optimized instruction sets). If you want to build TensorFlow to run on a GPU, check the instructions here:

BootStrap: docker
From: ubuntu:xenial

%post
    # install some basic deps
    apt-get -y update
    apt-get -y install curl git wget bzip2 \
        pkg-config zip g++ zlib1g-dev unzip python \
        python-numpy python-dev python-pip python-wheel

    # install bazel
    cd /tmp
    chmod +x
    ./ --prefix=/opt/bazel-0.15.2
    export PATH=/opt/bazel-0.15.2/bin:$PATH

    # download tensorflow r1.9 source
    git clone
    cd tensorflow
    git checkout r1.9

    # setup build env (use defaults)
    export PYTHON_BIN_PATH="/usr/bin/python"
    export PYTHON_LIB_PATH="/usr/local/lib/python2.7/site-packages"
    export TF_NEED_JEMALLOC="n"
    export TF_NEED_GCP="n"
    export TF_NEED_HDFS="n"
    export TF_NEED_OPENCL="n"
    export TF_NEED_CUDA="n"
    export CC_OPT_FLAGS="-march=native"
    export TF_NEED_MKL="n"
    export TF_ENABLE_XLA="n"
    export TF_NEED_MPI=0
    export TF_NEED_VERBS="n"

    # build and install tensorflow
    bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both \
        --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip install /tmp/tensorflow_pkg/tensorflow-*

%environment
    export PATH=/opt/bazel-0.15.2/bin:$PATH

%runscript
    python "$@"

Disclaimer: This definition file built TensorFlow as of July 2018, but there is no guarantee that it will work in the future since TensorFlow develops quickly.

Run a jupyter notebook server

In this example we will create a container with Anaconda and Jupyter.

The definition file for this container is pretty straightforward. In this case, I’ve named it anaconda.def:

BootStrap: docker
From: ubuntu:bionic

%post
    apt-get update
    apt-get install -y wget vim

    cd /tmp
    sh -b -p /opt/anaconda3

%environment
    export PATH=/opt/anaconda3/bin:$PATH

We install wget and use it to download the Anaconda installer from the site where ContinuumIO hosts it. Then we run the installer with the -b (batch) flag so that the installer runs non-interactively.

It’s also important to note that we are using the -p (prefix) option to install Anaconda into the /opt directory. By default, Anaconda installs into the current user’s home directory. Because the build runs as root, this means that Anaconda would otherwise be installed into /root. If we ran the container as a normal user, we wouldn’t be able to access it for obvious reasons. But even running the container as root wouldn’t help, because Singularity bind mounts the host system’s /root (root’s home) over the /root that exists inside the container.

As usual, we can build the container with the following command.

$ sudo singularity build anaconda.simg anaconda.def

Once the container is built, we can run it like so:

$ singularity exec --bind /tmp:/run/user anaconda.simg jupyter notebook

Note the --bind /tmp:/run/user option-argument pair. This is necessary because Jupyter wants to write temporary session data to the /run/user directory, but the container has been mounted read-only. With this --bind option, Jupyter writes that data to /tmp on the host system instead.

This should start the server and give you a URL with a security token to use for the connection.

If you are running Singularity directly on your local system, you can just log into your notebook using the URL provided on your screen. If you are running Singularity in a virtual machine, you may need to do some network configuration to make things work properly. If you are running your Jupyter notebook on a remote resource, you probably need to set up an ssh tunnel in a new terminal to access the notebook server. If you’re like me and you can never remember the syntax for ssh tunnels, here is a hint:

$ ssh -N -L <port>:localhost:<port> <username>@<host>