Containers for CyberShake

This page documents the steps involved in enabling the CyberShake codebase to run in a container environment.
 
== Selection of Containers ==

The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tools. Because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.
  
== Installing Singularity ==

Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.
  
=====Install Dependencies=====
<pre>sudo apt-get update && sudo apt-get install -y \
build-essential \
uuid-dev \
libgpgme-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
git \
cryptsetup-bin</pre>
  
=====Download Go=====
<pre>export VERSION=1.13.5 OS=linux ARCH=amd64 && \
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz</pre>

=====Set Up Go=====
<pre>echo 'export GOPATH=${HOME}/go' >> ~/.bashrc && \
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc && \
source ~/.bashrc</pre>
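A quick check that Go is installed and on the PATH:
<pre>$ go version</pre>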
  
=====Install Singularity=====
Check the newest release version [https://github.com/hpcng/singularity/tags]
<pre>export VERSION=3.6.1 && # adjust this as necessary \
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz && \
cd singularity</pre>

Alternatively, clone the source from Git and check out a release tag:
<pre>git clone https://github.com/sylabs/singularity.git && \
cd singularity && \
git checkout v3.5.2</pre>

=====Check if Singularity Works=====
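The steps above only unpack the source; a minimal sketch of the remaining compile-and-install commands, following the Sylabs 3.x installation guide:
<pre>./mconfig && \
make -C ./builddir && \
sudo make -C ./builddir install</pre>
Then confirm the binary runs:
<pre>$ singularity --version</pre>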
  
== Setting up a serial container (on your computer) ==
======Get Image======
singularity pull <source>*
<pre>$ singularity build myPythonContainer.sif library://default/ubuntu:latest</pre>
*<source> options include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).
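The example above uses build; the equivalent pull form (the output filename is arbitrary) would be:
<pre>$ singularity pull myPythonContainer.sif library://default/ubuntu:latest</pre>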
  
======Execute a Command from Outside the Container======
Usage: singularity exec imageName command
<pre>$ singularity exec myPythonContainer.sif cat /etc/lsb-release</pre>

<pre>$ singularity exec myPythonContainer.sif python3 helloWorld.py</pre>

Find the size of the container:
<pre>$ singularity cache list</pre>

*Note: Singularity cannot run on the login node
  
 
== Basic Singularity Commands ==
'''Pull''' - pulls a container image from a remote source.
<pre>$ sudo singularity pull <remote source></pre>
<remote source> options:

1. Singularity Container Services [https://cloud.sylabs.io/library]
<pre>$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION</pre>
*Note: the path only needs to match the pull card; see the remote website for an example.
2. Singularity Hub [https://singularity-hub.org/]
<pre>$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION</pre>
*Note: the path only needs to match the pull card; see the remote website for an example.
3. Docker Hub [https://hub.docker.com/]
<pre>$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION</pre>
*Note 1: Docker images have layers that need to be merged into one Singularity image; for that to happen you MUST use build.
*Note 2: the path only needs to match the pull card; see the remote website for an example.
  
'''Exec''' - executes an external command inside the container
<pre>$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND</pre>
  
'''Shell''' - opens a shell inside an existing container
<pre>$ singularity shell IMAGE_NAME.sif</pre>
*Note: Your home directory is mounted by default
  
'''Run''' - runs an image, using the runscript parameters that were placed into the container when the image was built from its recipe
<pre>$ singularity run IMAGE_NAME.sif</pre>

'''Build''' (see the Building Containers section for more details)
<pre>$ singularity build IMAGE_NAME.sif <source></pre>
<source> options include:
#Another image, either Docker or Singularity
#A Singularity definition file (used to be known as a recipe file), usually denoted name.def

Note: You can shell into a docker:// URI directly and explore different containers without pulling or building:
<pre>$ singularity shell docker://ubuntu</pre>

== Using Prebuilt Containers or Building Containers ==

==== Prebuilt Containers ====
Basic Ubuntu
Basic Ubuntu Container with Python

===== Frontera =====
Basic Ubuntu with Mvapich

===== Summit =====
Basic Ubuntu with ???

==== Building Containers ====
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work. --fakeroot did not work for me on Frontera because it could not find me as a user:
<pre>
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def
FATAL:  could not use fakeroot: no user entry found for llocsin
</pre>

The --remote flag is just a step away from building the container yourself and uploading it to a remote, so just do that.

To build from scratch:
# Install Singularity
# Pull a basic image, using the --sandbox flag
# Install the desired dependencies in the sandbox:
## Build dependencies
## The correct MPI library, with environment variables set
## Any files you want to run
# Create a definition file, transferring the setup commands you tested in the sandbox into a definition file (a minimal sketch follows this list)
# Upload the definition file to the Singularity Container Library or Singularity Hub
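A minimal definition file sketch for step 4; the base image, package, and file names here are illustrative placeholders, not the actual CyberShake setup:
<pre># hello.def - illustrative example
Bootstrap: docker
From: ubuntu:18.04

%files
    helloWorld.py /opt/helloWorld.py

%post
    # the setup commands tested in the sandbox go here
    apt-get update && apt-get install -y python3

%runscript
    # executed by "singularity run"
    exec python3 /opt/helloWorld.py</pre>
Build it locally with <code>sudo singularity build hello.sif hello.def</code>.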
 
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see Basic Singularity Commands > Pull)
  
====== MPI ======
Singularity supports two methods for MPI:

1. Bind approach - cannot be used here; Frontera does not support bind.

2. Hybrid/Host approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]

For MPI containers on Frontera, you therefore have to use the hybrid approach.
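In the hybrid model, the host's MPI launcher starts the containerized ranks and a compatible MPI library inside the container handles communication. Schematically (container and program names here are placeholders):
<pre># the host launcher (mpirun generally; ibrun on Frontera) starts one container process per rank
$ mpirun -n 4 singularity exec my-mpi-container.sif ./my_mpi_program</pre>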
  
 
== Containers on Frontera ==

=== Serial Containers ===
1. Prepare
#Make helloWorld.py: <code>$ echo "print(\"Hello World\")" > helloWorld.py</code>
#Load the module (only when on the supercomputer): <code>$ module load tacc-singularity</code> *Note: run module save if you plan to use Singularity often

2. Get a Singularity Image onto Frontera
(*Note: If you want to run a particular program, the container must have its dependencies installed)
Options:
#Copy an image from your local machine to Frontera with scp
#Pull from a compute node:
<pre>idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1</pre>
*Note: This command also works in an sbatch file.

3-1. Run on a Compute Node

a. idev session
<pre>idev
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py</pre>
b. sbatch (recommended)
<pre>
#!/bin/bash

#SBATCH -p development
#SBATCH -t 00:05:00
#SBATCH -n 1
#SBATCH -N 1
#SBATCH -J test-singularity-python
#SBATCH -o test-singularity-python.o%j
#SBATCH -e test-singularity-python.e%j

# Run the actual program
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py
</pre>
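Save the script under any name, e.g. test-singularity.sbatch (a hypothetical filename), and submit it with:
<pre>$ sbatch test-singularity.sbatch</pre>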
  
3-2. Execute from Your Local Computer (if Singularity is installed)
<pre>$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py</pre>
  
 
=== MPI Containers ===

======Make MPI Program======

Make Example File: sum_sqrt.c
<pre>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv) {
    // Grab argument N from the command line
    if (argc < 2) {
        fprintf(stderr, "Usage: %s N\n", argv[0]);
        return 1;
    }
    int numN = atoi(argv[1]);
    printf("Argument N: %d \n", numN);

    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    printf("Processes: %d \n", world_size);

    // Get the rank of this process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Each rank sums an inclusive block of integers. For example,
     * N=1000 on 32 processes gives block = 1000/32 = 31:
     *   rank 0: 1..31, rank 1: 32..62, rank 2: 63..93, ...
     * Rank 0 also sweeps up the remainder left by the truncating
     * integer division. */
    int block = numN/world_size;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;

    if(world_rank==0){ // master process
        long long mySum=0, pSum=0, totalSum=0;

        printf("Main Process Start\n");

        // send the block size to the worker processes (tag 0)
        for(int myprocessor=1; myprocessor < world_size; myprocessor++){
            MPI_Send(&block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD);
        }

        // process my own block
        for(int i=1; i <= block; i++){
            mySum += (long long)i*i;
        }

        // process the truncated remainder block
        for(int left_over=block*world_size+1; left_over <= numN; left_over++){
            mySum += (long long)left_over*left_over;
        }
        totalSum += mySum;

        // receive the partial sums from the workers
        for(int myprocessor=1; myprocessor < world_size; myprocessor++){
            MPI_Recv(&pSum, 1, MPI_LONG_LONG, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            totalSum += pSum;
            printf("pSum: %lld\n", pSum);
        }

        // print the final total
        printf("Sum of Squares for %d is %lld\n", numN, totalSum);
        printf("Main Process End\n");
    }else{ // worker process
        printf("Start Process: %d\n", world_rank);
        long long mySum=0;

        // receive the block size from the master
        MPI_Recv(&block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        // calculate my sum of squares over the inclusive range
        for(int i=my_lo; i <= my_hi; i++){
            mySum += (long long)i*i;
        }

        // send my partial sum to the master (tag 0)
        MPI_Send(&mySum, 1, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD);
        printf("End Process: %d\n", world_rank);
    }

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}
</pre>
 
======Compile Program======
<pre>$ mpicc -o sum_sqrt sum_sqrt.c</pre>

======Build or Pull a Singularity Image======
The Singularity image file needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]. This container has mvapich preinstalled:
<pre>$ idev -N 1
$ singularity pull shub://mkandes/ubuntu-mvapich</pre>
*Note: This works on Frontera. MPI library: mvapich

======Execute your command======
<pre>$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000</pre>
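As a quick correctness check, the program's total can be compared against the closed form for the sum of squares, N(N+1)(2N+1)/6; for N = 100000 this gives 333338333350000, well beyond 32-bit integer range, which is why the listing accumulates into long long.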

== Resources ==
#Singularity Guide [https://sylabs.io/docs/]
#Singularity Repository [https://github.com/hpcng/singularity]
#Singularity Container Library [https://cloud.sylabs.io/library]
#Singularity Hub [https://singularity-hub.org/]
#Docker Hub [https://hub.docker.com/]

TACC - Frontera
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (more geared toward people who are familiar with Docker containers; these containers do not seem to support singularity exec)

ORNL
#Container Builder Tool [https://github.com/olcf/container-builder]
