Containers for CyberShake
This page documents the steps involved in enabling the CyberShake codebase to run in a container environment.
Selection of Containers
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, by contrast, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at run time.
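As an illustration of the hybrid MPI model this support enables, the host's MPI launcher starts the ranks and each rank executes inside the container (the container name and binary path below are placeholders, not the actual CyberShake setup):

mpirun -np 4 singularity exec cybershake.sif /opt/cybershake/bin/mpi_app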
Installing Singularity
Install Singularity Dependencies
sudo apt-get update && sudo apt-get install -y \
build-essential \
uuid-dev \
libgpgme-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
git \
cryptsetup-bin
Download Go
export VERSION=1.13.5 OS=linux ARCH=amd64 && \
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz
Set Up Go
echo 'export GOPATH=${HOME}/go' >> ~/.bashrc && \
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc && \
source ~/.bashrc
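A quick check that the Go toolchain is now on the PATH before building Singularity:

go version   # should report go1.13.5, the version downloaded above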
Install Singularity
# adjust the version as necessary
export VERSION=3.5.2 && \
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz && \
cd singularity
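The extraction above stops short of compiling; following the upstream Singularity 3.5 installation guide, the build and install steps that would normally come next are:

./mconfig && \
make -C ./builddir && \
sudo make -C ./builddir install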
Check if Singularity Works
git clone https://github.com/sylabs/singularity.git && \
cd singularity && \
git checkout v3.5.2
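The clone and checkout above are the upstream documentation's alternative way of obtaining the source; in either case, a simple sanity check that the installed binary works is:

singularity version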
Setting up a serial container
(This section explains the steps involved in building and running serial code in a container.)
Get Image
singularity pull <source>
singularity pull library://default/ubuntu:latest
- Sources include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).
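For example, the myPythonContainer.sif image used in the exec examples below could be created from a Docker Hub Python image (the image tag here is illustrative, not the actual CyberShake base image):

sudo singularity build myPythonContainer.sif docker://python:3.8-slim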
Execute a Command from Outside the Container
General form: singularity exec <image_name> <command>
singularity exec myPythonContainer.sif cat /etc/lsb-release
singularity exec myPythonContainer.sif python3 helloWorld.py
Find the size of containers in the local cache:
singularity cache list
Basic Singularity Commands
Pull - pulls a container image from a remote source: $ sudo singularity pull <remote source>
Remote sources:
- Singularity Container Services: $ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION
- Singularity Hub: $ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION (Note: the path only needs to match the pull card; see the remote website for an example.)
- Docker Hub: $ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION (Note: Docker images have layers that must be merged into a single Singularity image; for that to happen you MUST use build.)
Exec - executes an external command inside the container: $ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND
Shell - opens a shell inside an existing container: $ singularity shell IMAGE_NAME.sif
- Note: Your home directory is mounted by default
Run - runs an image using the run script that was placed into the container when the image was built from the recipe: $ singularity run IMAGE_NAME.sif
Build (BIG TO DO: very important) - builds an image: $ singularity build IMAGE_NAME.sif <source>
- Sources include another image (either Docker or Singularity) or a Singularity definition file (formerly known as a recipe file), usually named NAME.def.
Note: You can shell into a Docker URI to explore different containers without pulling or building: $ singularity shell docker://ubuntu
Creating Definition Files:
- Workflow: set up complex workflows with a definition (recipe) file.
- Alternatively, prototype the final container in a sandbox directory: $ sudo singularity build --sandbox ubuntu_s docker://ubuntu
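As a sketch of what a definition file for a simple serial container might look like (the base image, package, and helloWorld.py script are illustrative placeholders, not the actual CyberShake recipe):

Bootstrap: docker
From: ubuntu:18.04

%files
    helloWorld.py /opt/helloWorld.py

%post
    # install the runtime needed by the serial program
    apt-get update && apt-get install -y python3

%runscript
    exec python3 /opt/helloWorld.py "$@"

Such a file (say, myPythonContainer.def) would then be built with: sudo singularity build myPythonContainer.sif myPythonContainer.def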
Containers on Frontera
Serial Containers
Explain how you got serial containers running on Frontera.
MPI Containers
Explain how you got MPI containers running on Frontera.