<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://strike.scec.org/scecwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Llocsin</id>
	<title>SCECpedia - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://strike.scec.org/scecwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Llocsin"/>
	<link rel="alternate" type="text/html" href="https://strike.scec.org/scecpedia/Special:Contributions/Llocsin"/>
	<updated>2026-04-28T07:10:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.34.2</generator>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=User:Llocsin&amp;diff=24924</id>
		<title>User:Llocsin</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=User:Llocsin&amp;diff=24924"/>
		<updated>2020-08-14T01:22:11Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* SVN to Git Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==SVN to Git Plan==&lt;br /&gt;
Convert SVN repository to a Git Repository.&lt;br /&gt;
====Steps:====&lt;br /&gt;
#Download svn2git tool: https://github.com/nirvdrum/svn2git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone &amp;lt;remote&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
#Pull desired SVN repository&lt;br /&gt;
##&amp;lt;pre&amp;gt;svn checkout &amp;lt;remote&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
#Create Text Document of Authors to link to Git Accounts&lt;br /&gt;
##create authors.txt&lt;br /&gt;
##add in all authors with the following format:&lt;br /&gt;
###jcoglan = James Coglan &amp;lt;jcoglan@never-you-mind.com&amp;gt;&lt;br /&gt;
###stnick = Santa Claus &amp;lt;nicholas@lapland.com&amp;gt; (Linking every SVN author to a Git account is recommended but not required; however, every author must be given an email address, even a made-up one.)&lt;br /&gt;
#Convert the repositories with the git commands:&lt;br /&gt;
##$ svn2git http://source.usc.edu/svn/mesh_partitioner/ --trunk trunk --tags tags --nobranches --authors ~/authors.txt (Note: the argument after --trunk is the name of the trunk directory in the SVN repository, and the argument after --tags is the name of its tags directory)&lt;br /&gt;
#Push to the desired git remote. (In this example we use github)&lt;br /&gt;
##Add Remote&lt;br /&gt;
###git remote add &amp;lt;remote-name&amp;gt; &amp;lt;remote-url&amp;gt;, e.g. git remote add origin git@github.com:SCECcode/CyberShake.git&lt;br /&gt;
##Commit&lt;br /&gt;
###git add .&lt;br /&gt;
###git commit -m &amp;quot;Initial Commit of Converted SVN to Git Repository Code&amp;quot;&lt;br /&gt;
##Use tags command&lt;br /&gt;
###git push  --tags&lt;br /&gt;
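The numbered steps above can be sketched as one shell session. The repository URLs and author names are the examples from the steps; installing svn2git as a Ruby gem is one common route, and the branch name after conversion is assumed to be master, so adjust as needed:

```shell
# 1. Install the svn2git tool (a Ruby gem)
sudo gem install svn2git

# 2-3. Create authors.txt mapping SVN usernames to Git identities
cat > ~/authors.txt <<'EOF'
jcoglan = James Coglan <jcoglan@never-you-mind.com>
stnick = Santa Claus <nicholas@lapland.com>
EOF

# 4. Convert: trunk and tags are directory names inside the SVN repository
mkdir mesh_partitioner && cd mesh_partitioner
svn2git http://source.usc.edu/svn/mesh_partitioner/ \
    --trunk trunk --tags tags --nobranches --authors ~/authors.txt

# 5. Push the converted history and its tags to GitHub
git remote add origin git@github.com:SCECcode/CyberShake.git
git push -u origin master
git push --tags
```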
&lt;br /&gt;
Rebase History&lt;br /&gt;
If the hosted Subversion repository's history contains commits that are not yet in the local Git repository, the dcommit operation will be rejected until those commits are fetched with this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;git svn rebase&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Resources:===&lt;br /&gt;
#svn2git tool (from the guide): https://github.com/nirvdrum/svn2git&lt;br /&gt;
#Official Git Documentation: &lt;br /&gt;
#Guides: https://viastudio.com/migrate-svn-git/ and https://github.github.com/training-kit/downloads/subversion-migration/&lt;br /&gt;
#Github Importer Tool (alternate): https://docs.github.com/en/github/importing-your-projects-to-github/about-github-importer&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
To Do:&lt;br /&gt;
&lt;br /&gt;
Basic Singularity Commands&lt;br /&gt;
Pull - pulls a container image from a remote source&lt;br /&gt;
$ sudo singularity pull &amp;lt;remote source&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;&lt;br /&gt;
Singularity Container Services&lt;br /&gt;
    $ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&lt;br /&gt;
Singularity Hub&lt;br /&gt;
$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION (Note: the path only needs to match the pull command shown on the container's page; see the remote site for an example.)&lt;br /&gt;
Docker Hub&lt;br /&gt;
$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&lt;br /&gt;
(Note: Docker images have layers, which must be merged into a single Singularity image. For that to happen you MUST use: build)&lt;br /&gt;
&lt;br /&gt;
Exec - executes an EXTERNAL COMMAND&lt;br /&gt;
$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&lt;br /&gt;
&lt;br /&gt;
Shell - shells into an existing container&lt;br /&gt;
singularity shell IMAGE_NAME.sif&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
Run - runs an image, using the runscript parameters that were placed into the container when the image was built from the recipe&lt;br /&gt;
$ singularity run IMAGE_NAME.sif&lt;br /&gt;
&lt;br /&gt;
Build (BIG TO DO: Very important)&lt;br /&gt;
$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&lt;br /&gt;
Sources include&lt;br /&gt;
Another Image either docker or singularity&lt;br /&gt;
Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker URI to explore different containers without pulling or building&lt;br /&gt;
$ singularity shell docker://ubuntu&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files:&lt;br /&gt;
Workflow:&lt;br /&gt;
Set up complex workflows with Recipe File:&lt;br /&gt;
Alternatively-&lt;br /&gt;
Prototype the final container in a sandbox directory: sudo singularity build --sandbox ubuntu_s docker://ubuntu&lt;br /&gt;
&lt;br /&gt;
WRITE RECIPE:&lt;br /&gt;
File Name: Singularity.def&lt;br /&gt;
&lt;br /&gt;
Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
%post&lt;br /&gt;
    apt-get -y update&lt;br /&gt;
    apt-get -y install python3&lt;br /&gt;
%files&lt;br /&gt;
    helloWorld.py /&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 /helloWorld.py&lt;br /&gt;
&lt;br /&gt;
$ sudo singularity build CONTAINER_NAME.sif Singularity.def&lt;br /&gt;
$ sudo singularity run CONTAINER_NAME.sif&lt;br /&gt;
Result:&lt;br /&gt;
Hello World&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
Singularity Guide:&lt;br /&gt;
Singularity Help:&lt;br /&gt;
&lt;br /&gt;
Pre-Made Containers:&lt;br /&gt;
&lt;br /&gt;
TACC Resources:&lt;br /&gt;
&lt;br /&gt;
MVAPICH Dependencies&lt;br /&gt;
MPICH Dependencies:&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=User:Llocsin&amp;diff=24923</id>
		<title>User:Llocsin</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=User:Llocsin&amp;diff=24923"/>
		<updated>2020-08-14T00:33:40Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* SVN to Git Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==SVN to Git Plan==&lt;br /&gt;
Convert SVN repository to a Git Repository.&lt;br /&gt;
====Steps:====&lt;br /&gt;
#Download svn2git tool: https://github.com/nirvdrum/svn2git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone &amp;lt;remote&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
#Pull desired SVN repository&lt;br /&gt;
##&amp;lt;pre&amp;gt;svn checkout &amp;lt;remote&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
#Create Text Document of Authors to link to Git Accounts&lt;br /&gt;
##create authors.txt&lt;br /&gt;
##add in all authors with the following format:&lt;br /&gt;
###jcoglan = James Coglan &amp;lt;jcoglan@never-you-mind.com&amp;gt;&lt;br /&gt;
###stnick = Santa Claus &amp;lt;nicholas@lapland.com&amp;gt; (Linking every SVN author to a Git account is recommended but not required; however, every author must be given an email address, even a made-up one.)&lt;br /&gt;
#Convert the repositories with the git commands:&lt;br /&gt;
##$ svn2git http://source.usc.edu/svn/mesh_partitioner/ --trunk trunk --tags tags --nobranches --authors ~/authors.txt (Note: the argument after --trunk is the name of the trunk directory in the SVN repository, and the argument after --tags is the name of its tags directory)&lt;br /&gt;
#Push to the desired git remote. (In this example we use github)&lt;br /&gt;
##Add Remote&lt;br /&gt;
###git remote add &amp;lt;remote-name&amp;gt; &amp;lt;remote-url&amp;gt;, e.g. git remote add origin git@github.com:SCECcode/CyberShake.git&lt;br /&gt;
##Commit&lt;br /&gt;
###git add .&lt;br /&gt;
###git commit -m &amp;quot;Initial Commit of Converted SVN to Git Repository Code&amp;quot;&lt;br /&gt;
##Use tags command&lt;br /&gt;
###git push  --tags&lt;br /&gt;
&lt;br /&gt;
===Resources:===&lt;br /&gt;
#svn2git tool (from the guide): https://github.com/nirvdrum/svn2git&lt;br /&gt;
#Official Git Documentation: &lt;br /&gt;
#Guides: https://viastudio.com/migrate-svn-git/ and https://github.github.com/training-kit/downloads/subversion-migration/&lt;br /&gt;
#Github Importer Tool (alternate): https://docs.github.com/en/github/importing-your-projects-to-github/about-github-importer&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
To Do:&lt;br /&gt;
&lt;br /&gt;
Basic Singularity Commands&lt;br /&gt;
Pull - pulls a container image from a remote source&lt;br /&gt;
$ sudo singularity pull &amp;lt;remote source&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;&lt;br /&gt;
Singularity Container Services&lt;br /&gt;
    $ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&lt;br /&gt;
Singularity Hub&lt;br /&gt;
$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION (Note: the path only needs to match the pull command shown on the container's page; see the remote site for an example.)&lt;br /&gt;
Docker Hub&lt;br /&gt;
$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&lt;br /&gt;
(Note: Docker images have layers, which must be merged into a single Singularity image. For that to happen you MUST use: build)&lt;br /&gt;
&lt;br /&gt;
Exec - executes an EXTERNAL COMMAND&lt;br /&gt;
$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&lt;br /&gt;
&lt;br /&gt;
Shell - shells into an existing container&lt;br /&gt;
singularity shell IMAGE_NAME.sif&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
Run - runs an image, using the runscript parameters that were placed into the container when the image was built from the recipe&lt;br /&gt;
$ singularity run IMAGE_NAME.sif&lt;br /&gt;
&lt;br /&gt;
Build (BIG TO DO: Very important)&lt;br /&gt;
$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&lt;br /&gt;
Sources include&lt;br /&gt;
Another Image either docker or singularity&lt;br /&gt;
Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker URI to explore different containers without pulling or building&lt;br /&gt;
$ singularity shell docker://ubuntu&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files:&lt;br /&gt;
Workflow:&lt;br /&gt;
Set up complex workflows with Recipe File:&lt;br /&gt;
Alternatively-&lt;br /&gt;
Prototype the final container in a sandbox directory: sudo singularity build --sandbox ubuntu_s docker://ubuntu&lt;br /&gt;
&lt;br /&gt;
WRITE RECIPE:&lt;br /&gt;
File Name: Singularity.def&lt;br /&gt;
&lt;br /&gt;
Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
%post&lt;br /&gt;
    apt-get -y update&lt;br /&gt;
    apt-get -y install python3&lt;br /&gt;
%files&lt;br /&gt;
    helloWorld.py /&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 /helloWorld.py&lt;br /&gt;
&lt;br /&gt;
$ sudo singularity build CONTAINER_NAME.sif Singularity.def&lt;br /&gt;
$ sudo singularity run CONTAINER_NAME.sif&lt;br /&gt;
Result:&lt;br /&gt;
Hello World&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
Singularity Guide:&lt;br /&gt;
Singularity Help:&lt;br /&gt;
&lt;br /&gt;
Pre-Made Containers:&lt;br /&gt;
&lt;br /&gt;
TACC Resources:&lt;br /&gt;
&lt;br /&gt;
MVAPICH Dependencies&lt;br /&gt;
MPICH Dependencies:&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24915</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24915"/>
		<updated>2020-08-08T02:09:17Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Install Singularity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC containers available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
Check the Newest Release Version [https://github.com/hpcng/singularity/tags]&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.6.1 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Clone a Specific Version from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (or singularity build, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt;s include the Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of cached container images:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images have layers, which must be merged into a single Singularity image. For that to happen you MUST use: build &lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, using the runscript parameters that were placed into the container when the image was built from the recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt;s include:&lt;br /&gt;
#Another Image either docker or singularity&lt;br /&gt;
#Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker URI to explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
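As a quick illustration of the commands above, a typical session might look like this (the ubuntu:18.04 image is just an example):

```shell
# Pull an Ubuntu image from Docker Hub (build merges the Docker layers into one .sif)
sudo singularity build ubuntu.sif docker://ubuntu:18.04

# Run a single command inside the container
singularity exec ubuntu.sif cat /etc/os-release

# Open an interactive shell in the container (your home directory is mounted)
singularity shell ubuntu.sif

# Execute the container's runscript, if it defines one
singularity run ubuntu.sif
```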
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
Basic Ubuntu&lt;br /&gt;
Basic Ubuntu Container with Python&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
Basic Ubuntu with Mvapich&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
Basic Ubuntu with ???&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera, because you do not have sudo and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed on Frontera because it could not find my user entry:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is only a step removed from building the image yourself and uploading it to a remote, so just do that instead.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File, transferring the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
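A minimal sketch of the build-from-scratch workflow above, run on a local machine with sudo (the image, definition-file, and library path names are hypothetical):

```shell
# 2. Pull a base image into a writable sandbox directory
sudo singularity build --sandbox ubuntu_s docker://ubuntu:18.04

# 3. Install dependencies interactively; --writable persists changes to the sandbox
sudo singularity shell --writable ubuntu_s
#    (inside the container: apt-get update, install the MPI library, copy in files...)

# 4. Transfer the tested setup commands into a definition file
#    (its %post and %files sections), then build the final image from it
sudo singularity build my_container.sif my_container.def

# 5. Sign and push the image to the Singularity Container Library
singularity sign my_container.sif
singularity push my_container.sif library://USER/default/my_container:latest
```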
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two MPI models: &lt;br /&gt;
1. Bind approach (cannot be used on Frontera)&lt;br /&gt;
2. Hybrid/Host approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers you have to use the hybrid approach, since Frontera does not support bind.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to write a particular program, you must have the dependencies installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#By copying an image from your local machine to Frontera with scp&lt;br /&gt;
#By pulling from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Run on a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /* Block decomposition example: nglobal=1000, world_size=32&lt;br /&gt;
     * block = nglobal/world_size = 31 (integer division truncates 31.25)&lt;br /&gt;
     * rank r handles my_lo = r*block+1 through my_hi = (r+1)*block:&lt;br /&gt;
     *   rank 0: 1..31, rank 1: 32..62, rank 2: 63..93, ...&lt;br /&gt;
     * rank 0 also handles the truncated remainder block*world_size+1..nglobal&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
        MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag 0: MPI_ANY_TAG is only valid on receive&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //inclusive upper bound so the top value is not skipped&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //tag 0: MPI_ANY_TAG is only valid on receive&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image NEEDS the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in the container below.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI Library: mvapich&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
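For batch runs, the same ibrun line can be placed in an sbatch script, mirroring the serial example earlier on this page (queue, node counts, and image name are taken from the examples above; adjust for your job):

```shell
#!/bin/bash

#SBATCH -p development
#SBATCH -t 00:05:00
#SBATCH -N 1
#SBATCH -n 4
#SBATCH -J test-singularity-mpi
#SBATCH -o test-singularity-mpi.o%j
#SBATCH -e test-singularity-mpi.e%j

module load tacc-singularity

# ibrun launches the host MPI, which drives the matching MPI library inside the container
ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000
```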
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers. Containers do not seem to support singularity exec)&lt;br /&gt;
&lt;br /&gt;
ORNL&lt;br /&gt;
#Container Builder Tool - [https://github.com/olcf/container-builder]&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24914</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24914"/>
		<updated>2020-08-08T02:08:58Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Install Singularity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC containers available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
Check the Release Version [https://github.com/hpcng/singularity/tags]&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.6.1 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Install from the Git Repository=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Check that Singularity Works=====&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
General form: singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity pull myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached images and their sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: On Frontera, Singularity cannot be run on the login nodes; use a compute node.&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the one shown on the container's pull card; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the one shown on the container's pull card; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for that to happen you MUST use build.&lt;br /&gt;
*Note 2: the path must match the one shown on the container's pull card; see the remote website for an example.&lt;br /&gt;
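For reference, the pull sources above all share the shape SCHEME://PATH:VERSION. The sketch below is a hypothetical helper (not part of Singularity) that just illustrates that anatomy:&lt;br /&gt;

```python
# Hypothetical helper, for illustration only: split a container URI
# like the ones above into (registry scheme, path, version tag).
def parse_container_uri(uri):
    scheme, rest = uri.split("://", 1)
    path, _, version = rest.partition(":")
    # No explicit tag falls back to "latest", matching the registries' default.
    return scheme, path, version or "latest"

print(parse_container_uri("docker://ubuntu"))
print(parse_container_uri("library://user/repo:1.0"))
```
&lt;br /&gt;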
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
#Another image, either Docker or Singularity&lt;br /&gt;
#A Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
You can shell directly into a Docker Hub image - explore different containers without explicitly pulling or building them first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
Basic Ubuntu&lt;br /&gt;
Basic Ubuntu Container with Python&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
Basic Ubuntu with Mvapich&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
Basic Ubuntu with ???&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera: you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed for me on Frontera because it could not find a user entry:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds on a remote builder, which is only a step away from building the container yourself and uploading it to a remote library - so just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands that you tested in the sandbox into a definition file&lt;br /&gt;
# Upload the definition file to Singularity Hub, or push the built image to the Singularity Container Library&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see Basic Commands &amp;gt; Pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches for MPI:&lt;br /&gt;
1. Bind Approach (cannot be used; Frontera does not support it)&lt;br /&gt;
2. Hybrid/Host Approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: The pull command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Run on a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab argument N (with a basic usage check)&lt;br /&gt;
    if (argc &amp;lt; 2) {&lt;br /&gt;
        fprintf(stderr, &amp;quot;Usage: %s N\n&amp;quot;, argv[0]);&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    int numN = atoi(argv[1]); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition example: nlocal = nglobal/psize,&lt;br /&gt;
     *  e.g. 1000/32 = 31.25, truncated to 31, with&lt;br /&gt;
     *  my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal&lt;br /&gt;
     *&lt;br /&gt;
     *      rank  low   high (inclusive)&lt;br /&gt;
     *      0     1     31&lt;br /&gt;
     *      1     32    62&lt;br /&gt;
     *      2     63    93&lt;br /&gt;
     *      3     94    124&lt;br /&gt;
     *      4     125   155&lt;br /&gt;
     *  (rank 0 also sweeps up the truncated remainder; see below)&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send block size to each worker (a send tag must be a concrete value, not MPI_ANY_TAG)&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    } else { //worker process (world_rank != 0)&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of squares (inclusive upper bound)&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum (again, with a concrete tag)&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
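The block decomposition in the comments above can be sanity-checked serially against the closed form N(N+1)(2N+1)/6. This standalone Python sketch (no MPI; illustrative only) mirrors the same index arithmetic, including the leftover loop handled by rank 0:&lt;br /&gt;

```python
# Serial re-implementation of the decomposition used by sum_sqrt.c:
# each rank covers the inclusive block [rank*block + 1, (rank+1)*block],
# and rank 0 additionally sweeps the truncated remainder.
def sum_of_squares(n, world_size):
    block = n // world_size
    total = 0
    for rank in range(world_size):
        lo, hi = rank * block + 1, (rank + 1) * block
        total += sum(i * i for i in range(lo, hi + 1))
    # leftover: block*world_size + 1 .. n (rank 0's extra loop)
    total += sum(i * i for i in range(block * world_size + 1, n + 1))
    return total

n, p = 100, 7
closed_form = n * (n + 1) * (2 * n + 1) // 6
print(sum_of_squares(n, p), closed_form)
```
Because the blocks are disjoint and the leftover loop picks up exactly the truncated tail, the decomposed total always matches the closed form.&lt;br /&gt;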
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image NEEDS the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model].&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (more geared toward people who are familiar with Docker containers; these containers do not seem to support singularity exec)&lt;br /&gt;
&lt;br /&gt;
ORNL&lt;br /&gt;
#Container Builder Tool - [https://github.com/olcf/container-builder]&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24912</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24912"/>
		<updated>2020-08-07T22:32:55Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Prebuilt Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on Frontera. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers; using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Get the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check that Singularity works after installation:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
The general form is singularity pull (or, for Docker sources, singularity build):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Sources include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
The general form is singularity exec IMAGE_NAME COMMAND:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run a script inside the container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the sizes of cached containers:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; see that site for examples.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; see that site for examples.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers that must be merged into a single Singularity image; for that to happen you MUST use build &lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the remote site; see that site for examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
#Another image, either Docker or Singularity&lt;br /&gt;
#A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
Basic Ubuntu&lt;br /&gt;
Basic Ubuntu Container with Python&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
Basic Ubuntu with Mvapich&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
Basic Ubuntu with ???&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed on Frontera because it could not find a user entry:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds the image on a remote build service, which is only a step away from building it yourself and uploading it to a remote; you may as well do the latter.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a definition file - transfer the setup commands you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
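The definition file mentioned in the steps above can look like the following minimal sketch. The bootstrap image, package list, and runscript here are illustrative assumptions, not a tested CyberShake recipe:&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Illustrative only: install your build dependencies and the
    # SAME MPI library as the host here (hybrid model)
    apt-get update
    apt-get install -y build-essential python3

%environment
    export LC_ALL=C

%runscript
    exec python3 "$@"
```

Build it locally with sudo singularity build IMAGE_NAME.sif name.def before uploading.&lt;br /&gt;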
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches for MPI: &lt;br /&gt;
1. Bind approach (cannot be used - Frontera does not support it)&lt;br /&gt;
2. Hybrid/host approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; flipped -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     31&lt;br /&gt;
             int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal);&lt;br /&gt;
        &lt;br /&gt;
&lt;br /&gt;
            rank low    high inclusive&lt;br /&gt;
            0    1      31 &amp;lt;=TO Do: Add loop to process 0 nlocal-1&lt;br /&gt;
            1    32     62&lt;br /&gt;
            2    63     93&lt;br /&gt;
            3    93     124&lt;br /&gt;
            4    124    135&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
        MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); // tag 0; MPI_ANY_TAG is not valid for sends&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ // inclusive upper bound&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); // tag 0; MPI_ANY_TAG is not valid for sends&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
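The block decomposition described in the comment inside sum_sqrt.c can be sanity-checked outside MPI. The short Python script below (illustrative, not part of the wiki page) mirrors the nglobal/world_size split plus the leftover loop handled by rank 0, and asserts that every integer from 1 to N is counted exactly once:&lt;br /&gt;

```python
# Mirror the decomposition from sum_sqrt.c: N split into
# world_size blocks of size N // world_size, remainder to rank 0.
nglobal, world_size = 1000, 32
block = nglobal // world_size          # 31 (truncated)

covered, total = 0, 0
for rank in range(world_size):
    my_lo, my_hi = rank * block + 1, (rank + 1) * block
    for i in range(my_lo, my_hi + 1):  # inclusive upper bound
        covered += 1
        total += i * i

# leftover terms, swept by rank 0 in the C code
for i in range(block * world_size + 1, nglobal + 1):
    covered += 1
    total += i * i

assert covered == nglobal              # each term counted exactly once
assert total == nglobal * (nglobal + 1) * (2 * nglobal + 1) // 6
print(total)                           # 333833500
```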
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model].&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared more toward people who are already familiar with Docker containers; these images do not appear to support singularity exec)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24906</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24906"/>
		<updated>2020-08-07T08:03:03Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Basic Singularity Commands */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on Frontera. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers; using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Get the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check that Singularity works after installation:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
The general form is singularity pull (or, for Docker sources, singularity build):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Sources include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
The general form is singularity exec IMAGE_NAME COMMAND:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run a script inside the container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the sizes of cached containers:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; see that site for examples.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; see that site for examples.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers that must be merged into a single Singularity image; for that to happen you MUST use build &lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the remote site; see that site for examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
#Another image, either Docker or Singularity&lt;br /&gt;
#A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed on Frontera because it could not find a user entry:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds the image on a remote build service, which is only a step away from building it yourself and uploading it to a remote; you may as well do the latter.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a definition file - transfer the setup commands you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
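The definition file mentioned in the steps above can look like the following minimal sketch. The bootstrap image, package list, and runscript here are illustrative assumptions, not a tested CyberShake recipe:&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Illustrative only: install your build dependencies and the
    # SAME MPI library as the host here (hybrid model)
    apt-get update
    apt-get install -y build-essential python3

%environment
    export LC_ALL=C

%runscript
    exec python3 "$@"
```

Build it locally with sudo singularity build IMAGE_NAME.sif name.def before uploading.&lt;br /&gt;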
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches for MPI: &lt;br /&gt;
1. Bind approach (cannot be used - Frontera does not support it)&lt;br /&gt;
2. Hybrid/host approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; flipped -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     31&lt;br /&gt;
             int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal);&lt;br /&gt;
        &lt;br /&gt;
&lt;br /&gt;
            rank low    high inclusive&lt;br /&gt;
            0    1      31 &amp;lt;=TO Do: Add loop to process 0 nlocal-1&lt;br /&gt;
            1    32     62&lt;br /&gt;
            2    63     93&lt;br /&gt;
            3    93     124&lt;br /&gt;
            4    124    135&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
        MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); // tag 0; MPI_ANY_TAG is not valid for sends&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ // inclusive upper bound&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); // tag 0; MPI_ANY_TAG is not valid for sends&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as is used on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model].&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (Geared more toward people who are familiar with Docker containers; these containers do not appear to support singularity exec)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24905</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24905"/>
		<updated>2020-08-07T08:02:05Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page documents the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container runtimes available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on Frontera. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or build their own custom containers. Using premade containers does not require a local installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternatively, Clone from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Compile and Install=====&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Check if Singularity Works=====&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of a container (lists cached images and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the remote site; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the remote site; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path needs to match the pull command shown on the remote site; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, using the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker URI to explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed on Frontera because it could not find a user entry:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is effectively equivalent to building the image yourself and uploading it to a remote library, so you may as well do that directly.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands you tested in the sandbox into a definition file&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches for MPI:&lt;br /&gt;
1. Bind Approach - cannot be used, since Frontera does not support bind&lt;br /&gt;
2. Hybrid/Host Approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only needed on the supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have that program's dependencies installed)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: The pull command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument (expects N on the command line)&lt;br /&gt;
    if (argc &amp;lt; 2) {&lt;br /&gt;
        fprintf(stderr, &amp;quot;usage: %s N\n&amp;quot;, argv[0]);&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    int numN = atoi(argv[1]); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition&lt;br /&gt;
     *     int nlocal = nglobal/psize; e.g. 1000/32 = 31.25 -&amp;gt; 31 (truncated)&lt;br /&gt;
     *     int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank low    high (inclusive)&lt;br /&gt;
     *     0    1      31&lt;br /&gt;
     *     1    32     62&lt;br /&gt;
     *     2    63     93&lt;br /&gt;
     *     3    94     124&lt;br /&gt;
     *     4    125    155&lt;br /&gt;
     *     (elements above nlocal*psize are handled by rank 0)&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //MPI_ANY_TAG is not a valid send tag&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //rank 0 also handles the leftover elements when numN is not divisible by world_size&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //MPI_ANY_TAG is not a valid send tag&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as is used on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model].&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (Geared more toward people who are familiar with Docker containers; these containers do not appear to support singularity exec)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24904</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24904"/>
		<updated>2020-08-07T08:00:02Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Selection of Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page documents the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container runtimes available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on Frontera. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or build their own custom containers. Using premade containers does not require a local installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternatively, Clone from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Compile and Install=====&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Check if Singularity Works=====&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of a container (lists cached images and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the remote site; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the remote site; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path needs to match the pull command shown on the remote site; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, using the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker URI to explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed on Frontera because it could not find a user entry:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is effectively equivalent to building the image yourself and uploading it to a remote library, so you may as well do that directly.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands you tested in the sandbox into a definition file&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches for MPI:&lt;br /&gt;
1. Bind Approach - cannot be used, since Frontera does not support bind&lt;br /&gt;
2. Hybrid/Host Approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only needed on the supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have that program's dependencies installed)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: The pull command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument (expects N on the command line)&lt;br /&gt;
    if (argc &amp;lt; 2) {&lt;br /&gt;
        fprintf(stderr, &amp;quot;usage: %s N\n&amp;quot;, argv[0]);&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    int numN = atoi(argv[1]); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition&lt;br /&gt;
     *     int nlocal = nglobal/psize; e.g. 1000/32 = 31.25 -&amp;gt; 31 (truncated)&lt;br /&gt;
     *     int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank low    high (inclusive)&lt;br /&gt;
     *     0    1      31&lt;br /&gt;
     *     1    32     62&lt;br /&gt;
     *     2    63     93&lt;br /&gt;
     *     3    94     124&lt;br /&gt;
     *     4    125    155&lt;br /&gt;
     *     (elements above nlocal*psize are handled by rank 0)&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //MPI_ANY_TAG is not a valid send tag&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    } else { //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //tag 0 again; MPI_ANY_TAG is not valid on sends&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model].&lt;br /&gt;
MVAPICH is preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24903</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24903"/>
		<updated>2020-08-07T07:59:35Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Selection of Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime. Charliecloud does not have a module on Frontera.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Check that Singularity Works=====&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity version&amp;lt;/pre&amp;gt;&lt;br /&gt;
Alternatively, you can build from a clone of the source instead of a release tarball:&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of cached containers:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull card; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull card; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images have layers that need to be merged into a single Singularity image. For that to happen you MUST use: build &lt;br /&gt;
*Note 2: the path only needs to match the image's pull card; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image using the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; includes:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker URI to explore different containers without pulling or building:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot did not work for me on Frontera because it could not find a user entry for me:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is effectively just building the container elsewhere and uploading it to a remote library, so you might as well do that yourself.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a definition file, transferring the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches for MPI: &lt;br /&gt;
1. Bind approach (cannot be used; Frontera does not support bind)&lt;br /&gt;
2. Hybrid/Host approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run module save if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument (guard against a missing argument)&lt;br /&gt;
    if (argc &amp;lt; 2) {&lt;br /&gt;
        printf(&amp;quot;Usage: %s N\n&amp;quot;, argv[0]);&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    int numN = atoi(argv[1]); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /* Block decomposition example: nglobal = 1000, world_size = 32&lt;br /&gt;
     *     block = nglobal/world_size = 1000/32 = 31 (integer division truncates 31.25)&lt;br /&gt;
     *     my_lo = (rank*block)+1, my_hi = (rank+1)*block&lt;br /&gt;
     *&lt;br /&gt;
     *     rank low    high (inclusive)&lt;br /&gt;
     *     0    1      31&lt;br /&gt;
     *     1    32     62&lt;br /&gt;
     *     2    63     93&lt;br /&gt;
     *     3    94     124&lt;br /&gt;
     *     4    125    155&lt;br /&gt;
     *&lt;br /&gt;
     *     The truncated remainder (993..1000) is swept up by rank 0 below.&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //sends need a concrete tag; MPI_ANY_TAG is only valid on receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    } else { //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //tag 0 again; MPI_ANY_TAG is not valid on sends&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model].&lt;br /&gt;
MVAPICH is preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24902</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24902"/>
		<updated>2020-08-07T07:55:53Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Build or Pull a Singularity Image */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Check that Singularity Works=====&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity version&amp;lt;/pre&amp;gt;&lt;br /&gt;
Alternatively, you can build from a clone of the source instead of a release tarball:&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of cached containers:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull card; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull card; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images have layers that need to be merged into a single Singularity image. For that to happen you MUST use: build &lt;br /&gt;
*Note 2: the path only needs to match the image's pull card; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image using the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; includes:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker URI to explore different containers without pulling or building:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot did not work for me on Frontera because it could not find a user entry for me:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is effectively just building the container elsewhere and uploading it to a remote library, so you might as well do that yourself.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a definition file, transferring the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches for MPI: &lt;br /&gt;
1. Bind approach (cannot be used; Frontera does not support bind)&lt;br /&gt;
2. Hybrid/Host approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run module save if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument (guard against a missing argument)&lt;br /&gt;
    if (argc &amp;lt; 2) {&lt;br /&gt;
        printf(&amp;quot;Usage: %s N\n&amp;quot;, argv[0]);&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    int numN = atoi(argv[1]); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     * int block = nglobal/world_size; truncated -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     * int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
     *&lt;br /&gt;
     * e.g. nglobal = 1000, world_size = 32, block = 31:&lt;br /&gt;
     *   rank  low   high (inclusive)&lt;br /&gt;
     *   0     1     31   (rank 0 also handles the leftover 993..1000)&lt;br /&gt;
     *   1     32    62&lt;br /&gt;
     *   2     63    93&lt;br /&gt;
     *   3     94    124&lt;br /&gt;
     *   4     125   155&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag must be a concrete value; MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //use tag 0; MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This works on Frontera. MPI Library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
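The block partition in sum_sqrt.c above can be sanity-checked without MPI. The sketch below is illustrative Python (not part of the SCEC codebase): it reproduces the per-rank inclusive ranges and the leftover loop, then checks the total against the closed form n(n+1)(2n+1)/6:

```python
def sum_of_squares_blocked(n, world_size):
    """Mimic sum_sqrt.c: each rank gets the inclusive range
    [rank*block + 1, (rank+1)*block]; the truncated leftover
    values are handled separately (by rank 0 in the C code)."""
    block = n // world_size
    total = 0
    for rank in range(world_size):
        lo = rank * block + 1
        hi = (rank + 1) * block
        for i in range(lo, hi + 1):  # inclusive upper bound
            total += i * i
    # leftover values block*world_size + 1 .. n
    for i in range(block * world_size + 1, n + 1):
        total += i * i
    return total

# check against the closed form n(n+1)(2n+1)/6
n = 1000
assert sum_of_squares_blocked(n, 32) == n * (n + 1) * (2 * n + 1) // 6
```

Rank 0 in the C code processes both its own block and the leftover range, which is what the final loop here models.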
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24901</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24901"/>
		<updated>2020-08-07T07:55:20Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* MPI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Check if Singularity Works=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images have layers that must be merged into one Singularity image. For that to happen you MUST use: build&lt;br /&gt;
*Note 2: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from its recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt;s include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker:// URI to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot did not work for me on Frontera because it could not find me as a user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is only a step removed from building the image yourself and uploading it to a remote, so you may as well do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches.&lt;br /&gt;
1. Bind Approach - cannot be used; Frontera does not support bind.&lt;br /&gt;
2. Hybrid/Host Approach [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: module save (if you plan to use singularity a lot)&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#By copying an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloPython.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     * int block = nglobal/world_size; truncated -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     * int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
     *&lt;br /&gt;
     * e.g. nglobal = 1000, world_size = 32, block = 31:&lt;br /&gt;
     *   rank  low   high (inclusive)&lt;br /&gt;
     *   0     1     31   (rank 0 also handles the leftover 993..1000)&lt;br /&gt;
     *   1     32    62&lt;br /&gt;
     *   2     63    93&lt;br /&gt;
     *   3     94    124&lt;br /&gt;
     *   4     125   155&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag must be a concrete value; MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //use tag 0; MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
This works on Frontera. MPI Library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
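A caveat on the N = 100000 run above (an added observation, not from the original page): sum_sqrt.c accumulates into int, and the sum of squares for N = 100000 exceeds the 32-bit INT_MAX, so the printed total will have overflowed. The closed form n(n+1)(2n+1)/6 makes the magnitudes easy to check in Python, whose integers are unbounded:

```python
INT_MAX = 2**31 - 1  # typical 32-bit signed int limit

def sum_of_squares(n):
    # closed form for 1^2 + 2^2 + ... + n^2
    return n * (n + 1) * (2 * n + 1) // 6

# N = 1000 still fits in a 32-bit int ...
assert sum_of_squares(1000) == 333833500
# ... but N = 100000 does not: the C accumulators would need long long
assert sum_of_squares(100000) == 333338333350000
assert sum_of_squares(100000) > INT_MAX
```

Switching mySum, pSum, and totalSum to long long (and the corresponding MPI_INT arguments to MPI_LONG_LONG for the sum messages) would avoid the overflow.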
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24900</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24900"/>
		<updated>2020-08-07T07:53:51Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* MPI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Check if Singularity Works=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images have layers that must be merged into one Singularity image. For that to happen you MUST use: build&lt;br /&gt;
*Note 2: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from its recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt;s include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker:// URI to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot did not work for me on Frontera because it could not find me as a user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is only a step removed from building the image yourself and uploading it to a remote, so you may as well do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches.&lt;br /&gt;
# Bind Approach - cannot be used; Frontera does not support bind.&lt;br /&gt;
# Hybrid/Host Approach&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: module save (if you plan to use singularity a lot)&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#By copying an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; truncated -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank low    high (inclusive)&lt;br /&gt;
     *     0    1      31&lt;br /&gt;
     *     1    32     62&lt;br /&gt;
     *     2    63     93&lt;br /&gt;
     *     3    94     124&lt;br /&gt;
     *     4    125    155&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image file needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24899</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24899"/>
		<updated>2020-08-07T07:52:59Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Execute Command in from Outside Container */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tools; because of this wide adoption, the module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Install from the Git Repository=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
To check that Singularity works after installation, run &amp;lt;code&amp;gt;singularity --version&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include the Singularity Container Library (library), Singularity Hub (shub), and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images have layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, using the runscript that was placed into the container when the image was built from the recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker image URI to explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot did not work for me on Frontera because it could not find me as a user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is only a step away from building the image yourself and uploading it to a remote, so just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the built image to the Singularity Container Library or Singularity Hub&lt;br /&gt;
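The numbered steps above can be captured in a Singularity definition file. The sketch below is illustrative only - the base image, package list, and file paths are placeholders, not the actual CyberShake setup:&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: library&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%files&lt;br /&gt;
    helloWorld.py /opt/helloWorld.py&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    # install build dependencies and the MPI library matching the host here&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 /opt/helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build it on a machine where you have sudo: &amp;lt;code&amp;gt;$ sudo singularity build myContainer.sif myContainer.def&amp;lt;/code&amp;gt;&lt;br /&gt;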
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two methods of running MPI programs:&lt;br /&gt;
1. Bind approach (cannot be used here)&lt;br /&gt;
The bind approach cannot be used on Frontera.&lt;br /&gt;
2. Hybrid/Host approach&lt;br /&gt;
For MPI containers you have to use the hybrid approach, since Frontera does not support the bind approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Run on a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; truncated -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank low    high (inclusive)&lt;br /&gt;
     *     0    1      31&lt;br /&gt;
     *     1    32     62&lt;br /&gt;
     *     2    63     93&lt;br /&gt;
     *     3    94     124&lt;br /&gt;
     *     4    125    155&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
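The block decomposition used in sum_sqrt.c can be sanity-checked without MPI. The following Python sketch (an illustrative simulation, not part of the CyberShake codebase) reproduces the per-rank lo/hi ranges plus the leftover loop and compares the total against the closed form n(n+1)(2n+1)/6:&lt;br /&gt;

```python
def sum_of_squares(n, world_size):
    """Simulate the rank decomposition from sum_sqrt.c (no MPI needed)."""
    block = n // world_size
    total = 0
    # rank 0 processes its own block, 1..block ...
    for i in range(1, block + 1):
        total += i * i
    # ... plus the leftover values truncated off by integer division
    for i in range(block * world_size + 1, n + 1):
        total += i * i
    # worker ranks each process their inclusive [lo, hi] range
    for rank in range(1, world_size):
        lo = rank * block + 1
        hi = (rank + 1) * block
        for i in range(lo, hi + 1):
            total += i * i
    return total

def closed_form(n):
    # sum of squares 1..n = n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

print(sum_of_squares(1000, 32) == closed_form(1000))  # prints True
```

Note the inclusive upper bound (hi + 1) in the worker loop: every i in 1..n must be counted exactly once across ranks, or the total will disagree with the closed form.&lt;br /&gt;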
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image file needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24898</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24898"/>
		<updated>2020-08-07T07:52:51Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Get Image */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tools; because of this wide adoption, the module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Install from the Git Repository=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
To check that Singularity works after installation, run &amp;lt;code&amp;gt;singularity --version&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
======Get Image======&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include the Singularity Container Library (library), Singularity Hub (shub), and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
======Execute a Command from Outside the Container======&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images have layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, using the runscript that was placed into the container when the image was built from the recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker image URI to explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot did not work for me on Frontera because it could not find me as a user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is only a step away from building the image yourself and uploading it to a remote, so just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the built image to the Singularity Container Library or Singularity Hub&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches to MPI.&lt;br /&gt;
1. Bind - the host's MPI installation is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid - an MPI library compatible with the host's is also installed inside the container.&lt;br /&gt;
For MPI containers you have to use the hybrid approach, because Frontera does not support bind.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: if you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition example: nglobal=1000, world_size=32&lt;br /&gt;
     *     int block = nglobal/world_size; -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31 (truncated)&lt;br /&gt;
     *     int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank  low   high (inclusive)&lt;br /&gt;
     *     0     1     31&lt;br /&gt;
     *     1     32    62&lt;br /&gt;
     *     2     63    93&lt;br /&gt;
     *     3     94    124&lt;br /&gt;
     *     4     125   155&lt;br /&gt;
     *     (rank 0 also handles the truncated remainder, block*world_size+1 .. N)&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); /* tag 0: MPI_ANY_TAG is not a valid send tag */&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of squares (upper bound is inclusive)&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); /* tag 0: MPI_ANY_TAG is not a valid send tag */&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH is preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
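The expected output can be sanity-checked against the closed-form identity 1^2 + 2^2 + ... + N^2 = N(N+1)(2N+1)/6. A minimal shell sketch (note, as an aside: the program above accumulates its sums in int, which overflows for N as large as 100000, so it is worth checking with a small N first):&lt;br /&gt;

```shell
# Closed-form check for the sum of squares: N(N+1)(2N+1)/6
N=100
expected=$(( N * (N + 1) * (2 * N + 1) / 6 ))
echo "Sum of Squares for $N is $expected"   # prints: Sum of Squares for 100 is 338350
```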
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24897</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24897"/>
		<updated>2020-08-07T07:52:44Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Setting up a serial container (on your computer) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, CharlieCloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternatively, Get the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
====Get Image====&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
====Execute a Command from Outside the Container====&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into one Singularity image. For that to happen you MUST use build&lt;br /&gt;
*Note 2: the path needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker image directly, which lets you explore different containers without pulling or building a local image first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera: you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed for me on Frontera because it could not find a user entry for my account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds on a remote build service, which amounts to building the container yourself and uploading it to a remote, so it is simpler to do that directly.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands you tested in the sandbox into a definition file&lt;br /&gt;
# Build an image from the definition file and upload it to the Singularity Container Library or Singularity Hub&lt;br /&gt;
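As an illustration of the definition-file step, a minimal example might look like the following. This is only a sketch under assumed choices: the base image, package list, and runscript are placeholders, not the actual CyberShake recipe.&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # placeholder dependencies; install whatever your program needs
    apt-get update
    apt-get install -y python3
    # for the hybrid MPI model, also build the same MPI library as the host here

%environment
    export LC_ALL=C

%runscript
    python3 --version
```

On a machine where you have root, build it with: sudo singularity build mycontainer.sif mycontainer.def&lt;br /&gt;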
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two approaches to MPI.&lt;br /&gt;
1. Bind - the host's MPI installation is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid - an MPI library compatible with the host's is also installed inside the container.&lt;br /&gt;
For MPI containers you have to use the hybrid approach, because Frontera does not support bind.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: if you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition example: nglobal=1000, world_size=32&lt;br /&gt;
     *     int block = nglobal/world_size; -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31 (truncated)&lt;br /&gt;
     *     int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank  low   high (inclusive)&lt;br /&gt;
     *     0     1     31&lt;br /&gt;
     *     1     32    62&lt;br /&gt;
     *     2     63    93&lt;br /&gt;
     *     3     94    124&lt;br /&gt;
     *     4     125   155&lt;br /&gt;
     *     (rank 0 also handles the truncated remainder, block*world_size+1 .. N)&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); /* tag 0: MPI_ANY_TAG is not a valid send tag */&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of squares (upper bound is inclusive)&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); /* tag 0: MPI_ANY_TAG is not a valid send tag */&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image needs the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH is preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
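The expected output can be sanity-checked against the closed-form identity 1^2 + 2^2 + ... + N^2 = N(N+1)(2N+1)/6. A minimal shell sketch (note, as an aside: the program above accumulates its sums in int, which overflows for N as large as 100000, so it is worth checking with a small N first):&lt;br /&gt;

```shell
# Closed-form check for the sum of squares: N(N+1)(2N+1)/6
N=100
expected=$(( N * (N + 1) * (2 * N + 1) / 6 ))
echo "Sum of Squares for $N is $expected"   # prints: Sum of Squares for 100 is 338350
```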
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24896</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24896"/>
		<updated>2020-08-07T07:52:27Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Setting up a serial container (on your computer) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, CharlieCloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternatively, Get the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
====Get Image====&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
====Execute a Command from Outside the Container====&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into one Singularity image. For that to happen you MUST use build&lt;br /&gt;
*Note 2: the path needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker image directly, which lets you explore different containers without pulling or building a local image first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
In my case, --fakeroot failed on Frontera because it could not find a user entry for my account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubuntu18-mvapich.def&lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds on a remote build service, which is only a step away from building the image yourself and uploading it to a remote; you may as well just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a definition file - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
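The sandbox workflow above can be sketched as follows (the sandbox, container, and definition file names here are hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox mysandbox/ library://ubuntu&lt;br /&gt;
$ sudo singularity shell --writable mysandbox/&lt;br /&gt;
Singularity&amp;gt; apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
Singularity&amp;gt; exit&lt;br /&gt;
$ sudo singularity build mycontainer.sif mycontainer.def&amp;lt;/pre&amp;gt;&lt;br /&gt;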
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two MPI models: &lt;br /&gt;
1. Bind model (cannot be used) - the host's MPI library is bind-mounted into the container at runtime. Frontera does not support this.&lt;br /&gt;
2. Hybrid model - the MPI library is installed both on the host and inside the container. For MPI containers on Frontera, you have to use the hybrid approach.&lt;br /&gt;
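With the hybrid model, the host MPI launcher (ibrun on Frontera) starts one container instance per rank, and each instance runs a binary that was compiled against a matching MPI library inside the image. A hypothetical launch looks like:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./my_mpi_program&amp;lt;/pre&amp;gt;&lt;br /&gt;
(the MPI Containers section below walks through a full example)&lt;br /&gt;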
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition&lt;br /&gt;
     *     int nlocal = nglobal/psize; truncated -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank low    high (inclusive)&lt;br /&gt;
     *     0    1      31   (rank 0 also processes the leftover range nlocal*psize+1..nglobal)&lt;br /&gt;
     *     1    32     62&lt;br /&gt;
     *     2    63     93&lt;br /&gt;
     *     3    94     124&lt;br /&gt;
     *     4    125    155&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); // MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ // upper bound is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image NEEDS the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24895</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24895"/>
		<updated>2020-08-07T07:51:25Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Compile Program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC container technologies at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries - OpenMPI, MPICH, and IntelMPI, to name a few. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Clone the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
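After compiling and installing (see the Sylabs installation guide for the mconfig/make steps), you can confirm the binary works with:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&lt;br /&gt;
singularity version 3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;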
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image:&lt;br /&gt;
singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached images (this also shows their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: On Frontera, Singularity cannot run on the login nodes&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of multiple layers that must be merged into a single Singularity image. For that to happen you MUST use build, not pull.&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an external command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
*Another image, either Docker or Singularity&lt;br /&gt;
*A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building a local copy first:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
In my case, --fakeroot failed on Frontera because it could not find a user entry for my account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubuntu18-mvapich.def&lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds on a remote build service, which is only a step away from building the image yourself and uploading it to a remote; you may as well just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a definition file - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
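The sandbox workflow above can be sketched as follows (the sandbox, container, and definition file names here are hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox mysandbox/ library://ubuntu&lt;br /&gt;
$ sudo singularity shell --writable mysandbox/&lt;br /&gt;
Singularity&amp;gt; apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
Singularity&amp;gt; exit&lt;br /&gt;
$ sudo singularity build mycontainer.sif mycontainer.def&amp;lt;/pre&amp;gt;&lt;br /&gt;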
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two MPI models: &lt;br /&gt;
1. Bind model (cannot be used) - the host's MPI library is bind-mounted into the container at runtime. Frontera does not support this.&lt;br /&gt;
2. Hybrid model - the MPI library is installed both on the host and inside the container. For MPI containers on Frontera, you have to use the hybrid approach.&lt;br /&gt;
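With the hybrid model, the host MPI launcher (ibrun on Frontera) starts one container instance per rank, and each instance runs a binary that was compiled against a matching MPI library inside the image. A hypothetical launch looks like:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./my_mpi_program&amp;lt;/pre&amp;gt;&lt;br /&gt;
(the MPI Containers section below walks through a full example)&lt;br /&gt;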
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition&lt;br /&gt;
     *     int nlocal = nglobal/psize; truncated -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank low    high (inclusive)&lt;br /&gt;
     *     0    1      31   (rank 0 also processes the leftover range nlocal*psize+1..nglobal)&lt;br /&gt;
     *     1    32     62&lt;br /&gt;
     *     2    63     93&lt;br /&gt;
     *     3    94     124&lt;br /&gt;
     *     4    125    155&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); // MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ // upper bound is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image====== &lt;br /&gt;
The Singularity image NEEDS the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
This works on Frontera. MPI library: MVAPICH&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24894</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24894"/>
		<updated>2020-08-07T07:50:14Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Make MPI Program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC container technologies at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries - OpenMPI, MPICH, and IntelMPI, to name a few. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Clone the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
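After compiling and installing (see the Sylabs installation guide for the mconfig/make steps), you can confirm the binary works with:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&lt;br /&gt;
singularity version 3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;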
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image:&lt;br /&gt;
singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached images (this also shows their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: On Frontera, Singularity cannot run on the login nodes&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of multiple layers that must be merged into a single Singularity image. For that to happen you MUST use build, not pull.&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an external command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image - explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera: you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed for me on Frontera because it could not find a user entry for my account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is only one step removed from building the image yourself and uploading it to a remote registry, so just do that instead.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Install the correct MPI library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to Singularity Container Library or Singularity Hub&lt;br /&gt;
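The steps above end in a definition file; a minimal sketch of one is shown below (the base image, package list, and runscript name are illustrative assumptions, not part of this page):&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # commands tested in the sandbox go here
    apt-get update
    apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    python3 /opt/helloWorld.py
```

Build it with singularity build IMAGE_NAME.sif name.def (see Basic Singularity Commands, Build).&lt;br /&gt;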
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two ways of running MPI programs:&lt;br /&gt;
1. Bind model - the host's MPI library is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid model - the container carries its own MPI library, matched to the host's, and the host MPI launcher starts the containerized ranks.&lt;br /&gt;
For MPI containers on Frontera, you have to use the hybrid approach, since Frontera does not support the bind model.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only if using the supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run module save if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#By copying an image from your local machine to Frontera with scp&lt;br /&gt;
#By pulling from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make Example File: sum_sqrt.c&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /* Block decomposition (example: nglobal=1000, world_size=32):&lt;br /&gt;
     *   block = 1000/32 = 31 (integer division truncates 31.25 to 31)&lt;br /&gt;
     *   rank r covers the inclusive range [r*block+1, (r+1)*block]:&lt;br /&gt;
     *   rank 0 -&amp;gt; 1..31, rank 1 -&amp;gt; 32..62, ..., rank 31 -&amp;gt; 962..992&lt;br /&gt;
     *   rank 0 also sweeps the truncated remainder 993..1000&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        long long mySum=0; //long long: the squares and totals overflow a 32-bit int for large N&lt;br /&gt;
        long long pSum=0;&lt;br /&gt;
        long long totalSum=0;&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
        //send block size to the worker processes&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag 0: MPI_ANY_TAG is only valid on receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(long long)i*i;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process the truncated remainder block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(long long)left_over*left_over;&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive from the worker processes&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_LONG_LONG, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;pSum: %lld\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %lld\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        long long mySum=0;&lt;br /&gt;
        //receive the block size from rank 0&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of squares (my_hi is inclusive)&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){&lt;br /&gt;
             mySum+=(long long)i*i;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum to rank 0&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
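The block decomposition above (each rank takes a contiguous block of nglobal/world_size integers, with rank 0 also sweeping the truncated remainder) can be sanity-checked outside MPI. A small sketch in Python, where block_ranges is an illustrative helper mirroring the C variables, not part of this page:&lt;br /&gt;

```python
def block_ranges(nglobal, world_size):
    """Inclusive (lo, hi) range for each rank, mirroring the C program."""
    block = nglobal // world_size  # integer division truncates, like C
    ranges = [(r * block + 1, (r + 1) * block) for r in range(world_size)]
    # Rank 0 additionally handles the truncated remainder
    leftover = (block * world_size + 1, nglobal)
    return ranges, leftover

ranges, leftover = block_ranges(1000, 32)
covered = [i for lo, hi in ranges for i in range(lo, hi + 1)]
covered += range(leftover[0], leftover[1] + 1)
# Every integer 1..nglobal is covered exactly once
assert sorted(covered) == list(range(1, 1001))
```

This confirms that the per-rank blocks plus rank 0's remainder sweep partition 1..N with no gaps or double counting.&lt;br /&gt;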
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image======&lt;br /&gt;
The image must have the same MPI library installed inside the container as the host uses [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]. This container has mvapich preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
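The total the run should report can be cross-checked serially with the closed-form identity 1^2 + 2^2 + ... + N^2 = N(N+1)(2N+1)/6; a quick check in plain Python (sum_of_squares is an illustrative helper, no container needed). Note the value for N=100000 overflows a 32-bit int, so the sums in the C program need a 64-bit type:&lt;br /&gt;

```python
def sum_of_squares(n):
    """Closed-form sum of squares 1..n."""
    return n * (n + 1) * (2 * n + 1) // 6

# agrees with brute force for a small n
assert sum_of_squares(100) == sum(i * i for i in range(1, 101))
print(sum_of_squares(100000))  # prints 333338333350000
```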
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24893</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24893"/>
		<updated>2020-08-07T07:49:43Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* MPI Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC containers at the time of selection were Singularity, Charliecloud, and Shifter. Of these three container technologies, Singularity was the most widely adopted and had more open-source tools available. Because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Get the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page. Please see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page. Please see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made up of layers, which must be merged into a single Singularity image. For that to happen you MUST use build, not pull.&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page. Please see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an external command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image - explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera: you do not have sudo, and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed for me on Frontera because it could not find a user entry for my account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag is only one step removed from building the image yourself and uploading it to a remote registry, so just do that instead.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Install the correct MPI library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a Definition File - transfer the setup commands that you tested in the sandbox into a definition file.&lt;br /&gt;
# Upload the definition file to Singularity Container Library or Singularity Hub&lt;br /&gt;
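The steps above end in a definition file; a minimal sketch of one is shown below (the base image, package list, and runscript name are illustrative assumptions, not part of this page):&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # commands tested in the sandbox go here
    apt-get update
    apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    python3 /opt/helloWorld.py
```

Build it with singularity build IMAGE_NAME.sif name.def (see Basic Singularity Commands, Build).&lt;br /&gt;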
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two ways of running MPI programs:&lt;br /&gt;
1. Bind model - the host's MPI library is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid model - the container carries its own MPI library, matched to the host's, and the host MPI launcher starts the containerized ranks.&lt;br /&gt;
For MPI containers on Frontera, you have to use the hybrid approach, since Frontera does not support the bind model.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only if using the supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run module save if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#By copying an image from your local machine to Frontera with scp&lt;br /&gt;
#By pulling from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
======Make MPI Program======&lt;br /&gt;
&lt;br /&gt;
Make an example file (e.g. named sum_sqrt.c):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /* Block decomposition (example: nglobal=1000, world_size=32):&lt;br /&gt;
     *   block = 1000/32 = 31 (integer division truncates 31.25 to 31)&lt;br /&gt;
     *   rank r covers the inclusive range [r*block+1, (r+1)*block]:&lt;br /&gt;
     *   rank 0 -&amp;gt; 1..31, rank 1 -&amp;gt; 32..62, ..., rank 31 -&amp;gt; 962..992&lt;br /&gt;
     *   rank 0 also sweeps the truncated remainder 993..1000&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        long long mySum=0; //long long: the squares and totals overflow a 32-bit int for large N&lt;br /&gt;
        long long pSum=0;&lt;br /&gt;
        long long totalSum=0;&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
        //send block size to the worker processes&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag 0: MPI_ANY_TAG is only valid on receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(long long)i*i;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process the truncated remainder block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(long long)left_over*left_over;&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive from the worker processes&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_LONG_LONG, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;pSum: %lld\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %lld\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        long long mySum=0;&lt;br /&gt;
        //receive the block size from rank 0&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of squares (my_hi is inclusive)&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){&lt;br /&gt;
             mySum+=(long long)i*i;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum to rank 0&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
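The block decomposition above (each rank takes a contiguous block of nglobal/world_size integers, with rank 0 also sweeping the truncated remainder) can be sanity-checked outside MPI. A small sketch in Python, where block_ranges is an illustrative helper mirroring the C variables, not part of this page:&lt;br /&gt;

```python
def block_ranges(nglobal, world_size):
    """Inclusive (lo, hi) range for each rank, mirroring the C program."""
    block = nglobal // world_size  # integer division truncates, like C
    ranges = [(r * block + 1, (r + 1) * block) for r in range(world_size)]
    # Rank 0 additionally handles the truncated remainder
    leftover = (block * world_size + 1, nglobal)
    return ranges, leftover

ranges, leftover = block_ranges(1000, 32)
covered = [i for lo, hi in ranges for i in range(lo, hi + 1)]
covered += range(leftover[0], leftover[1] + 1)
# Every integer 1..nglobal is covered exactly once
assert sorted(covered) == list(range(1, 1001))
```

This confirms that the per-rank blocks plus rank 0's remainder sweep partition 1..N with no gaps or double counting.&lt;br /&gt;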
&lt;br /&gt;
&lt;br /&gt;
======Compile Program======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
======Build or Pull a Singularity Image======&lt;br /&gt;
The image must have the same MPI library installed inside the container as the host uses [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]. This container has mvapich preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======Execute your command======&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
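The total the run should report can be cross-checked serially with the closed-form identity 1^2 + 2^2 + ... + N^2 = N(N+1)(2N+1)/6; a quick check in plain Python (sum_of_squares is an illustrative helper, no container needed). Note the value for N=100000 overflows a 32-bit int, so the sums in the C program need a 64-bit type:&lt;br /&gt;

```python
def sum_of_squares(n):
    """Closed-form sum of squares 1..n."""
    return n * (n + 1) * (2 * n + 1) // 6

# agrees with brute force for a small n
assert sum_of_squares(100) == sum(i * i for i in range(1, 101))
print(sum_of_squares(100000))  # prints 333338333350000
```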
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24892</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24892"/>
		<updated>2020-08-07T07:46:47Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Installing Singularity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC containers at the time of selection were Singularity, Charliecloud, and Shifter. Of these three container technologies, Singularity was the most widely adopted and had more open-source tools available. Because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
=====Install Dependencies=====&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Download Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Set Up Go=====&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Install Singularity=====&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Alternative: Get the Source from Git=====&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page at the remote site; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page at the remote site; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path must match the pull command shown on the container's page at the remote site; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (previously known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into an image directly from a Docker URI to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera, because you do not have sudo and --fakeroot does not work.&lt;br /&gt;
--fakeroot did not work for me on Frontera because it could not find a user entry for my account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds on a remote service, which is only a step away from building the image yourself and uploading it to a remote library - so just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
# Create a definition file - transfer the setup commands that you tested in the sandbox into a definition file&lt;br /&gt;
# Upload the definition file to the Singularity Container Library or Singularity Hub&lt;br /&gt;
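&lt;br /&gt;
As an illustration, a minimal definition file (hypothetical base image and packages; adjust to your needs) might look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: library&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 --version&amp;lt;/pre&amp;gt;&lt;br /&gt;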
&lt;br /&gt;
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH (see basic commands &amp;gt; pull)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two ways of providing MPI: &lt;br /&gt;
1. Bind model - the host's MPI installation is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid model - the container carries its own MPI library, matched to the host's. For MPI containers on Frontera you have to use the hybrid approach, since Frontera does not support the bind model.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, you must have its dependencies installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloPython.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
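&lt;br /&gt;
Assuming the script above is saved as test-singularity-python.slurm (a filename chosen here for illustration), it can be submitted and monitored with:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sbatch test-singularity-python.slurm&lt;br /&gt;
$ squeue -u $USER&amp;lt;/pre&amp;gt;&lt;br /&gt;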
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition (example: nglobal=1000, world_size=32)&lt;br /&gt;
     *  int block = nglobal/world_size;  // 1000/32 = 31.25, truncated to 31&lt;br /&gt;
     *  int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
     *&lt;br /&gt;
     *  rank  low   high (inclusive)&lt;br /&gt;
     *  0     1     31   (rank 0 also processes the truncated leftover 993..1000)&lt;br /&gt;
     *  1     32    62&lt;br /&gt;
     *  2     63    93&lt;br /&gt;
     *  3     94    124&lt;br /&gt;
     *  4     125   155&lt;br /&gt;
     *  ...&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag 0: MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //tag 0: MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
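&lt;br /&gt;
The block decomposition can be sanity-checked serially, without MPI. This sketch (not part of the original program) applies the same per-rank ranges plus the leftover loop and compares the result against the closed form N(N+1)(2N+1)/6:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;assert.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    long long N = 1000, P = 32;&lt;br /&gt;
    long long block = N / P;&lt;br /&gt;
    long long total = 0;&lt;br /&gt;
    //each rank's inclusive range [rank*block+1, (rank+1)*block]&lt;br /&gt;
    for (long long rank = 0; rank &amp;lt; P; rank++) {&lt;br /&gt;
        long long lo = (rank*block)+1, hi = (rank+1)*block;&lt;br /&gt;
        for (long long i = lo; i &amp;lt;= hi; i++) total += i*i;&lt;br /&gt;
    }&lt;br /&gt;
    //leftover beyond block*P, handled by rank 0 in the MPI version&lt;br /&gt;
    for (long long i = block*P + 1; i &amp;lt;= N; i++) total += i*i;&lt;br /&gt;
    assert(total == N*(N+1)*(2*N+1)/6);&lt;br /&gt;
    printf(&amp;quot;Sum of Squares for %lld is %lld\n&amp;quot;, N, total);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;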
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared more toward people who are already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24890</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24890"/>
		<updated>2020-08-07T07:45:32Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Building Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC containers at the time of selection were Singularity, Charliecloud, and Shifter. Of these three container technologies, Singularity was the most widely adopted and had more open-source tools; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternative: Get the Singularity Source from Git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (&amp;lt;code&amp;gt;build&amp;lt;/code&amp;gt; also accepts a remote source, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; prefixes include the Singularity Container Library (library://), Singularity Hub (shub://) and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a command from outside the container&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached images (and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: On HPC systems such as Frontera, Singularity cannot be run on the login nodes; use a compute node.&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers, which must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command, given from outside, inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker image directly, to explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera, because you do not have sudo and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed on Frontera because it could not find a user entry for the account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubuntu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag hands the build to a remote build service, which is only a step away from building the image yourself and uploading it to a remote; you may as well just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
# Install Singularity&lt;br /&gt;
# Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
# Install desired dependencies in the sandbox&lt;br /&gt;
## Build Dependencies&lt;br /&gt;
## Correct MPI Library and set environment variables&lt;br /&gt;
## Add any files you want to run&lt;br /&gt;
#Create a Definition File - transfer the setup commands you tested in the sandbox into a definition file&lt;br /&gt;
#Upload the definition file (or the built image) to the Singularity Container Library or Singularity Hub&lt;br /&gt;
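The steps above can be captured in a minimal definition file. This is a sketch only - the base image, package list, and file names below are placeholders for whatever your sandbox actually needed:&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: library&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update&lt;br /&gt;
    apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%files&lt;br /&gt;
    helloWorld.py /opt/helloWorld.py&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 /opt/helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build it on a machine where you have sudo:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build myContainer.sif myContainer.def&amp;lt;/pre&amp;gt;&lt;br /&gt;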
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two methods for running MPI programs.&lt;br /&gt;
1. Bind model - the host's MPI installation is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid model - the container carries its own MPI library, which must be compatible with the host's MPI.&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: module save (if you plan to use singularity a lot)&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, the container must have that program's dependencies installed)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: The pull command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Run on a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition example: nglobal=1000, world_size=32&lt;br /&gt;
     *     block = nglobal/world_size = 1000/32 = 31 (integer division truncates 31.25)&lt;br /&gt;
     *&lt;br /&gt;
     *     rank  low   high (inclusive)&lt;br /&gt;
     *     0     1     31&lt;br /&gt;
     *     1     32    62&lt;br /&gt;
     *     2     63    93&lt;br /&gt;
     *     3     94    124&lt;br /&gt;
     *     ...&lt;br /&gt;
     *     Values above block*world_size (here 993..1000) are the leftover, handled by rank 0.&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0 /* MPI_ANY_TAG is not a valid send tag */, MPI_COMM_WORLD);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ // inclusive upper bound, matching the master's block&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0 /* MPI_ANY_TAG is not a valid send tag */, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command (keep N small; the sums are 32-bit ints, and the sum of squares up to 100000 overflows an int)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 1000&amp;lt;/pre&amp;gt;&lt;br /&gt;
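You can sanity-check the program's output against the closed form N(N+1)(2N+1)/6. A standalone check in plain C (no MPI; file and variable names are up to you):&lt;br /&gt;
&amp;lt;pre&amp;gt;#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    long long n = 1000;&lt;br /&gt;
    long long loop_sum = 0;&lt;br /&gt;
    for (long long i = 1; i &amp;lt;= n; i++)&lt;br /&gt;
        loop_sum += i * i;&lt;br /&gt;
    // closed-form sum of squares&lt;br /&gt;
    long long closed = n * (n + 1) * (2 * n + 1) / 6;&lt;br /&gt;
    printf(&amp;quot;loop=%lld closed=%lld\n&amp;quot;, loop_sum, closed);&lt;br /&gt;
    return loop_sum == closed ? 0 : 1;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;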
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24889</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24889"/>
		<updated>2020-08-07T07:44:32Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Building Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC Containers at the time of selection were Singularity, Charlie Cloud, and Shifter. Between the 3 of these container technologies, Singularity was widely adapted and had more open source tools. Because of this wide adaptation the module already existed in the Frontera system. Singularity has built-in support for different MPI libraries from OpenMPI, MPICH, and IntelMPI to name a few. Shifter, although light weight, is highly reliant on MPICH ABI. This would require site-specific MPI libraries to be copied to the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, get the source tree with git instead of the release tarball&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
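The steps above download the source but do not compile it. A compile-and-install sketch, following the Sylabs installation guide (exact commands may vary by version):&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Check that Singularity works:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;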
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (singularity build can also fetch and write an image from a remote source, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; can be the Singularity Container Library (library://), Singularity Hub (shub://), or Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a command from outside the container&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached images (and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: On HPC systems such as Frontera, Singularity cannot be run on the login nodes; use a compute node.&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers, which must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command, given from outside, inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a Docker image directly, to explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera, because you do not have sudo and --fakeroot does not work.&lt;br /&gt;
--fakeroot failed on Frontera because it could not find a user entry for the account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubuntu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag hands the build to a remote build service, which is only a step away from building the image yourself and uploading it to a remote; you may as well just do that.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
1. Install Singularity&lt;br /&gt;
2. Pull a Basic Image and use the --sandbox flag&lt;br /&gt;
3. Install desired dependencies in the sandbox&lt;br /&gt;
a. Build Dependencies&lt;br /&gt;
b. Correct MPI Library and set environment variables&lt;br /&gt;
c. Add any files you want to run&lt;br /&gt;
4. Create a Definition File - transfer the setup commands you tested in the sandbox into a definition file&lt;br /&gt;
5. Upload the definition file (or the built image) to the Singularity Container Library or Singularity Hub&lt;br /&gt;
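The steps above can be captured in a minimal definition file. This is a sketch only - the base image, package list, and file names below are placeholders for whatever your sandbox actually needed:&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: library&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update&lt;br /&gt;
    apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%files&lt;br /&gt;
    helloWorld.py /opt/helloWorld.py&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 /opt/helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build it on a machine where you have sudo:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build myContainer.sif myContainer.def&amp;lt;/pre&amp;gt;&lt;br /&gt;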
&lt;br /&gt;
To retrieve your container, just pull from library://USER/PATH or shub://USER/PATH&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two methods for running MPI programs.&lt;br /&gt;
1. Bind model - the host's MPI installation is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid model - the container carries its own MPI library, which must be compatible with the host's MPI.&lt;br /&gt;
For MPI containers on Frontera, you therefore have to use the hybrid approach.&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: module save (if you plan to use singularity a lot)&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, the container must have that program's dependencies installed)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: The pull command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition example: nglobal=1000, world_size=32&lt;br /&gt;
     *     block = nglobal/world_size = 1000/32 = 31 (integer division truncates 31.25)&lt;br /&gt;
     *&lt;br /&gt;
     *     rank  low   high (inclusive)&lt;br /&gt;
     *     0     1     31&lt;br /&gt;
     *     1     32    62&lt;br /&gt;
     *     2     63    93&lt;br /&gt;
     *     3     94    124&lt;br /&gt;
     *     ...&lt;br /&gt;
     *     Values above block*world_size (here 993..1000) are the leftover, handled by rank 0.&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0 /* MPI_ANY_TAG is not a valid send tag */, MPI_COMM_WORLD);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ // inclusive upper bound, matching the master's block&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0 /* MPI_ANY_TAG is not a valid send tag */, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command (keep N small; the sums are 32-bit ints, and the sum of squares up to 100000 overflows an int)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 1000&amp;lt;/pre&amp;gt;&lt;br /&gt;
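You can sanity-check the program's output against the closed form N(N+1)(2N+1)/6. A standalone check in plain C (no MPI; file and variable names are up to you):&lt;br /&gt;
&amp;lt;pre&amp;gt;#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    long long n = 1000;&lt;br /&gt;
    long long loop_sum = 0;&lt;br /&gt;
    for (long long i = 1; i &amp;lt;= n; i++)&lt;br /&gt;
        loop_sum += i * i;&lt;br /&gt;
    // closed-form sum of squares&lt;br /&gt;
    long long closed = n * (n + 1) * (2 * n + 1) / 6;&lt;br /&gt;
    printf(&amp;quot;loop=%lld closed=%lld\n&amp;quot;, loop_sum, closed);&lt;br /&gt;
    return loop_sum == closed ? 0 : 1;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;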
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24888</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24888"/>
		<updated>2020-08-07T07:42:24Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Building Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC Containers at the time of selection were Singularity, Charlie Cloud, and Shifter. Between the 3 of these container technologies, Singularity was widely adapted and had more open source tools. Because of this wide adaptation the module already existed in the Frontera system. Singularity has built-in support for different MPI libraries from OpenMPI, MPICH, and IntelMPI to name a few. Shifter, although light weight, is highly reliant on MPICH ABI. This would require site-specific MPI libraries to be copied to the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternative: get the source from the Git repository instead of the release tarball&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
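&lt;br /&gt;
&lt;br /&gt;
Downloading and unpacking the source does not by itself install Singularity. A typical build-and-verify sequence (a sketch following the Singularity 3.x user guide; ./mconfig and ./builddir are that guide's defaults):&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install &amp;amp;&amp;amp; \&lt;br /&gt;
singularity version&amp;lt;/pre&amp;gt;&lt;br /&gt;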
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image. The general form is singularity pull &amp;lt;source&amp;gt;* (the example below uses build, which also accepts a remote source):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt;s include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a command from outside the container. The general form is singularity exec IMAGE_NAME COMMAND, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached images and their sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: On TACC systems such as Frontera, Singularity cannot run on the login nodes; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the image's page; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the image's page; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers, which must be merged into a single Singularity image; to do that you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the image's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-another image, either Docker or Singularity&lt;br /&gt;
-a Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
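&lt;br /&gt;
A minimal definition file might look like this (a hypothetical sketch; the base image and package list are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    # commands run inside the container at build time&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    # executed by &amp;quot;singularity run&amp;quot;&lt;br /&gt;
    python3 --version&amp;lt;/pre&amp;gt;&lt;br /&gt;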
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can also shell into a Docker image directly - explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
You cannot build containers on Frontera because --fakeroot does not work there; in testing it failed because it could not find the user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity build --fakeroot hello.sif ubunutu18-mvapich.def &lt;br /&gt;
FATAL:   could not use fakeroot: no user entry found for llocsin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The --remote flag builds the image on the Sylabs remote build service, which is just a step away from building it yourself and uploading it to a remote.&lt;br /&gt;
&lt;br /&gt;
To build from scratch:&lt;br /&gt;
1. Install Singularity&lt;br /&gt;
2. Pull a basic image with the --sandbox flag&lt;br /&gt;
3. Install the desired dependencies in the sandbox:&lt;br /&gt;
a. build dependencies&lt;br /&gt;
b. the correct MPI library, with environment variables set&lt;br /&gt;
c. any files you want to run&lt;br /&gt;
4. Create a definition file&lt;br /&gt;
Transfer the setup commands you tested in the sandbox into a definition file.&lt;br /&gt;
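&lt;br /&gt;
The sandbox workflow above can be sketched as follows (the sandbox and file names are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build --sandbox mySandbox/ library://ubuntu&lt;br /&gt;
$ singularity shell --writable mySandbox/    # install and test dependencies interactively&lt;br /&gt;
$ singularity build myContainer.sif myContainer.def    # then rebuild from the definition file&amp;lt;/pre&amp;gt;&lt;br /&gt;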
&lt;br /&gt;
&lt;br /&gt;
====== MPI ======&lt;br /&gt;
Singularity supports two MPI models:&lt;br /&gt;
1. Bind model - the host MPI installation is bind-mounted into the container at runtime. This cannot be used on Frontera.&lt;br /&gt;
2. Hybrid model - a compatible MPI library is installed inside the container and cooperates with the host MPI when the job is launched.&lt;br /&gt;
Since Frontera does not support the bind model, MPI containers there have to use the hybrid approach.&lt;br /&gt;
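&lt;br /&gt;
In the hybrid model the definition file installs an MPI library inside the container, along the lines of this hypothetical sketch (the base image is illustrative, and the MPI implementation built in %post must be compatible with the one on the host):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y build-essential wget&lt;br /&gt;
    # download, configure, and install the same MPI implementation the host uses,&lt;br /&gt;
    # then compile your MPI program against it&amp;lt;/pre&amp;gt;&lt;br /&gt;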
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only on a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
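&lt;br /&gt;
Submit the script above with sbatch (the file name is illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sbatch test-singularity-python.slurm&amp;lt;/pre&amp;gt;&lt;br /&gt;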
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; flipped -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     31&lt;br /&gt;
             int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal);&lt;br /&gt;
        &lt;br /&gt;
&lt;br /&gt;
            rank low    high inclusive&lt;br /&gt;
            0    1      31 &amp;lt;=TO Do: Add loop to process 0 nlocal-1&lt;br /&gt;
            1    32     62&lt;br /&gt;
            2    63     93&lt;br /&gt;
            3    94     124&lt;br /&gt;
            4    125    155&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
        MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //MPI_ANY_TAG is not valid for sends; use a concrete tag&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process (world_rank != 0)&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //MPI_ANY_TAG is not valid for sends; use a concrete tag&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile the program (with the same MPI implementation that is installed in the container)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
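&lt;br /&gt;
To sanity-check the result, the sum of squares 1^2+...+N^2 has the closed form N(N+1)(2N+1)/6. A quick serial check (a standalone sketch, not part of the MPI program):&lt;br /&gt;
&amp;lt;pre&amp;gt;#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
int main(void) {&lt;br /&gt;
    long long N = 100000;&lt;br /&gt;
    // closed form avoids the loop entirely&lt;br /&gt;
    long long expected = N*(N+1)*(2*N+1)/6;&lt;br /&gt;
    printf(&amp;quot;%lld\n&amp;quot;, expected);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: for N=100000 the true sum (333338333350000) overflows a 32-bit int, so the MPI program's int sums will wrap for large N.&lt;br /&gt;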
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24887</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24887"/>
		<updated>2020-08-07T07:28:40Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Using Prebuilt Containers or Building Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, CharlieCloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require a local installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternative: get the source from the Git repository instead of the release tarball&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image. The general form is singularity pull &amp;lt;source&amp;gt;* (the example below uses build, which also accepts a remote source):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt;s include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a command from outside the container. The general form is singularity exec IMAGE_NAME COMMAND, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached images and their sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: On TACC systems such as Frontera, Singularity cannot run on the login nodes; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the image's page; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the image's page; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers, which must be merged into a single Singularity image; to do that you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the image's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-another image, either Docker or Singularity&lt;br /&gt;
-a Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can also shell into a Docker image directly - explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only on a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; flipped -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     31&lt;br /&gt;
             int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal);&lt;br /&gt;
        &lt;br /&gt;
&lt;br /&gt;
            rank low    high inclusive&lt;br /&gt;
            0    1      31 &amp;lt;=TO Do: Add loop to process 0 nlocal-1&lt;br /&gt;
            1    32     62&lt;br /&gt;
            2    63     93&lt;br /&gt;
            3    94     124&lt;br /&gt;
            4    125    155&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
        MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //MPI_ANY_TAG is not valid for sends; use a concrete tag&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process (world_rank != 0)&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //my_hi is inclusive&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //MPI_ANY_TAG is not valid for sends; use a concrete tag&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile the program (with the same MPI implementation that is installed in the container)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24886</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24886"/>
		<updated>2020-08-07T07:26:02Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Using Prebuilt Containers or Building Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, CharlieCloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require a local installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternative: get the source from the Git repository instead of the release tarball&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Sources include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List Cached Images and Their Sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; includes:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
You can shell straight into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
==== Prebuilt Containers ====&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
===== Generic =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: To run a particular program, its dependencies must already be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition example:&lt;br /&gt;
     *     int nlocal = nglobal/world_size;  e.g. 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     int my_lo = (my_rank*nlocal)+1, my_hi = (my_rank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank  low   high (inclusive)&lt;br /&gt;
     *     0     1     31&lt;br /&gt;
     *     1     32    62&lt;br /&gt;
     *     2     63    93&lt;br /&gt;
     *     3     94    124&lt;br /&gt;
     *     4     125   155&lt;br /&gt;
     *     (elements above nlocal*world_size are swept up by rank 0)&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); /* MPI_ANY_TAG is not a valid send tag */&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process the leftover elements beyond block*world_size&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ /* my_hi is inclusive */&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
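The block decomposition in the program above (each rank r covers the inclusive range [r*block+1, (r+1)*block], and rank 0 also sweeps any leftover elements when N is not divisible by the process count) can be sanity-checked serially. A minimal Python sketch, not part of the original page; the function names are illustrative:

```python
# Serial check of the MPI block decomposition used in sum_sqrt.c.
# Rank r handles the inclusive range [r*block + 1, (r+1)*block];
# rank 0 additionally processes leftovers above block*world_size.

def partition(n, world_size):
    block = n // world_size
    ranges = [(r * block + 1, (r + 1) * block) for r in range(world_size)]
    leftover = (block * world_size + 1, n)  # handled by rank 0
    return ranges, leftover

def sum_of_squares(n, world_size):
    ranges, leftover = partition(n, world_size)
    total = sum(i * i for lo, hi in ranges for i in range(lo, hi + 1))
    total += sum(i * i for i in range(leftover[0], leftover[1] + 1))
    return total

# Closed form n(n+1)(2n+1)/6 for cross-checking.
def closed_form(n):
    return n * (n + 1) * (2 * n + 1) // 6

assert sum_of_squares(1000, 32) == closed_form(1000)
```

Every element 1..N is covered exactly once, so the per-rank partial sums add up to the closed-form total.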
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
This container has MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
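One caveat worth noting (not mentioned on the original page): the program accumulates into a C int, which is 32 bits on typical 64-bit Linux systems, and for N=100000 the true sum of squares already exceeds the 32-bit signed range, so the printed total overflows. A quick Python check using the closed form n(n+1)(2n+1)/6:

```python
# Exact sum of squares 1^2 + ... + N^2 via the closed form.
def sum_sq(n):
    return n * (n + 1) * (2 * n + 1) // 6

n = 100_000
exact = sum_sq(n)
int32_max = 2**31 - 1

print(exact)              # 333338333350000
print(exact > int32_max)  # True: overflows a 32-bit signed int
```

Switching the accumulators to long long (and using MPI_LONG_LONG in the sends and receives) would avoid the overflow.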
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24885</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24885"/>
		<updated>2020-08-07T07:25:35Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Building or Using Prebuilt Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC containers at the time of selection were Singularity, Charliecloud, and Shifter. Of these three container technologies, Singularity was the most widely adopted and had more open-source tooling; because of this wide adoption, the module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using pre-made containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, Get the Source from Git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Sources include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List Cached Images and Their Sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; includes:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
You can shell straight into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using Prebuilt Containers or Building Containers==&lt;br /&gt;
&lt;br /&gt;
===== Frontera =====&lt;br /&gt;
&lt;br /&gt;
===== Summit =====&lt;br /&gt;
&lt;br /&gt;
===== Generic =====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Building Containers ====&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: To run a particular program, its dependencies must already be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Block decomposition example:&lt;br /&gt;
     *     int nlocal = nglobal/world_size;  e.g. 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     int my_lo = (my_rank*nlocal)+1, my_hi = (my_rank+1)*nlocal;&lt;br /&gt;
     *&lt;br /&gt;
     *     rank  low   high (inclusive)&lt;br /&gt;
     *     0     1     31&lt;br /&gt;
     *     1     32    62&lt;br /&gt;
     *     2     63    93&lt;br /&gt;
     *     3     94    124&lt;br /&gt;
     *     4     125   155&lt;br /&gt;
     *     (elements above nlocal*world_size are swept up by rank 0)&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); /* MPI_ANY_TAG is not a valid send tag */&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process the leftover elements beyond block*world_size&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else{ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ /* my_hi is inclusive */&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
This container has MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24884</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24884"/>
		<updated>2020-08-07T07:21:49Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Basic Singularity Commands */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC containers at the time of selection were Singularity, Charliecloud, and Shifter. Of these three container technologies, Singularity was the most widely adopted and had more open-source tooling; because of this wide adoption, the module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using pre-made containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, Get the Source from Git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Sources include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec IMAGE_NAME COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List Cached Images and Their Sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: docker images have layers and it needs to be merged into 1 singularity image. For that to happen you MUST use: build &lt;br /&gt;
*Note 2: the path must match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs an external command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed in the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (see build section for more details)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
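&lt;br /&gt;
A minimal definition file might look like the following (a sketch; the base image and installed package are placeholder choices):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    exec python3 &amp;quot;$@&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build it with &amp;lt;pre&amp;gt;$ sudo singularity build myImage.sif name.def&amp;lt;/pre&amp;gt;&lt;br /&gt;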
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Prebuilt Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#By copying an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; flipped -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     31&lt;br /&gt;
             int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal);&lt;br /&gt;
        &lt;br /&gt;
&lt;br /&gt;
            rank low    high inclusive&lt;br /&gt;
            0    1      31 &amp;lt;=TO Do: Add loop to process 0 nlocal-1&lt;br /&gt;
            1    32     62&lt;br /&gt;
            2    63     93&lt;br /&gt;
            3    93     124&lt;br /&gt;
            4    124    135&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag must be a concrete value; MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //inclusive upper bound, matching the master's block&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //tag 0, matching the master's receive&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
This container comes with MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
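&lt;br /&gt;
As a sanity check, the total can be compared against the closed-form identity 1^2 + 2^2 + ... + N^2 = N(N+1)(2N+1)/6. Note that the program above accumulates into int, which overflows for N as large as 100000, so compare at a smaller N (e.g. 1000):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo $((1000*1001*2001/6))  # closed form for N=1000; prints 333833500&amp;lt;/pre&amp;gt;&lt;br /&gt;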
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared more toward people already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24883</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24883"/>
		<updated>2020-08-07T07:20:43Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Building or Using Pre Made Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, CharlieCloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for users who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(Alternative) Get the Singularity source with git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
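&lt;br /&gt;
The source tree still needs to be compiled and installed. Following the Sylabs installation guide, the build steps are roughly:&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then confirm the install worked:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity version&amp;lt;/pre&amp;gt;&lt;br /&gt;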
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers that must be merged into a single Singularity image. For that to happen you MUST use build.&lt;br /&gt;
*Note 2: the path must match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs an external command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed in the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (TO DO: important; needs many more details and recommendations)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
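&lt;br /&gt;
Until the definition-file section is written, a minimal definition file might look like the following (a sketch; the base image and installed package are placeholder choices):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    exec python3 &amp;quot;$@&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build it with &amp;lt;pre&amp;gt;$ sudo singularity build myImage.sif name.def&amp;lt;/pre&amp;gt;&lt;br /&gt;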
&lt;br /&gt;
== Building or Using Prebuilt Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using a Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#By copying an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /** Blocks&lt;br /&gt;
     *     int nlocal = nglobal/psize; flipped -&amp;gt; 1000/32 = 31.25 -&amp;gt; 31&lt;br /&gt;
     *     31&lt;br /&gt;
             int my_lo = (myrank*nlocal)+1, my_hi = (myrank+1)*nlocal);&lt;br /&gt;
        &lt;br /&gt;
&lt;br /&gt;
            rank low    high inclusive&lt;br /&gt;
            0    1      31 &amp;lt;=TO Do: Add loop to process 0 nlocal-1&lt;br /&gt;
            1    32     62&lt;br /&gt;
            2    63     93&lt;br /&gt;
            3    93     124&lt;br /&gt;
            4    124    135&lt;br /&gt;
        * */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
            MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); //tag must be a concrete value; MPI_ANY_TAG is only valid for receives&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End\n&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ //inclusive upper bound, matching the master's block&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); //tag 0, matching the master's receive&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
This container comes with MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
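&lt;br /&gt;
As a sanity check, the total can be compared against the closed-form identity 1^2 + 2^2 + ... + N^2 = N(N+1)(2N+1)/6. Note that the program above accumulates into int, which overflows for N as large as 100000, so compare at a smaller N (e.g. 1000):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo $((1000*1001*2001/6))  # closed form for N=1000; prints 333833500&amp;lt;/pre&amp;gt;&lt;br /&gt;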
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared more toward people already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24882</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24882"/>
		<updated>2020-08-07T07:19:29Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* MPI Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, CharlieCloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for users who want to run Singularity locally or create their own custom containers. Using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(Alternative) Get the Singularity source with git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
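&lt;br /&gt;
The source tree still needs to be compiled and installed. Following the Sylabs installation guide, the build steps are roughly:&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then confirm the install worked:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity version&amp;lt;/pre&amp;gt;&lt;br /&gt;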
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers that must be merged into a single Singularity image. For that to happen you MUST use build.&lt;br /&gt;
*Note 2: the path must match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs an external command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed in the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (TO DO: important; needs many more details and recommendations)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
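&lt;br /&gt;
Until the definition-file section is written, a minimal definition file might look like the following (a sketch; the base image and installed package are placeholder choices):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    exec python3 &amp;quot;$@&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build it with &amp;lt;pre&amp;gt;$ sudo singularity build myImage.sif name.def&amp;lt;/pre&amp;gt;&lt;br /&gt;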
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py: &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have its dependencies installed)&lt;br /&gt;
Options:&lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char** argv) {&lt;br /&gt;
    //Grab Argument&lt;br /&gt;
    char* temp = argv[1];&lt;br /&gt;
    int numN = atoi(temp); //N&lt;br /&gt;
    printf(&amp;quot;Argument N: %d \n&amp;quot;, numN);&lt;br /&gt;
&lt;br /&gt;
    // Initialize the MPI environment&lt;br /&gt;
    MPI_Init(NULL, NULL);&lt;br /&gt;
&lt;br /&gt;
    // Get the number of processes&lt;br /&gt;
    int world_size;&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);&lt;br /&gt;
    // Get the rank of the process&lt;br /&gt;
    printf(&amp;quot;Processes: %d \n&amp;quot;, world_size);&lt;br /&gt;
    &lt;br /&gt;
    int world_rank;&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);&lt;br /&gt;
&lt;br /&gt;
    //Local Variables        &lt;br /&gt;
    int nglobal = numN;&lt;br /&gt;
    int block = nglobal/world_size;&lt;br /&gt;
    int my_lo = (world_rank*block)+1, my_hi = (world_rank+1)*block;&lt;br /&gt;
    /* Block decomposition: each rank is assigned nglobal/world_size values.&lt;br /&gt;
     * Integer division truncates (e.g. 1000/32 = 31), so rank 0 also sweeps&lt;br /&gt;
     * the leftover range block*world_size+1 .. nglobal.&lt;br /&gt;
     *&lt;br /&gt;
     *     rank  low   high (inclusive), block = 31&lt;br /&gt;
     *     0     1     31&lt;br /&gt;
     *     1     32    62&lt;br /&gt;
     *     2     63    93&lt;br /&gt;
     *     3     94    124&lt;br /&gt;
     */&lt;br /&gt;
   &lt;br /&gt;
    if(world_rank==0){ //master process&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        int pSum=0;&lt;br /&gt;
        int totalSum=0;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Main Process Start\n&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
        //send to P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt;world_size; myprocessor++){&lt;br /&gt;
        MPI_Send(&amp;amp;block, 1, MPI_INT, myprocessor, 0, MPI_COMM_WORLD); /* tag must be concrete; MPI_ANY_TAG is only valid on receives */&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process my block&lt;br /&gt;
        for(int i=1 ; i &amp;lt;= block; i++){&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //process rounded truncated block&lt;br /&gt;
        for(int left_over=block*world_size+1; left_over &amp;lt;= numN; left_over++){&lt;br /&gt;
             mySum+=(left_over*left_over);&lt;br /&gt;
        }&lt;br /&gt;
        totalSum+=mySum;&lt;br /&gt;
&lt;br /&gt;
        //receive P processors&lt;br /&gt;
        for(int myprocessor=1; myprocessor &amp;lt; world_size; myprocessor++){&lt;br /&gt;
            MPI_Recv(&amp;amp;pSum, 1, MPI_INT, myprocessor, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
            totalSum+=pSum;&lt;br /&gt;
            printf(&amp;quot;MpSum: %d\n&amp;quot;, pSum);&lt;br /&gt;
        }&lt;br /&gt;
 &lt;br /&gt;
        //print final total&lt;br /&gt;
        printf(&amp;quot;Sum of Squares for %d is %d\n&amp;quot;, numN, totalSum);&lt;br /&gt;
&lt;br /&gt;
        printf(&amp;quot;Main Process End&amp;quot;);&lt;br /&gt;
    }else if(world_rank != 0){ //worker process&lt;br /&gt;
        printf(&amp;quot;Start Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
        int mySum=0;&lt;br /&gt;
        //receive&lt;br /&gt;
        MPI_Recv(&amp;amp;block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);&lt;br /&gt;
&lt;br /&gt;
        //calculate my sum of square&lt;br /&gt;
        for(int i=my_lo; i &amp;lt;= my_hi; i++){ /* inclusive upper bound */&lt;br /&gt;
             mySum+=(i*i);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //send my sum&lt;br /&gt;
        MPI_Send(&amp;amp;mySum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);&lt;br /&gt;
        printf(&amp;quot;End Process: %d\n&amp;quot;, world_rank);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // Finalize the MPI environment.&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
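The partitioning logic above (one block per rank, plus a leftover sweep by rank 0) can be sanity-checked outside MPI. This is an illustrative sketch, not part of the original program:&lt;br /&gt;

```python
# Sketch: verify that the block decomposition used by sum_sqrt.c
# covers 1..N exactly once.
def block_ranges(n, world_size):
    """Return each rank's inclusive (low, high) range plus the leftover range."""
    block = n // world_size              # integer division truncates, e.g. 1000/32 = 31
    ranges = [(rank * block + 1, (rank + 1) * block) for rank in range(world_size)]
    leftover = (block * world_size + 1, n)   # swept by rank 0
    return ranges, leftover

ranges, leftover = block_ranges(1000, 32)
covered = [i for lo, hi in ranges + [leftover] for i in range(lo, hi + 1)]
assert sorted(covered) == list(range(1, 1001))   # every value 1..1000 appears once

# The distributed sums of squares must match the closed form n(n+1)(2n+1)/6.
total = sum(i * i for i in covered)
assert total == 1000 * 1001 * 2001 // 6
```

These are the same bounds the my_lo/my_hi arithmetic in sum_sqrt.c produces.&lt;br /&gt;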
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull shub://mkandes/ubuntu-mvapich&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec ubuntu-mvapich_latest.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared more toward people already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24881</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24881"/>
		<updated>2020-08-07T07:11:32Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page documents the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on Frontera. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using pre-made containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Compile and install Singularity (cloning from Git as an alternative to the release tarball)&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2 &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image: &amp;lt;code&amp;gt;singularity pull &amp;lt;source&amp;gt;&amp;lt;/code&amp;gt;* (or build, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; can be the Singularity Container Library (library://), Singularity Hub (shub://), or Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a command from outside the container: &amp;lt;code&amp;gt;singularity exec IMAGE_NAME command&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached container images and their sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path just needs to match the pull command shown on the remote site; see that website for examples.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path just needs to match the pull command shown on the remote site; see that website for examples.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path just needs to match the pull command shown on the remote site; see that website for examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (TO DO: expand; this is important, with many details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
*Another image, either Docker or Singularity&lt;br /&gt;
*A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
You can shell into a docker:// URI directly, to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Complex workflows can be set up with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre-Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py: &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have its dependencies installed)&lt;br /&gt;
Options:&lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with a Compute Node&lt;br /&gt;
&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared more toward people already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24880</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24880"/>
		<updated>2020-08-07T07:11:05Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page documents the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on Frontera. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using pre-made containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Compile and install Singularity (cloning from Git as an alternative to the release tarball)&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2 &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image: &amp;lt;code&amp;gt;singularity pull &amp;lt;source&amp;gt;&amp;lt;/code&amp;gt;* (or build, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; can be the Singularity Container Library (library://), Singularity Hub (shub://), or Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a command from outside the container: &amp;lt;code&amp;gt;singularity exec IMAGE_NAME command&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List cached container images and their sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path just needs to match the pull command shown on the remote site; see that website for examples.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path just needs to match the pull command shown on the remote site; see that website for examples.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for that to happen you MUST use build&lt;br /&gt;
*Note 2: the path just needs to match the pull command shown on the remote site; see that website for examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (TO DO: expand; this is important, with many details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
*Another image, either Docker or Singularity&lt;br /&gt;
*A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
You can shell into a docker:// URI directly, to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Complex workflows can be set up with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre-Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py: &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have its dependencies installed)&lt;br /&gt;
Options:&lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with a Compute Node&lt;br /&gt;
a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH comes preinstalled in this container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared more toward people already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24879</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24879"/>
		<updated>2020-08-07T07:10:47Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page documents the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on Frontera. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using pre-made containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build and Install&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check That Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (or singularity build, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; can be the Singularity Container Library (library://), Singularity Hub (shub://), or Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List Cached Images and Their Sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the login nodes; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for this you must use build instead of pull.&lt;br /&gt;
*Note 2: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was added when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' - builds an image (BIG TO DO: very important; many details and opinions to add)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker URI and explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Complex workflows can be set up with a definition (recipe) file.&lt;br /&gt;
Alternatively, a sandbox directory can be used to prototype the final container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
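&lt;br /&gt;
As a starting point, a minimal definition file might look like the sketch below (the base image and package choices are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 &amp;quot;$@&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
A container can then be built from it with: sudo singularity build IMAGE_NAME.sif name.def&lt;br /&gt;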
&lt;br /&gt;
== Building or Using Pre-Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py: &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only on TACC systems): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have its dependencies installed)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with the Compute Node&lt;br /&gt;
 a. idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
 b. sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make an MPI Program - (Ex: named sum_sqrt.c; the listing below is an illustrative sketch that sums sqrt(1..N) across ranks - any MPI program will do)&lt;br /&gt;
&amp;lt;pre&amp;gt;/* sum_sqrt.c - divides the terms of sum(sqrt(i)) for i=1..N across MPI ranks */&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char **argv) {&lt;br /&gt;
    int rank, size;&lt;br /&gt;
    long n, i;&lt;br /&gt;
    double local = 0.0, total = 0.0;&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
&lt;br /&gt;
    n = (argc &amp;gt; 1) ? atol(argv[1]) : 1000000;&lt;br /&gt;
    /* each rank sums a strided share of the terms */&lt;br /&gt;
    for (i = rank + 1; i &amp;lt;= n; i += size)&lt;br /&gt;
        local += sqrt((double)i);&lt;br /&gt;
&lt;br /&gt;
    MPI_Reduce(&amp;amp;local, &amp;amp;total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);&lt;br /&gt;
    if (rank == 0)&lt;br /&gt;
        printf(&amp;quot;sum of sqrt(1..%ld) = %f\n&amp;quot;, n, total);&lt;br /&gt;
&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
    return 0;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c -lm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
(mvapich comes preinstalled in the container used here)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24878</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24878"/>
		<updated>2020-08-07T07:09:47Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers; using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2  # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build and Install&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check That Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (or singularity build, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; can be the Singularity Container Library (library://), Singularity Hub (shub://), or Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List Cached Images and Their Sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the login nodes; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for this you must use build instead of pull.&lt;br /&gt;
*Note 2: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was added when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' - builds an image (BIG TO DO: very important; many details and opinions to add)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker URI and explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Complex workflows can be set up with a definition (recipe) file.&lt;br /&gt;
Alternatively, a sandbox directory can be used to prototype the final container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
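&lt;br /&gt;
As a starting point, a minimal definition file might look like the sketch below (the base image and package choices are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 &amp;quot;$@&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
A container can then be built from it with: sudo singularity build IMAGE_NAME.sif name.def&lt;br /&gt;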
&lt;br /&gt;
== Building or Using Pre-Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py: &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only on TACC systems): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have its dependencies installed)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with the Compute Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make an MPI Program - (Ex: named sum_sqrt.c; the listing below is an illustrative sketch that sums sqrt(1..N) across ranks - any MPI program will do)&lt;br /&gt;
&amp;lt;pre&amp;gt;/* sum_sqrt.c - divides the terms of sum(sqrt(i)) for i=1..N across MPI ranks */&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char **argv) {&lt;br /&gt;
    int rank, size;&lt;br /&gt;
    long n, i;&lt;br /&gt;
    double local = 0.0, total = 0.0;&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
&lt;br /&gt;
    n = (argc &amp;gt; 1) ? atol(argv[1]) : 1000000;&lt;br /&gt;
    /* each rank sums a strided share of the terms */&lt;br /&gt;
    for (i = rank + 1; i &amp;lt;= n; i += size)&lt;br /&gt;
        local += sqrt((double)i);&lt;br /&gt;
&lt;br /&gt;
    MPI_Reduce(&amp;amp;local, &amp;amp;total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);&lt;br /&gt;
    if (rank == 0)&lt;br /&gt;
        printf(&amp;quot;sum of sqrt(1..%ld) = %f\n&amp;quot;, n, total);&lt;br /&gt;
&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
    return 0;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c -lm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
(mvapich comes preinstalled in the container used here)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24877</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24877"/>
		<updated>2020-08-07T07:09:20Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Containers on Frontera */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers; using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2  # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build and Install&lt;br /&gt;
&amp;lt;pre&amp;gt;./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C ./builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check That Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (or singularity build, as below)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; can be the Singularity Container Library (library://), Singularity Hub (shub://), or Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List Cached Images and Their Sizes:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the login nodes; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of layers that must be merged into a single Singularity image; for this you must use build instead of pull.&lt;br /&gt;
*Note 2: the URI only needs to match the pull command shown on the container's page; see the remote website for examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was added when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' - builds an image (BIG TO DO: very important; many details and opinions to add)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker URI and explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Complex workflows can be set up with a definition (recipe) file.&lt;br /&gt;
Alternatively, a sandbox directory can be used to prototype the final container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
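&lt;br /&gt;
As a starting point, a minimal definition file might look like the sketch below (the base image and package choices are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 &amp;quot;$@&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
A container can then be built from it with: sudo singularity build IMAGE_NAME.sif name.def&lt;br /&gt;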
&lt;br /&gt;
== Building or Using Pre-Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py: &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the Singularity module (only on TACC systems): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, the container must have its dependencies installed)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interact with the Compute Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -N 1 &lt;br /&gt;
#SBATCH -J test-singularity-python&lt;br /&gt;
#SBATCH -o test-singularity-python.o%j&lt;br /&gt;
#SBATCH -e test-singularity-python.e%j&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Run the actual program&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make an MPI Program - (Ex: named sum_sqrt.c; the listing below is an illustrative sketch that sums sqrt(1..N) across ranks - any MPI program will do)&lt;br /&gt;
&amp;lt;pre&amp;gt;/* sum_sqrt.c - divides the terms of sum(sqrt(i)) for i=1..N across MPI ranks */&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char **argv) {&lt;br /&gt;
    int rank, size;&lt;br /&gt;
    long n, i;&lt;br /&gt;
    double local = 0.0, total = 0.0;&lt;br /&gt;
&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
&lt;br /&gt;
    n = (argc &amp;gt; 1) ? atol(argv[1]) : 1000000;&lt;br /&gt;
    /* each rank sums a strided share of the terms */&lt;br /&gt;
    for (i = rank + 1; i &amp;lt;= n; i += size)&lt;br /&gt;
        local += sqrt((double)i);&lt;br /&gt;
&lt;br /&gt;
    MPI_Reduce(&amp;amp;local, &amp;amp;total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);&lt;br /&gt;
    if (rank == 0)&lt;br /&gt;
        printf(&amp;quot;sum of sqrt(1..%ld) = %f\n&amp;quot;, n, total);&lt;br /&gt;
&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
    return 0;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c -lm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container as on the host [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
(mvapich comes preinstalled in the container used here)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24869</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24869"/>
		<updated>2020-08-07T05:08:42Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers; using premade containers does not require installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After extracting, build and install from the singularity directory (typically ./mconfig, make -C builddir, then sudo make -C builddir install; see the Singularity installation guide).&lt;br /&gt;
&lt;br /&gt;
Alternatively, check out the source from Git&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity pull myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include the Singularity Container Library (library), Singularity Hub (shub), and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List Cached Images (and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of multiple layers that must be merged into a single Singularity image; for that to happen you MUST use build. &lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-another image, either Docker or Singularity&lt;br /&gt;
-a Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building them first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition file.&lt;br /&gt;
Alternatively, prototype the final container in a sandbox directory: sudo singularity build --sandbox ubuntu_s docker://ubuntu&lt;br /&gt;
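A definition file describes how a container is built. As a sketch only (the base image and packages below are illustrative, not necessarily what CyberShake needs):&lt;br /&gt;
&amp;lt;pre&amp;gt;Bootstrap: docker&lt;br /&gt;
From: ubuntu:18.04&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
    apt-get update &amp;amp;&amp;amp; apt-get install -y python3&lt;br /&gt;
&lt;br /&gt;
%runscript&lt;br /&gt;
    python3 "$@"&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build it with sudo singularity build myContainer.sif myContainer.def&lt;br /&gt;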
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run module save if you plan to use Singularity a lot&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, the dependencies it needs must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with a Compute Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
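The sbatch script above is still to be written; as a sketch, a minimal job file might look like the following (the job name, queue, and time limit are placeholders - check the Frontera user guide for valid values):&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH -J hello-container&lt;br /&gt;
#SBATCH -N 1&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
&lt;br /&gt;
module load tacc-singularity&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
Submit it with sbatch JOB_FILE_NAME.&lt;br /&gt;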
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
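The program itself is not filled in above; as a sketch, sum_sqrt.c could be an MPI program that splits the sum of square roots from 1 to N across ranks and combines the partial sums with MPI_Reduce (the actual test program may differ):&lt;br /&gt;
&amp;lt;pre&amp;gt;#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Each rank sums sqrt(i) over a strided slice of 1..n; rank 0 prints the combined total. */&lt;br /&gt;
int main(int argc, char **argv) {&lt;br /&gt;
    int rank, size;&lt;br /&gt;
    long n, i;&lt;br /&gt;
    double local = 0.0, total = 0.0;&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    n = (argc &amp;gt; 1) ? atol(argv[1]) : 100000;&lt;br /&gt;
    for (i = rank + 1; i &amp;lt;= n; i += size)&lt;br /&gt;
        local += sqrt((double)i);&lt;br /&gt;
    MPI_Reduce(&amp;amp;local, &amp;amp;total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);&lt;br /&gt;
    if (rank == 0)&lt;br /&gt;
        printf(&amp;quot;sum of sqrt(1..%ld) = %f\n&amp;quot;, n, total);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
    return 0;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;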
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
(MVAPICH comes preinstalled in this container)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (Geared toward people who are familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24868</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24868"/>
		<updated>2020-08-07T05:08:19Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container runtimes available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check if Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute Command in from Outside Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of multiple layers that must be merged into a single Singularity image; for that to happen you MUST use build. &lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-another image, either Docker or Singularity&lt;br /&gt;
-a Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker UI - explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with Recipe File:&lt;br /&gt;
Alternatively-&lt;br /&gt;
Sandbox Directory Prototype Final Container: sudo singularity build --sandbox ubuntu_s docker://ubuntu&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: module save (if you plan to use singularity a lot)&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, the dependencies it needs must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
mvapich preinstalled in this container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24867</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24867"/>
		<updated>2020-08-07T05:08:04Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container runtimes available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check if Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute Command in from Outside Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull card. please see the remote website for example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of multiple layers that must be merged into a single Singularity image; for that to happen you MUST use build. &lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its recipe&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
-another image, either Docker or Singularity&lt;br /&gt;
-a Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a docker UI - explore different containers without pulling or building&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with Recipe File:&lt;br /&gt;
Alternatively-&lt;br /&gt;
Sandbox Directory Prototype Final Container: sudo singularity build --sandbox ubuntu_s docker://ubuntu&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Install Module (only if using Supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: module save (if you plan to use singularity a lot)&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: to run a particular program, the dependencies it needs must be installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
#Copy an image from your local machine to Frontera with scp&lt;br /&gt;
#Pull from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1 &amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
mvapich preinstalled in this container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24866</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24866"/>
		<updated>2020-08-07T05:07:49Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container runtimes available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for different MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, clone a specific release of Singularity from its Git repository&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; options include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of cached container images:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of multiple layers that must be merged into a single Singularity image; for this to happen you MUST use build.&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; options include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker URI to explore different containers without pulling or building them first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
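The definition-file section above is still marked To Do. As a placeholder, a minimal definition file might look like the sketch below; the base image, package list, and runscript are illustrative assumptions, not the actual CyberShake recipe:&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Runs inside the container at build time
    apt-get update && apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    # Executed by "singularity run"
    exec python3 "$@"
```

Build it with: sudo singularity build myContainer.sif name.def&lt;br /&gt;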
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, you must have its dependencies installed in the container)&lt;br /&gt;
Options: &lt;br /&gt;
##By copying it from your local to Frontera with scp&lt;br /&gt;
##Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
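The sbatch option above is still marked TO DO. A minimal job script might look like the following sketch; the queue name, allocation, and walltime are placeholders that should be checked against the Frontera user guide:&lt;br /&gt;

```
#!/bin/bash
#SBATCH -J hello-container        # job name
#SBATCH -o hello-container.o%j    # stdout log
#SBATCH -N 1                      # nodes
#SBATCH -n 1                      # MPI tasks
#SBATCH -p development            # queue (placeholder)
#SBATCH -t 00:05:00               # walltime
#SBATCH -A MY_ALLOCATION          # allocation (placeholder)

module load tacc-singularity
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py
```

Submit with: sbatch jobscript.sh&lt;br /&gt;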
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
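The sum_sqrt.c listing above was left empty. A hypothetical version consistent with the usage in step 4 (one integer argument, e.g. 100000) might look like this; the actual program is not shown on this page, so the details are assumptions:&lt;br /&gt;

```c
/* sum_sqrt.c - hypothetical sketch: sums sqrt(1..n) across MPI ranks */
#include <math.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long n = (argc > 1) ? atol(argv[1]) : 100000;

    /* Each rank sums sqrt(i) over a strided share of 1..n */
    double local = 0.0;
    for (long i = rank + 1; i <= n; i += size)
        local += sqrt((double)i);

    /* Combine the partial sums on rank 0 */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of square roots up to %ld = %f\n", n, total);

    MPI_Finalize();
    return 0;
}
```

Compile with mpicc as in step 2.&lt;br /&gt;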
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH is preinstalled in this container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24865</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24865"/>
		<updated>2020-08-07T05:07:06Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require a local installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp;  # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, clone a specific release of Singularity from its Git repository&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; options include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of cached container images:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of multiple layers that must be merged into a single Singularity image; for this to happen you MUST use build.&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; options include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker URI to explore different containers without pulling or building them first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
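The definition-file section above is still marked To Do. As a placeholder, a minimal definition file might look like the sketch below; the base image, package list, and runscript are illustrative assumptions, not the actual CyberShake recipe:&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Runs inside the container at build time
    apt-get update && apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    # Executed by "singularity run"
    exec python3 "$@"
```

Build it with: sudo singularity build myContainer.sif name.def&lt;br /&gt;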
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, you must have its dependencies installed in the container)&lt;br /&gt;
##Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
##By copying it from your local to Frontera with scp&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
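The sbatch option above is still marked TO DO. A minimal job script might look like the following sketch; the queue name, allocation, and walltime are placeholders that should be checked against the Frontera user guide:&lt;br /&gt;

```
#!/bin/bash
#SBATCH -J hello-container        # job name
#SBATCH -o hello-container.o%j    # stdout log
#SBATCH -N 1                      # nodes
#SBATCH -n 1                      # MPI tasks
#SBATCH -p development            # queue (placeholder)
#SBATCH -t 00:05:00               # walltime
#SBATCH -A MY_ALLOCATION          # allocation (placeholder)

module load tacc-singularity
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py
```

Submit with: sbatch jobscript.sh&lt;br /&gt;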
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
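The sum_sqrt.c listing above was left empty. A hypothetical version consistent with the usage in step 4 (one integer argument, e.g. 100000) might look like this; the actual program is not shown on this page, so the details are assumptions:&lt;br /&gt;

```c
/* sum_sqrt.c - hypothetical sketch: sums sqrt(1..n) across MPI ranks */
#include <math.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long n = (argc > 1) ? atol(argv[1]) : 100000;

    /* Each rank sums sqrt(i) over a strided share of 1..n */
    double local = 0.0;
    for (long i = rank + 1; i <= n; i += size)
        local += sqrt((double)i);

    /* Combine the partial sums on rank 0 */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of square roots up to %ld = %f\n", n, total);

    MPI_Finalize();
    return 0;
}
```

Compile with mpicc as in step 2.&lt;br /&gt;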
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH is preinstalled in this container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24864</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24864"/>
		<updated>2020-08-07T05:05:49Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require a local installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp;  # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, clone a specific release of Singularity from its Git repository&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; options include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec IMAGE_NAME command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the size of cached container images:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images consist of multiple layers that must be merged into a single Singularity image; for this to happen you MUST use build.&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the container's page; see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes a command inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image by executing the runscript that was placed into the container when the image was built from its definition file (recipe)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; options include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker URI to explore different containers without pulling or building them first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
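The definition-file section above is still marked To Do. As a placeholder, a minimal definition file might look like the sketch below; the base image, package list, and runscript are illustrative assumptions, not the actual CyberShake recipe:&lt;br /&gt;

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Runs inside the container at build time
    apt-get update && apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    # Executed by "singularity run"
    exec python3 "$@"
```

Build it with: sudo singularity build myContainer.sif name.def&lt;br /&gt;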
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only if using a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, you must have its dependencies installed in the container)&lt;br /&gt;
# Copy the container from your computer to frontera:&lt;br /&gt;
##scp&lt;br /&gt;
##Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
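The sbatch option above is still marked TO DO. A minimal job script might look like the following sketch; the queue name, allocation, and walltime are placeholders that should be checked against the Frontera user guide:&lt;br /&gt;

```
#!/bin/bash
#SBATCH -J hello-container        # job name
#SBATCH -o hello-container.o%j    # stdout log
#SBATCH -N 1                      # nodes
#SBATCH -n 1                      # MPI tasks
#SBATCH -p development            # queue (placeholder)
#SBATCH -t 00:05:00               # walltime
#SBATCH -A MY_ALLOCATION          # allocation (placeholder)

module load tacc-singularity
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py
```

Submit with: sbatch jobscript.sh&lt;br /&gt;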
&lt;br /&gt;
3-2. Execute from Local Computer (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
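The sum_sqrt.c listing above was left empty. A hypothetical version consistent with the usage in step 4 (one integer argument, e.g. 100000) might look like this; the actual program is not shown on this page, so the details are assumptions:&lt;br /&gt;

```c
/* sum_sqrt.c - hypothetical sketch: sums sqrt(1..n) across MPI ranks */
#include <math.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long n = (argc > 1) ? atol(argv[1]) : 100000;

    /* Each rank sums sqrt(i) over a strided share of 1..n */
    double local = 0.0;
    for (long i = rank + 1; i <= n; i += size)
        local += sqrt((double)i);

    /* Combine the partial sums on rank 0 */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of square roots up to %ld = %f\n", n, total);

    MPI_Finalize();
    return 0;
}
```

Compile with mpicc as in step 2.&lt;br /&gt;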
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
MVAPICH is preinstalled in this container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24863</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24863"/>
		<updated>2020-08-07T05:04:43Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require a local installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp;  # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; make -C ./builddir &amp;amp;&amp;amp; sudo make -C ./builddir install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, clone a specific release of Singularity from its Git repository&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an image&lt;br /&gt;
singularity pull (or build) &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; options include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the Image Cache (lists cached images and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot be run on a login node; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers, which must be merged into a single Singularity image; this is why you MUST use build rather than pull&lt;br /&gt;
*Note 2: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command from the host inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' - builds an image from a source (TO DO: expand; there are many details and opinions here)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
*Another image, either Docker or Singularity&lt;br /&gt;
*A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Definition files can be used to set up complex workflows.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
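Since the definition-file section above is still marked To Do, here is a minimal sketch of what such a file looks like (the base image, package, and file names are illustrative; the %post/%runscript layout follows the Singularity user guide):

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # commands run inside the container at build time
    apt-get update && apt-get install -y python3

%runscript
    # executed on "singularity run"
    exec python3 "$@"
```

It would then be built with something like: sudo singularity build myPythonContainer.sif myContainer.def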
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only on a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt; *Note: run &amp;lt;code&amp;gt;module save&amp;lt;/code&amp;gt; if you plan to use Singularity often&lt;br /&gt;
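The preparation step can be sanity-checked on any machine with Python 3 before a container is involved (plain shell; no Singularity required):

```shell
# create the test script exactly as in step 1
echo 'print("Hello World")' > helloWorld.py

# run it directly; inside the container the same file is run with
# "singularity exec IMAGE_NAME.sif python3 helloWorld.py"
python3 helloWorld.py   # prints: Hello World
```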
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: if you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
#Get the container onto Frontera, either:&lt;br /&gt;
##Copy it from your computer with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;&lt;br /&gt;
##Pull it from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: this command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with the Compute Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended); an illustrative job script - the job name, queue, task counts, and run time below are placeholders to adjust for your project:&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH -J hello-container&lt;br /&gt;
#SBATCH -p development&lt;br /&gt;
#SBATCH -N 1&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
&lt;br /&gt;
module load tacc-singularity&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute Locally (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make an MPI Program - (Ex: named sum_sqrt.c; the listing below is an illustrative sketch that sums the square roots of 1..N across ranks)&lt;br /&gt;
&amp;lt;pre&amp;gt;#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Each rank sums sqrt(i) over its share of 1..N; rank 0 reduces. */&lt;br /&gt;
int main(int argc, char **argv) {&lt;br /&gt;
    int rank, size;&lt;br /&gt;
    MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
    long n = atol(argv[1]);&lt;br /&gt;
    double local = 0.0, total = 0.0;&lt;br /&gt;
    for (long i = rank + 1; i &amp;lt;= n; i += size)&lt;br /&gt;
        local += sqrt((double)i);&lt;br /&gt;
    MPI_Reduce(&amp;amp;local, &amp;amp;total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);&lt;br /&gt;
    if (rank == 0) printf(&amp;quot;sum = %f\n&amp;quot;, total);&lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
    return 0;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
For example, pull a container with MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24862</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24862"/>
		<updated>2020-08-07T05:04:00Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;# adjust VERSION as necessary&lt;br /&gt;
export VERSION=3.5.2 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, Get the Source from GitHub (instead of the release tarball)&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
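Note that neither block above actually compiles or installs Singularity. Per the Sylabs installation guide for the 3.x series, the build from the checked-out source tree looks roughly like this (a sketch, assuming the clone above and a working Go toolchain):

```shell
cd singularity
./mconfig && \
make -C ./builddir && \
sudo make -C ./builddir install
singularity --version   # confirm the install worked
```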
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (or singularity build, which converts while it downloads)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; prefixes include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the Image Cache (lists cached images and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot be run on a login node; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers, which must be merged into a single Singularity image; this is why you MUST use build rather than pull&lt;br /&gt;
*Note 2: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command from the host inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' - builds an image from a source (TO DO: expand; there are many details and opinions here)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
*Another image, either Docker or Singularity&lt;br /&gt;
*A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Definition files can be used to set up complex workflows.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
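Since the definition-file section above is still marked To Do, here is a minimal sketch of what such a file looks like (the base image, package, and file names are illustrative; the %post/%runscript layout follows the Singularity user guide):

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # commands run inside the container at build time
    apt-get update && apt-get install -y python3

%runscript
    # executed on "singularity run"
    exec python3 "$@"
```

It would then be built with something like: sudo singularity build myPythonContainer.sif myContainer.def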
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Make helloWorld.py &amp;lt;code&amp;gt;$ echo &amp;quot;print(\&amp;quot;Hello World\&amp;quot;)&amp;quot; &amp;gt; helloWorld.py&amp;lt;/code&amp;gt;&lt;br /&gt;
#Load the module (only on a supercomputer): &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: if you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
#Get the container onto Frontera, either:&lt;br /&gt;
##Copy it from your computer with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;&lt;br /&gt;
##Pull it from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: this command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3-1. Interface with the Compute Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3-2. Execute Locally (if Singularity is installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec ubuntu18.10-python3_latest.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
For example, pull a container with MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24861</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24861"/>
		<updated>2020-08-07T04:46:09Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;# adjust VERSION as necessary&lt;br /&gt;
export VERSION=3.5.2 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, Get the Source from GitHub (instead of the release tarball)&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (or singularity build, which converts while it downloads)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; prefixes include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the Image Cache (lists cached images and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot be run on a login node; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of layers, which must be merged into a single Singularity image; this is why you MUST use build rather than pull&lt;br /&gt;
*Note 2: the path must match the pull command shown on the container's page; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - runs a command from the host inside the container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed in the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' - builds an image from a source (TO DO: expand; there are many details and opinions here)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
*Another image, either Docker or Singularity&lt;br /&gt;
*A Singularity definition file (formerly known as a recipe file), usually named name.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Definition files can be used to set up complex workflows.&lt;br /&gt;
Alternatively, prototype the final container in a writable sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Install Module: &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt;&lt;br /&gt;
#Make HelloWorld.py &amp;lt;code&amp;gt;print(&amp;quot;Hello World!&amp;quot;)&amp;lt;/code&amp;gt;&lt;br /&gt;
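The two preparation steps can be sanity-checked on any machine with Python 3 before a container is involved (plain shell; no Singularity required):

```shell
# create the test script with the content shown in step 2
echo 'print("Hello World!")' > HelloWorld.py

# run it directly; inside the container the same file is run with
# "singularity exec IMAGE_NAME.sif python3 HelloWorld.py"
python3 HelloWorld.py   # prints: Hello World!
```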
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: if you want to run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
#Get the container onto Frontera, either:&lt;br /&gt;
##Copy it from your computer with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;&lt;br /&gt;
##Pull it from a compute node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull library://libii/scec/ubuntu18.10-python3:sha256.522b070ad79309ef7526f87c34f0f8518e7d7acc6399aa6372fb0cf28fea25a1&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: this command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3. Interface with the Compute Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif python3 HelloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
For example, pull a container with MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (geared toward people who are already familiar with Docker containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24860</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24860"/>
		<updated>2020-08-07T04:27:36Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Installing Singularity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of the three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers. Using premade containers does not require an installation.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;# adjust VERSION as necessary&lt;br /&gt;
export VERSION=3.5.2 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, Get the Source from GitHub (instead of the release tarball)&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get an Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;* (or singularity build, which converts while it downloads)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; prefixes include the Singularity Container Library (library://), Singularity Hub (shub://), and Docker Hub (docker://).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the Image Cache (lists cached images and their sizes):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot be run on a login node; use a compute node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are layered, and the layers must be merged into a single Singularity image; for that to happen you MUST use: build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from the recipe.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; options include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a container directly from its Docker URI to explore different containers without pulling or building first:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Load the module: &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt;&lt;br /&gt;
#Make helloWorld.py containing &amp;lt;code&amp;gt;print(&amp;quot;Hello World!&amp;quot;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, you must have its dependencies installed in the container)&lt;br /&gt;
# Copy the container from your computer to frontera:&lt;br /&gt;
##scp&lt;br /&gt;
##Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command works in an sbatch file.&lt;br /&gt;
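&lt;br /&gt;
Example of the scp option above (the username and destination path are placeholders, not from this page):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scp myPythonContainer.sif USERNAME@frontera.tacc.utexas.edu:~/&amp;lt;/pre&amp;gt;&lt;br /&gt;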
&lt;br /&gt;
3. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
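A minimal sbatch sketch for the step above (job name, counts, and wall time are illustrative placeholders; check the Frontera user guide for valid queue settings):&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH -J serial-container&lt;br /&gt;
#SBATCH -N 1&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -t 00:05:00&lt;br /&gt;
module load tacc-singularity&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;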
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
This example assumes MVAPICH is preinstalled in the container.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
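&lt;br /&gt;
The MPI steps above could be combined into an sbatch file like the following sketch (node and task counts are illustrative placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH -J mpi-container&lt;br /&gt;
#SBATCH -N 2&lt;br /&gt;
#SBATCH -n 8&lt;br /&gt;
#SBATCH -t 00:10:00&lt;br /&gt;
module load tacc-singularity&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;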
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24859</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24859"/>
		<updated>2020-08-07T04:26:59Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Installing Singularity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC container technologies at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tools; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Recommended for people who want to run Singularity locally or create their own custom containers.&lt;br /&gt;
&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check if Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are layered, and the layers must be merged into a single Singularity image; for that to happen you MUST use: build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from the recipe.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; options include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a container directly from its Docker URI to explore different containers without pulling or building first:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Load the module: &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt;&lt;br /&gt;
#Make helloWorld.py containing &amp;lt;code&amp;gt;print(&amp;quot;Hello World!&amp;quot;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, you must have its dependencies installed in the container)&lt;br /&gt;
# Copy the container from your computer to frontera:&lt;br /&gt;
##scp&lt;br /&gt;
##Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
This example assumes MVAPICH is preinstalled in the container.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24858</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24858"/>
		<updated>2020-08-07T04:24:53Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC container technologies at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tools; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check if Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are layered, and the layers must be merged into a single Singularity image; for that to happen you MUST use: build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from the recipe.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; options include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a container directly from its Docker URI to explore different containers without pulling or building first:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Load the module: &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt;&lt;br /&gt;
#Make helloWorld.py containing &amp;lt;code&amp;gt;print(&amp;quot;Hello World!&amp;quot;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: If you want to run a particular program, you must have its dependencies installed in the container)&lt;br /&gt;
# Copy the container from your computer to frontera:&lt;br /&gt;
##scp&lt;br /&gt;
##Pull from the Computation Node&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3. Interface with Computation Node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
This example assumes MVAPICH is preinstalled in the container.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (More geared for people who are familiar with Docker Containers)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24857</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24857"/>
		<updated>2020-08-07T04:15:49Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The available HPC container technologies at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had more open-source tools; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity has built-in support for different MPI libraries, including OpenMPI, MPICH, and IntelMPI. Shifter, although lightweight, is highly reliant on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary \&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check if Singularity Works&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
singularity pull &amp;lt;source&amp;gt;*&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;sources&amp;gt; include Singularity Container Library (library), Singularity Hub (shub) and Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
singularity exec imageName command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
singularity exec image_name command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find Size of Container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are layered, and the layers must be merged into a single Singularity image; for that to happen you MUST use: build&lt;br /&gt;
*Note 2: the path only needs to match the pull command shown on the remote site; please see the remote website for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image. Run executes the runscript that was placed into the container when the image was built from the recipe.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; options include:&lt;br /&gt;
-Another image, either Docker or Singularity&lt;br /&gt;
-A Singularity definition file (used to be known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
You can shell into a container directly from its Docker URI to explore different containers without pulling or building first:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Set up complex workflows with a definition (recipe) file.&lt;br /&gt;
Alternatively, prototype the final container in a sandbox directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Load the module: &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt;&lt;br /&gt;
#Make HelloWorld.py &amp;lt;code&amp;gt;print(&amp;quot;Hello World!&amp;quot;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: To run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
# Either copy the container from your computer to Frontera with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, or pull it from a compute node:&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3. Interface with the compute node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif python3 helloworld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
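The sbatch variant is still marked TO DO above; as a sketch, a minimal job script for the serial HelloWorld example might look like the following (the job name, queue, and time limit are placeholder values to adapt to your allocation):&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical sbatch script for the serial HelloWorld example above.
# The queue (-p), node/task counts, and time limit are placeholders;
# adjust them for your allocation before submitting.
#SBATCH -J hello-container
#SBATCH -p small
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 00:05:00

module load tacc-singularity

ibrun singularity exec IMAGE_NAME.sif python3 helloworld.py
```

Submit it with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; and check the job's output file once it completes.&lt;br /&gt;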
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
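The listing for sum_sqrt.c is left empty above; a hypothetical sketch of such a program, an MPI code that sums the square roots of 1 through N (the decomposition and output format are assumptions, since the original source is not shown), might look like this:&lt;br /&gt;

```c
/* sum_sqrt.c (hypothetical reconstruction; the wiki's listing is empty).
 * Sums sqrt(1) + ... + sqrt(N) across MPI ranks. Usage: sum_sqrt N */
#include "mpi.h"
#include "stdio.h"
#include "stdlib.h"
#include "math.h"

int main(int argc, char *argv[]) {
    MPI_Init(&amp;argc, &amp;argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
    MPI_Comm_size(MPI_COMM_WORLD, &amp;size);

    if (argc != 2) {
        if (rank == 0) fprintf(stderr, "usage: %s N\n", argv[0]);
        MPI_Finalize();
        return 1;
    }

    long n = atol(argv[1]);  /* e.g. 100000, as in the run command below */
    double local = 0.0;

    /* Round-robin decomposition: rank r handles every i with i mod size == r */
    for (long i = 1; i != n + 1; i++) {
        if (i % size == (long)rank) local += sqrt((double)i);
    }

    /* Combine the partial sums on rank 0 and print the result */
    double total = 0.0;
    MPI_Reduce(&amp;local, &amp;total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of square roots 1..%ld = %f\n", n, total);

    MPI_Finalize();
    return 0;
}
```

A program like this compiles with the &amp;lt;code&amp;gt;mpicc&amp;lt;/code&amp;gt; command in step 2.&lt;br /&gt;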
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
For example, a container with MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (Not recommended at the time of writing; they did not work well in our testing, though they may be updated.)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
	<entry>
		<id>https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24856</id>
		<title>Containers for CyberShake</title>
		<link rel="alternate" type="text/html" href="https://strike.scec.org/scecwiki/index.php?title=Containers_for_CyberShake&amp;diff=24856"/>
		<updated>2020-08-07T04:15:30Z</updated>

		<summary type="html">&lt;p&gt;Llocsin: /* Serial Containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is to document the steps involved in enabling the CyberShake codebase to run in a container environment.&lt;br /&gt;
&lt;br /&gt;
== Selection of Containers ==&lt;br /&gt;
&lt;br /&gt;
The HPC container technologies available at the time of selection were Singularity, Charliecloud, and Shifter. Of these three, Singularity was the most widely adopted and had the most open-source tooling; because of this wide adoption, a Singularity module already existed on the Frontera system. Singularity also has built-in support for several MPI libraries, including OpenMPI, MPICH, and Intel MPI. Shifter, although lightweight, relies heavily on the MPICH ABI, which would require site-specific MPI libraries to be copied into the container at runtime.&lt;br /&gt;
&lt;br /&gt;
== Installing Singularity ==&lt;br /&gt;
Install Dependencies&lt;br /&gt;
      &amp;lt;pre&amp;gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y \&lt;br /&gt;
build-essential \&lt;br /&gt;
uuid-dev \&lt;br /&gt;
libgpgme-dev \&lt;br /&gt;
squashfs-tools \&lt;br /&gt;
libseccomp-dev \&lt;br /&gt;
wget \&lt;br /&gt;
pkg-config \&lt;br /&gt;
git \&lt;br /&gt;
cryptsetup-bin&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download Go&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=1.13.5 OS=linux ARCH=amd64 &amp;amp;&amp;amp; \&lt;br /&gt;
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
rm go$VERSION.$OS-$ARCH.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set Up Go&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'export GOPATH=${HOME}/go' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; \&lt;br /&gt;
source ~/.bashrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Singularity&lt;br /&gt;
&amp;lt;pre&amp;gt;export VERSION=3.5.2 &amp;amp;&amp;amp; # adjust this as necessary&lt;br /&gt;
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
tar -xzf singularity-${VERSION}.tar.gz &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, clone the source with git, then compile and check that Singularity works&lt;br /&gt;
&amp;lt;pre&amp;gt;git clone https://github.com/sylabs/singularity.git &amp;amp;&amp;amp; \&lt;br /&gt;
cd singularity &amp;amp;&amp;amp; \&lt;br /&gt;
git checkout v3.5.2 &amp;amp;&amp;amp; \&lt;br /&gt;
./mconfig &amp;amp;&amp;amp; \&lt;br /&gt;
make -C builddir &amp;amp;&amp;amp; \&lt;br /&gt;
sudo make -C builddir install &amp;amp;&amp;amp; \&lt;br /&gt;
singularity --version&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up a serial container (on your computer) ==&lt;br /&gt;
Get Image&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity pull &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build myPythonContainer.sif library://default/ubuntu:latest&amp;lt;/pre&amp;gt;&lt;br /&gt;
*&amp;lt;source&amp;gt; can be the Singularity Container Library (library), Singularity Hub (shub), or Docker Hub (docker).&lt;br /&gt;
&lt;br /&gt;
Execute a Command from Outside the Container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif cat /etc/lsb-release&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run a Python script inside the container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec myPythonContainer.sif python3 helloWorld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the Size of Cached Container Images:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity cache list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Note: Singularity cannot run on the Login Node&lt;br /&gt;
&lt;br /&gt;
== Basic Singularity Commands ==&lt;br /&gt;
'''Pull''' - pulls a container image from a remote source.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity pull &amp;lt;remote source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;remote source&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
1. Singularity Container Services [https://cloud.sylabs.io/library]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif library://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote site for an example.&lt;br /&gt;
2. Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity pull --name CONTAINER_NAME.sif shub://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: the path only needs to match the image's pull command; see the remote site for an example.&lt;br /&gt;
3. Docker Hub [https://hub.docker.com/]&lt;br /&gt;
    &amp;lt;pre&amp;gt;$ sudo singularity build CONTAINER_NAME.sif docker://USER/PULL_PATH:VERSION&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note 1: Docker images are made of multiple layers, which must be merged into a single Singularity image; this is why you must use &amp;lt;code&amp;gt;build&amp;lt;/code&amp;gt; rather than &amp;lt;code&amp;gt;pull&amp;lt;/code&amp;gt;.&lt;br /&gt;
*Note 2: the path only needs to match the image's pull command; see the remote site for an example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Exec''' - executes an EXTERNAL COMMAND&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity exec IMAGE_NAME.sif EXTERNAL_COMMAND&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Shell''' - shells into an existing container&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: Your home directory is mounted by default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run''' - runs an image, executing the runscript that was placed into the container when the image was built from its definition file&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity run IMAGE_NAME.sif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Build''' (BIG TO DO: Very important... a lot of details and opinions)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity build IMAGE_NAME.sif &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;source&amp;gt; can be:&lt;br /&gt;
*Another image, either Docker or Singularity&lt;br /&gt;
*A Singularity definition file (formerly known as a recipe file), usually named with a .def extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
You can shell directly into a Docker image to explore different containers without pulling or building them first&lt;br /&gt;
&amp;lt;pre&amp;gt;$ singularity shell docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creating Definition Files: (To Do)&lt;br /&gt;
Complex workflows can be set up with a definition (recipe) file.&lt;br /&gt;
Alternatively, a writable sandbox directory can be used to prototype the final container:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo singularity build --sandbox ubuntu_s docker://ubuntu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building or Using Pre Made Containers ==&lt;br /&gt;
=== Frontera ===&lt;br /&gt;
&lt;br /&gt;
=== Summit ===&lt;br /&gt;
&lt;br /&gt;
=== Generic ===&lt;br /&gt;
&lt;br /&gt;
== Containers on Frontera ==&lt;br /&gt;
&lt;br /&gt;
=== Serial Containers ===&lt;br /&gt;
1. Prepare&lt;br /&gt;
#Load the module: &amp;lt;code&amp;gt;$ module load tacc-singularity&amp;lt;/code&amp;gt;&lt;br /&gt;
#Make HelloWorld.py &amp;lt;code&amp;gt;print(&amp;quot;Hello World!&amp;quot;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Get a Singularity Image on Frontera&lt;br /&gt;
(*Note: To run a particular program, its dependencies must be installed in the container)&lt;br /&gt;
# Either copy the container from your computer to Frontera with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, or pull it from a compute node:&lt;br /&gt;
&amp;lt;pre&amp;gt;idev -N 1; singularity pull &amp;lt;source&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: This command also works in an sbatch file.&lt;br /&gt;
&lt;br /&gt;
3. Interface with the compute node&lt;br /&gt;
# idev session&lt;br /&gt;
&amp;lt;pre&amp;gt;idev&lt;br /&gt;
ibrun singularity exec IMAGE_NAME.sif python3 helloworld.py&amp;lt;/pre&amp;gt;&lt;br /&gt;
# sbatch (recommended)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
TO DO: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
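The sbatch variant is still marked TO DO above; as a sketch, a minimal job script for the serial HelloWorld example might look like the following (the job name, queue, and time limit are placeholder values to adapt to your allocation):&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical sbatch script for the serial HelloWorld example above.
# The queue (-p), node/task counts, and time limit are placeholders;
# adjust them for your allocation before submitting.
#SBATCH -J hello-container
#SBATCH -p small
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 00:05:00

module load tacc-singularity

ibrun singularity exec IMAGE_NAME.sif python3 helloworld.py
```

Submit it with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; and check the job's output file once it completes.&lt;br /&gt;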
&lt;br /&gt;
=== MPI Containers ===&lt;br /&gt;
&lt;br /&gt;
1. Make MPI Program - (Ex: named sum_sqrt.c)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
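The listing for sum_sqrt.c is left empty above; a hypothetical sketch of such a program, an MPI code that sums the square roots of 1 through N (the decomposition and output format are assumptions, since the original source is not shown), might look like this:&lt;br /&gt;

```c
/* sum_sqrt.c (hypothetical reconstruction; the wiki's listing is empty).
 * Sums sqrt(1) + ... + sqrt(N) across MPI ranks. Usage: sum_sqrt N */
#include "mpi.h"
#include "stdio.h"
#include "stdlib.h"
#include "math.h"

int main(int argc, char *argv[]) {
    MPI_Init(&amp;argc, &amp;argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
    MPI_Comm_size(MPI_COMM_WORLD, &amp;size);

    if (argc != 2) {
        if (rank == 0) fprintf(stderr, "usage: %s N\n", argv[0]);
        MPI_Finalize();
        return 1;
    }

    long n = atol(argv[1]);  /* e.g. 100000, as in the run command below */
    double local = 0.0;

    /* Round-robin decomposition: rank r handles every i with i mod size == r */
    for (long i = 1; i != n + 1; i++) {
        if (i % size == (long)rank) local += sqrt((double)i);
    }

    /* Combine the partial sums on rank 0 and print the result */
    double total = 0.0;
    MPI_Reduce(&amp;local, &amp;total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of square roots 1..%ld = %f\n", n, total);

    MPI_Finalize();
    return 0;
}
```

A program like this compiles with the &amp;lt;code&amp;gt;mpicc&amp;lt;/code&amp;gt; command in step 2.&lt;br /&gt;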
&lt;br /&gt;
&lt;br /&gt;
2. Compile Program&lt;br /&gt;
&amp;lt;pre&amp;gt;$ mpicc -o sum_sqrt sum_sqrt.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Build or Pull a Singularity Image with the same MPI library installed inside the container [https://sylabs.io/guides/3.6/user-guide/mpi.html#hybrid-model]&lt;br /&gt;
For example, a container with MVAPICH preinstalled:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ idev -N 1&lt;br /&gt;
$ singularity pull &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Execute your command&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ibrun singularity exec IMAGE_NAME.sif ./sum_sqrt 100000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
#Singularity Guide [https://sylabs.io/docs/]&lt;br /&gt;
#Singularity Repository [https://github.com/hpcng/singularity]&lt;br /&gt;
#Singularity Container Library [https://cloud.sylabs.io/library]&lt;br /&gt;
#Singularity Hub [https://singularity-hub.org/]&lt;br /&gt;
#Docker Hub [https://hub.docker.com/]&lt;br /&gt;
&lt;br /&gt;
TACC - Frontera&lt;br /&gt;
#TACC Containers [https://github.com/TACC/tacc-containers/tree/master/containers] (Not recommended at the time of writing; they did not work well in our testing, though they may be updated.)&lt;/div&gt;</summary>
		<author><name>Llocsin</name></author>
		
	</entry>
</feed>