How to Create a New OSG Singularity Container

  1. create a repository on GitHub
    • must contain a Dockerfile in the top level
    • a sufficient (though not necessary) approach: clone another such repository, e.g., JeffersonLab/gluex_docker_prod
    • call it my_account_github/my_container_github
  2. create a repository on DockerHub
    • from the Dashboard (e.g., for account my_account_docker):
      • Use the Create pull-down
      • Click on Create Automated Build
      • Click on Create Auto-build/GitHub (your GitHub account must have been linked in advance)
      • Click on the GitHub repository created above (my_account_github/my_container_github)
      • Fill in the Create Automated Build form
        • choose Public for Visibility
        • give the repository a name for Docker
        • e.g., my_container_docker
    • new repository should show up on the Dashboard (my_container_docker)
      • Click on the new repository; you should see the Source Repository box filled in with my_account_github/my_container_github
  3. to trigger a build, change the Dockerfile on GitHub on the master branch (see the sketch after this list)
    • the change must be something other than comments
  4. Ask Richard to submit a pull request to list the new Docker repository on the OSG list
    • He needs to add the Docker repository (e.g., my_account_docker/my_container_docker) as a single line to the file docker_images.txt in the top level of the opensciencegrid/cvmfs-singularity-sync repository.
    • A corresponding Singularity container (my_container_docker) will appear in the OSG CVMFS share (/cvmfs/singularity.opensciencegrid.org/my_account_docker/my_container_docker:latest)
    • Successful builds of the Docker container will refresh the Singularity container.
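
A minimal sketch of step 3, assuming the GitHub repository has been cloned locally (the commit message and the LABEL line are illustrative; any non-comment change to the Dockerfile works):

git clone https://github.com/my_account_github/my_container_github
cd my_container_github
echo 'LABEL rebuild_trigger="manual"' >> Dockerfile
git commit -am "trigger automated DockerHub build"
git push origin master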

Creating a Singularity Container for Building

April 10, 2018

To build a sandbox container from the latest CentOS 7 (using the definition file Singularity.centos7.builder):

sudo /usr/local/bin/singularity build --sandbox centos7.builder Singularity.centos7.builder
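
The definition file Singularity.centos7.builder is not reproduced in these notes; a minimal sketch, assuming it simply bootstraps from the official CentOS 7 Docker image:

Bootstrap: docker
From: centos:7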

To enter the container and make modifications to it (e.g., install RPMs via gluex_install):

sudo /usr/local/bin/singularity shell --bind /home/marki/gluex_install:/root/gluex_install --writable centos7.builder
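
Once the modifications are done, the sandbox can be converted to an image file; a sketch, assuming the default (squashfs, read-only) output format and the .simg naming convention discussed later in these notes:

sudo /usr/local/bin/singularity build centos7.builder.simg centos7.builder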

GlueX Bi-Weekly Meeting Report

March 13, 2018

  • New sim-recon release, version 2.24.0.
  • MCwrapper 1.13 has been released.
  • The Open Science Grid is open for business (OSG-OFB) for GlueX simulations for all collaborators
    • software and database updates are being done automatically
      • three new builds are available
      • calibration constants (CCDB) and run constants (RCDB) are updated nightly
    • mechanism for random trigger distribution is in place
    • access via MCwrapper is transparent
    • N.B.: you need to get a personal certificate from the OSG. See the preparation section of “Using the Grid” on the wiki.
  • Other container related work continues
    • Running at NERSC.
    • Container standardization/distribution.
    • Using GlueX software on your desktop via containers.
    • We are still meeting weekly on container-related topics.

 

Build_scripts on OSG

  • Timeline
    • Mon.: Richard puts in pull request for markito3/gluex_docker_devel
    • Tue.: Get hd_root help message on OSG
    • Wed.: Successful run of MCwrapper rho simulation
    • Thu.: Resources and fresh CCDB and RCDB uploaded
  • Features
    • environment setting; the standard syntax works (a combined sketch follows this list):
      bs=/group/halld/Software/build_scripts
      dist=/group/halld/www/halldweb/html/dist
      source $bs/gluex_env_jlab.sh $dist/version_2.26.xml
    • resources in standard location:
      export JANA_RESOURCE_DIR=/group/halld/www/halldweb/html/resources
    • sqlite databases in standard location:
      export JANA_CALIB_URL=sqlite:////group/halld/www/halldweb/html/dist/ccdb.sqlite
    • reduces load on halldweb.jlab.org
  • Modifications
    • Environment setting was removed from container
    • Oasis bound to container, connection to /group done with soft link:
      $ ls -l /group
      lrwxrwxrwx 1 981 668 44 Mar 2 08:48 /group -> /cvmfs/oasis.opensciencegrid.org/gluex/group
  • Container dance
    • go to Docker Hub, create a repository (think of it as a versioned collection of Docker images)
    • link to a “personal” GitHub repository that has a Dockerfile
    • submit a pull request on OSG GitHub repository to add Docker repository to list of Docker repositories
    • wait… the Singularity container will appear on OSG’s Singularity CVMFS:
      > ls -l /cvmfs/singularity.opensciencegrid.org/markito3/
      total 1
      lrwxrwxrwx 1 cvmfs cvmfs 112 Mar 2 08:48 gluex_docker_devel:latest -> /cvmfs/singularity.opensciencegrid.org/.images/71/71051c12b2d682bad4d96b8b2f1861486842b29d45a8be9baf3f8d38c32537
    • Changes to the container:
      • Push new Dockerfile to personal GitHub repository.
      • Docker Hub automatically builds a new instance of the Docker container.
      • OSG automatically creates a new Singularity container and updates CVMFS.
      • All OSG nodes automatically see new version of container.
  • Minor Issues
    • The standard CentOS 7 build at JLab will not work in a “naive” container.
      • Non-standard container or non-standard build?
    • /group binding mechanism does not work for non-CVMFS /group directory.
      • Different container or better CVMFS binding scheme?
    • Finalize rsync scheme for code/databases/resources
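
A sketch combining the features above into a single job-side setup, assuming the container has been entered with Oasis bound and the /group soft link in place as described (paths are as given above; the hd_root invocation is illustrative):

# inside the container, with /group -> /cvmfs/oasis.opensciencegrid.org/gluex/group
bs=/group/halld/Software/build_scripts
dist=/group/halld/www/halldweb/html/dist
source $bs/gluex_env_jlab.sh $dist/version_2.26.xml
export JANA_RESOURCE_DIR=/group/halld/www/halldweb/html/resources
export JANA_CALIB_URL=sqlite:////group/halld/www/halldweb/html/dist/ccdb.sqlite
hd_root --help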

GlueX Singularity Container Notes

OSG experience

  • Ran with Richard’s osg-container.sh
  • Invoked using Thomas’s MCwrapper
  • Jobs fail for lack of RCDB access

Singularity work

  • Singularity ext3 GlueX image (a command-level sketch follows this list)
    • start from Docker, centos:latest
    • initial build into “sandbox” (standard directory tree)
    • do package additions via build_scripts
    • convert from sandbox to ext3
    • container size: 1.1 GB
  • complete build of the GlueX stack using the container, but kept external to the container
    • built with version_2.26.xml (most recent is 2.27)
    • starts at 28 GB
    • after trimming: 8.4 GB with everything (below, sizes in kB)
      • 4914204 sim-recon
      • 1613104 geant4
      • 1146088 root
      • 383184 hdgeant4
      • 189660 jana
      • 152736 cernlib
      • 105552 lapack
      • 65732 ccdb
      • 58660 xerces-c
      • 58376 rcdb
      • 10268 sqlitecpp
      • 6988 hdds
      • 4344 evio
      • 3460 amptools
      • 2924 gluex_root_analysis
      • 1728 hd_utilities
      • 428 build_scripts-latest
      • 48 latest.tar.gz
      • 4 version.xml
      • 4 setup.sh
      • 4 setup.csh
      • 0 build_scripts
  • questions:
    • how to put it on Oasis?
      • proposal: use build_scripts directory structure
    • will it run in existing container?
      • likely yes
    • what to do about CCDB, RCDB, resources?
      • proposal: reproduce /group/halld directory structure
        • can update in an rsync-like manner
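
A command-level sketch of the ext3 image workflow described above, assuming Singularity 2.4-era syntax (file names are illustrative; the package additions via build_scripts happen inside the writable shell):

# initial build into a sandbox (standard directory tree), starting from Docker
sudo singularity build --sandbox gluex.builder docker://centos:latest
# enter the sandbox writably and do package additions via build_scripts
sudo singularity shell --writable gluex.builder
# convert from sandbox to an ext3 image (1.1 GB in this case)
sudo singularity build --writable gluex.img gluex.builder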

Clarification on Container Formats

Hey all,

From the help message:

CONTAINER PATH:
When Singularity builds the container, the output format can be one of
multiple formats:

default: The compressed Singularity read only image format (default)
sandbox: This is a read-write container within a directory structure
writable: Legacy writable image format

Clearly the sandbox is a normal directory tree with discrete files. There is also mention in the documentation of ext3-formatted and squashfs-formatted files. Are these “writable” and “default” respectively?

“default” cannot be modified then, even by root?

Does “writable” imply “deprecated”, i.e., as a newbie should I avoid that format going forward?

What about the file extensions “.img” and “.simg” that I see? Which is which?

— Mark


Hi Mark,

Great questions.
default = squashfs
sandbox = directory
writable = ext3
Yes, a squashfs image cannot be modified even as root.  It is compressed and runs in a compressed state.
Yes, writable does imply deprecated.  That could have been named better.
We are using .img to denote an ext3 image and .simg to denote a squashfs image.  But it is not necessary to do so.  You can name your images whatever you want.
Dave
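
To make the mapping concrete, a sketch of producing each output format with the build command (image names are illustrative):

# default: compressed, read-only squashfs image
sudo singularity build my_image.simg docker://centos:latest
# sandbox: read-write directory tree
sudo singularity build --sandbox my_image.dir docker://centos:latest
# writable: legacy ext3 image (deprecated)
sudo singularity build --writable my_image.img docker://centos:latest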