Notes on singularity and gluex software

July 29, 2017

Things to do:

  • Figure out how to copy files to oasis
  • Figure out how to copy containers to singularity cvmfs

Useful commands:

singularity expand centos7.img
sudo /usr/local/bin/singularity shell --writable centos7.img
/usr/local/bin/singularity shell --bind /group/halld:/group/halld centos7.img

Getting mysql shared library to be seen by container:

> eval ` -l /home/marki/lib`
> ls -l /home/marki/lib
> ls /home/marki/lib
> cp /usr/lib64/mysql/ /u/scratch/marki

Try to find where the non-standard library is coming from, on ifarm1401:

> repoquery -f /usr/lib64/mysql/
> repoquery -i mysql-community-libs-0:5.7.15-1.el7.x86_64

Name        : mysql-community-libs
Version     : 5.7.15
Release     : 1.el7
Architecture: x86_64
Size        : 9898444
Packager    : MySQL Release Engineering <>
Group       : Applications/Databases
URL         :
Repository  : mysql
Summary     : Shared libraries for MySQL database client applications
Source      : mysql-community-5.7.15-1.el7.src.rpm
Description :
This package contains the shared libraries for MySQL client applications.

Tracking down the mysql shared library needed:

In CentOS7 Singularity container:

> mysql --version
mysql Ver 15.1 Distrib 5.5.52-MariaDB, for Linux (x86_64) using readline 5.1

On ifarm1402:

> mysql --version
mysql Ver 14.14 Distrib 5.7.15, for Linux (x86_64) using EditLine wrapper
> ldd `which hd_root` | grep mysql
 => /usr/lib64/mysql/ (0x00007f670896c000)

On lorentz:

> mysql --version
mysql Ver 15.1 Distrib 5.5.52-MariaDB, for Linux (x86_64) using readline 5.1
> ldd `which hd_root` | grep mysql
 => /usr/lib64/mysql/ (0x00007f09a0cfd000)

Special repo on ifarm:

> pushd /etc/yum.repos.d
/etc/yum.repos.d /u/scratch/marki
ifarm1402:marki:yum.repos.d> ls
core72.repo epel-testing.repo.bak mysql.repo scicomp-extras.repo
epel.repo eple.repo.bak2 salt.repo
ifarm1402:marki:yum.repos.d> cat mysql.repo
# mysql rhel7 mirror
name = MySQL Community

GlueX and the Open Science Grid

  • at JLab
    • submit host installed for submitting jobs to the OSG
    • SciComp did installation in consultation with OSG experts
    • log-in with CUE credentials for authorized users
    • JLab users submit jobs to the OSG with the installed software
  • at Collaborating Institutions
    • a meeting was held with a (different) set of OSG personnel to discuss the contribution of University-based clusters to OSG infrastructure
    • makes these nodes available for general GlueX computing
    • UConn, NU already contributing
    • prospective contributions from CMU, IU, FIU, FSU

Data Challenge 2 Output Size

Just did the calculation: our second data challenge wrote 19.7 TB of data from 609,000 jobs, which gives an average file size of 32.4 MB. The jobs that ran here produced files of 100 MB or so; at JLab we were not subject to preemption and so we could afford to run longer. But the file count was dominated by the OSG jobs. I believe each job ran for 24 hours here (single-threaded), rather than the 8 hours Richard mentioned on the OSG. So we are within factors of your simple calculation #1.
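The average quoted above can be reproduced directly from the totals in the tally at the end of this section (a quick sanity check, nothing more):

```python
# Sanity check of the average output file size quoted above.
total_bytes = 19_741_269_427_344   # total REST output, from the final tally line
n_files = 608_759                  # number of files counted

avg_mb = total_bytes / n_files / 1e6
print(f"average file size: {avg_mb:.1f} MB")  # ~32.4 MB
```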

A stub file (rest/dana_rest_09001_2000065.hddm)
looks like:

creationTime=2014-04-01 13:46:25

and used this command:

find rest -name \*.hddm -exec grep size= {} \; > dc_02_rest_size.txt

and this script to do the count:

#!/usr/bin/env perl
# Sum the size= fields from the stub listing read on stdin.
$total_size = 0;
$count = 0;
while ($line = <STDIN>) {
    chomp $line;
    @t = split(/size=/, $line);
    $size = $t[1];
    $total_size += $size;
    $count++;
    print "count = $count size=$size total_size=$total_size\n";
}
print "$total_size $count\n";
exit 0;

which ended like this:

count = 608758 size=25341077 total_size=19741252163453
count = 608759 size=17263891 total_size=19741269427344
19741269427344 608759
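The same tally can be sketched in Python in one pass (a stand-in for the find | grep | perl pipeline above; tally_stub_sizes is a hypothetical helper, and the size= field format follows the stub example):

```python
import os
import re

def tally_stub_sizes(top):
    """Walk `top` for .hddm stub files and sum their size= fields,
    mirroring the find/grep/perl pipeline above."""
    total_size = 0
    count = 0
    size_re = re.compile(r"size=(\d+)")
    for dirpath, _dirs, files in os.walk(top):
        for name in files:
            if not name.endswith(".hddm"):
                continue  # only the .hddm stubs, as in the find command
            with open(os.path.join(dirpath, name)) as f:
                for line in f:
                    m = size_re.search(line)
                    if m:
                        total_size += int(m.group(1))
                        count += 1
    return total_size, count
```

Running tally_stub_sizes("rest") over the data-challenge stub tree should reproduce the final `total_size count` pair printed by the Perl script.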

Notes on OSG Appliance Meeting, December 12, 2016

Present: Chip, Graham, MMI, Richard (UConn)

  • Chip:
    • quality of service feature/concern
    • I/O bandwidth an issue
  • Richard:
    • need 128 GB
    • 20 TB of disk
    • 1 Gbit coming in should be sufficient
  • System:
    • should arrive mid-Jan.
    • personnel to stand the system up should be available
  • OSG stuff, Richard
    • Ticket source: UConn
      • to get the X509 stuff
    • Users need to log into appliance
    • Shadow jobs on appliance for each running job
  • Action items:
    • get numbers for data footprint from dc2
    • add off-site computing to official computing plan for GlueX