Notes on versions used by recon, 2017/01, version 1


  • The version file is /group/halld/data_monitoring/run_conditions/RunPeriod-2017-01/version_recon_2017_01_ver01.xml
  • The file refers to the builds only by home directory.
  • There are two sim-recon tags for this recon pass: “recon-2017_01-ver01-batch01” and “recon-2017_01-ver01-batch02”
  • For sim-recon, in the directory named in the version file, the master branch is checked out; git diff shows it to be consistent with the tag “recon-2017_01-ver01-batch02”.
  • For hdds, the tag is “recon-2017_01-ver01-batch01”. There is only one.
  • For hdds, in the directory named in the version file, the master branch is checked out, at commit 1f8c873 from June 15. It differs from the tag, but only in the file “ForwardMWPC_HDDS.xml”.

GlueX Meeting Report, July 19, 2017

  • New version package: version_2.13.2.xml
    • New versions of jana, sim-recon, root, ccdb, hdgeant4, and gluex_root_analysis
    • faster CCDB startup included, along with the fix to the end problem
    • Corresponding release of HDPM 0.7.2
  • mcsmear development branch, Sean
  • Running mini launches on the grid
    • updates to OSG capabilities from Richard and Sean
  • HDvis update
    • Runs in the browser
    • Reads events from a Jana-based server
  • Computing Round Table
    • OSG: GlueX experience – Prof. Richard Jones (University of Connecticut)
    • OSG: Overview and plans – Prof. Frank Würthwein (University of
      California, San Diego)
    • Experience with Singularity – Dr. Matthew Vaughn (Texas Advanced
      Computing Center)

CCDB, Lustre, and SQLite

I ran a test: I copied a CCDB SQLite file from the group disk to the volatile disk (Lustre) and from there to a local disk on my Linux box (non-Lustre). In each location I started the interactive ccdb -i command and gave the “ls” command inside it. It failed on Lustre, as expected:

CCDB provider unable to connect to sqlite:////volatile/halld/home/marki/ccdb.sqlite. Aborting command. Exception details: (sqlite3.OperationalError) database is locked [SQL: u'SELECT "schemaVersions"."schemaVersion" AS "schemaVersions_schemaVersion", "schemaVersions".id AS "schemaVersions_id" \nFROM "schemaVersions"\n LIMIT ? OFFSET ?'] [parameters: (1, 0)]

but it succeeded from the original location on the group disk and on my local disk. I conclude that a file merely passing through Lustre during its lifetime is not ruined by it; the locking failure occurs only while the file actually resides on Lustre. This is inconsistent with Elton’s experience.
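The same check can be reproduced with nothing but the Python standard library (the error in the ccdb output above comes from Python’s sqlite3 module). A minimal sketch, using a throwaway temporary file rather than the real /group or /volatile paths:

```python
import os
import shutil
import sqlite3
import tempfile

def can_query(path):
    """Return True if a trivial SELECT succeeds, False on a lock error."""
    try:
        conn = sqlite3.connect(path)
        conn.execute("SELECT name FROM sqlite_master LIMIT 1")
        conn.close()
        return True
    except sqlite3.OperationalError:
        return False

# Stand-ins for the group-disk original and the copy that passed through
# another filesystem on its way to local disk.
src = os.path.join(tempfile.mkdtemp(), "ccdb.sqlite")
sqlite3.connect(src).close()   # create an empty database file
dst = src + ".copy"
shutil.copy(src, dst)
print(can_query(dst))          # on a local filesystem this prints True
```

Run with the file sitting on Lustre, the SELECT raises sqlite3.OperationalError: database is locked, consistent with Lustre (as mounted here) not supporting the file locking SQLite needs; run anywhere else, the copy queries fine.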


Notes on Python and RHEL6

tar zxvf Python-2.7.13.tgz
cd Python-2.7.13
./configure --prefix=`pwd` --enable-shared
make install
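After the install, it is worth confirming that the interpreter really was built with --enable-shared; a quick check (a sketch, using only the standard library) is to ask sysconfig for the build flag:

```python
import sysconfig

# Py_ENABLE_SHARED is 1 for a --enable-shared build, 0 for a static one.
shared = sysconfig.get_config_var("Py_ENABLE_SHARED")
print("shared build" if shared else "static build")
```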

> diff ~/build_scripts/gluex_env_jlab.csh gluex_env_jlab_my_python.csh
<     setenv PATH $BUILD_SCRIPTS/patches/jlab_extras/rh6:/apps/python/PRO/bin:$PATH
<     setenv LD_LIBRARY_PATH /apps/python/PRO/lib:$LD_LIBRARY_PATH
>     set pypath=/group/halld/Software/builds/$BMS_OSNAME/python/Python-2.7.13
>     setenv PATH $pypath/bin:$PATH
>     setenv LD_LIBRARY_PATH $pypath/lib:$LD_LIBRARY_PATH

> git diff GNUmakefile
diff --git a/GNUmakefile b/GNUmakefile
index 81bc97f..1e461d9 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -83,7 +83,7 @@ INTYLIBS += -Wl,--whole-archive $(DANALIBS) -Wl,--no-whole-archive
 INTYLIBS += -L${XERCESCROOT}/lib -lxerces-c
 INTYLIBS += -L$(G4TMPDIR) -lhdds
-INTYLIBS += -lboost_python $(shell python-config --ldflags)
+INTYLIBS += -lboost_python -L$(shell python-config --prefix)/lib $(shell python-config --ldflags)
 INTYLIBS += -L$(G4ROOT)/lib64 $(patsubst $(G4ROOT)/lib64/, -l%, $(G4shared_libs))
 .PHONY: all
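The extra -L path added in the diff above is just Python’s installed library directory, which the python-config --ldflags output evidently omitted here. The same directory can be recovered from sysconfig (a sketch; LIBDIR is where libpythonX.Y.so lands for a shared build):

```python
import sysconfig

# LIBDIR is the directory the added -L flag points the linker at,
# i.e. where libpythonX.Y.so lives for a --enable-shared build.
libdir = sysconfig.get_config_var("LIBDIR")
print(libdir)
```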