kinit -n [userid]
xhost + cmsuafng.fnal.gov
ssh -t cmsuafng.fnal.gov
source /afs/fnal.gov/files/code/cms/setup/cshrc uaf   [LPC only]
setenv SCRAM_ARCH `scramv1 arch`
mkdir ~/DEV
cd ~/DEV
scramv1 project CMSSW CMSSW_0_9_0
cd CMSSW_0_9_0/src

At the LPC only:

cmscvsroot CMSSW
cvs login   [98passwd]
Next, check out the necessary packages:
cvs co -r CMSSW_0_9_0 RecoLocalMuon/CSCRecHit

(In earlier versions of these instructions, other packages were required, such as DataFormats/CSCDigi, EventFilter/CSCRawToDigi, and CondFormats/CSCObjects. The modifications deposited there have since been incorporated into 0_9_0.)
In order to run on the intermediate streamer files written since 16-Aug-2006, you will need the following code also:
cvs co -r CMSSW_0_9_0 IOPool/Streamer
cvs co -r V03-04-06 IOPool/Streamer/src/StreamerInputFile.cc

Hopefully this will not be needed for long: there is a plan to convert the intermediate streamer files (which have the file extension .dat) to ROOT files. Note that one has to use "source = NewEventStreamFileReader" rather than "source = PoolConvert" in the configuration file, as discussed below.
Get copies of the "private" routines that Michael wrote as a starting point.
(Notice the periods at the end!)
These routines are intended as a starting point for your own analysis,
and they show how to hook your own code into the CMSSW framework:
besides writing the routines themselves, you need to list them in the
BuildFile. Without that, the routines will not be "known" to CMSSW.
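As a rough illustration only, the relevant BuildFile entries look something like the following in the scram syntax of that era. The package dependencies and library name shown here are guesses for illustration; the BuildFile you copy in the next step is the authoritative version.

```
# Illustrative sketch of a test-area BuildFile; names are hypothetical.
<use name=FWCore/Framework>
<use name=DataFormats/CSCRecHit>
<library file=myRoutines.cc name=myRoutines>
  <flags EDM_PLUGIN=1>
</library>
```

Each routine's source file must appear in a library entry like this, or scram will not build it into a plugin that CMSSW can load.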
cd RecoLocalMuon/CSCRecHit/test/
cp -v ~schmittm/PUBLIC/CMSSW_0_9_0/src/RecoLocalMuon/CSCRecHit/test/myRoutines.cc .
cp -v ~schmittm/PUBLIC/CMSSW_0_9_0/src/RecoLocalMuon/CSCRecHit/test/myRoutines.h .
cp -v ~schmittm/PUBLIC/CMSSW_0_9_0/src/RecoLocalMuon/CSCRecHit/test/myHistograms.h .
cp -v ~schmittm/PUBLIC/CMSSW_0_9_0/src/RecoLocalMuon/CSCRecHit/test/BuildFile .
cp -v ~schmittm/PUBLIC/CMSSW_0_9_0/src/RecoLocalMuon/CSCRecHit/test/mtccproc.cfg .

Notice!
Meanwhile, I have copied the current version of all these files to ~schmittm/public/ on lxplus, where you should be able to get at them. The tar file ALL_THESE_FILES.tar.gz contains all the files in that area, including the /results subdirectory.
cd ~/DEV/CMSSW_0_9_0/src
eval `scramv1 runtime -csh`
scramv1 b

This will take several minutes. Sometimes the build ends with a complaint like this:
@@@@ Checking shared library for missing symbols:
tmp/slc3_ia32_gcc323/src/RecoLocalMuon/CSCRecHit/test/libCSCRecHitReader.so: undefined symbols
_ZNK12CSCStripDigi8getStripEv
needed by tmp/slc3_ia32_gcc323/src/RecoLocalMuon/CSCRecHit/test/libCSCRecHitReader.so
If that occurs, then try scramv1 b a second (or even third) time.
cd ~/DEV/CMSSW_0_9_0/src/RecoLocalMuon/CSCRecHit/test
eval `scramv1 runtime -csh`
cmsRun mtccproc.cfg

The program may "sit" there for a while before showing anything on the screen. This is normal.
If you see

ConvertFile::ConvertFile() File file:./mtcc.root was not found.

then you need to make a soft link to a real file -- see below.
If the job runs successfully, then you should see something like this:
<Runs>
<Run>2181</Run>
</Runs>
<InputSourceClass>PoolSource</InputSourceClass>
<EventsRead>1000</EventsRead>
</InputFile>
TimeReport> Time report complete in 37.4453 seconds
 Time Summary:
 Min: 0.00365001
 Max: 7.43148
 Avg: 0.0374453
monHists: Write
monHists: Write
monHists: Write
monHists: Write
monHistsGlobal: Write
The configuration file is mtccproc.cfg and it looks like this:
process TEST = {
# Specify your input file(s)
# Use this for root files.
source = PoolConvert {
untracked vstring fileNames = {
'file:./mtcc.root'
}
untracked int32 maxEvents = 1000
}
# Use this for the "intermediate streamer" files written on 16-Aug
# source = NewEventStreamFileReader{
# string fileName = "mtcc2335.0.dat"
# int32 max_event_size = 2000000
# int32 max_queue_depth = 5
# untracked int32 maxEvents = -1
# }
# Specify the output file for the reconstructed events.
module out = PoolOutputModule {
untracked string fileName ="eventsOutput.root"
}
# These are some useful utilities.
service = Timing { }
service = SimpleMemoryCheck { }
# specify the data unpacker, which makes the digis
module cscunpacker = CSCDCCUnpacker {
untracked bool Debug = false
untracked bool PrintEventNumber = false
FileInPath theMappingFile = "CondFormats/CSCObjects/data/csc_slice_test_map.txt"
untracked bool UseExaminer = true
untracked uint32 ErrorMask = 0xDFCFEFFF
untracked uint32 ExaminerMask = 0x7FB7BF6
}
# These are some geometry modules needed for recHits and segments
# (I'm not sure what to do about the magnetic field.)
include "Geometry/MuonCommonData/data/muonIdealGeometryXML.cfi"
include "Geometry/CSCGeometry/data/cscGeometry.cfi"
# include "MagneticField/Engine/data/volumeBasedMagneticField.cfi"
# Specify the recHit reconstruction routine.
# The first file sets default parameters.
include "RecoLocalMuon/CSCRecHit/data/CSCRecHit2DProducer.cfi"
module rechitproducer = CSCRecHit2DProducer{
string CSCStripDigiProducer = "cscunpacker"
string CSCWireDigiProducer = "cscunpacker"
int32 no_of_chamber_types = 9
vstring algo_types = {
"CSCRecHit2DFromStripsAndWires",
"CSCRecHit2DFromORedStrips"
}
VPSet algo_psets = {
{using common_params},
{using common_params}
}
vint32 algo_per_chamber_type = { 2, 1, 1, 1, 1, 1, 1, 1, 1 }
untracked bool debug = true
}
# Specify the segment reconstruction routine.
include "RecoLocalMuon/CSCSegment/data/CSCSegmentAlgorithmSK.cfi"
include "RecoLocalMuon/CSCSegment/data/CSCSegmentAlgorithmTC.cfi"
module segmentproducer = CSCSegmentProducer {
InputTag inputObjects = rechitproducer
# Name of RecHitProducer producer module(s)...
string CSCRecHit2DProducer = "rechitproducer"
# Choice of the building algo: 1 SK, 2 TC...
int32 algo_type = 1
VPSet algo_psets = {
{using CSCSegAlgoSK}, {using CSCSegAlgoTC}
}
}
# Finally, specify my user analysis package !
# The ascii files are for dumping values and tend to be very large.
# The rootFileName is where the histograms will go.
module mine = myRoutines{
untracked bool writeAsciiFiles = false
untracked string rootFileName = 'monHists2281Test.root'
}
# Here is the path of modules for the analysis of a single event:
path p = {cscunpacker, rechitproducer, segmentproducer, mine}
# Specify the output:
# endpath e = {out}
}
To provide the input, make a soft link named mtcc.root that points to a real data file. For example:

ln -s /tmp/schmittm/MTCC/mtcc.00002181.A.testStorageManager_0.0.root mtcc.root
By default, this output file is not written: you need to uncomment the line
endpath e = {out}
in order to get the file.
It might be wise to direct that output to the /tmp/ disk.
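Concretely, the end of the configuration would then read as follows. The /tmp path here is just an illustration; substitute your own area.

```
# Specify the output file for the reconstructed events.
module out = PoolOutputModule {
    untracked string fileName = "/tmp/yourname/eventsOutput.root"
}
# Specify the output:
endpath e = {out}
```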
I have updated the code so that you can turn the writing to the
ascii file on and off, and also so that you can specify the name
of the root file for your histograms.
The routines myRoutines and myHistograms create collections of histograms some of which may be useful for monitoring the MTCC data. All of these histograms are written to the file specified in mtccproc.cfg. (This file can get large due to a number of 2D histograms. You may want to specify a file name in your /tmp area.)
You can make formatted plots from these histograms using the
routines stored in
~schmittm/PUBLIC/CMSSW_0_9_0/src/RecoLocalMuon/CSCRecHit/test/results
I will post instructions for those plotting macros after
they have been cleaned up and organized a bit better.
I have example MTCC data files you can use:
You should make soft links to the files you want to analyze as
explained above. You can use the soft link in the mtccproc.cfg file.
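As a sketch, the link is made like this. The data file path below is hypothetical; substitute a file you actually copied, or one from the public disk.

```shell
# Link a data file into the test directory under the name that
# mtccproc.cfg expects. The source path here is hypothetical.
ln -sf /tmp/$USER/MTCC/mtcc.00002181.A.testStorageManager_0.0.root mtcc.root

# Check that the link was created and where it points:
ls -l mtcc.root
```

With -f, an existing mtcc.root link is replaced, so you can repoint it at a different run without removing it first.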
In order to get these files yourself, or to get other files, you need to use CASTOR, which is a general file-storage system at CERN. At Fermilab there is a different system (dCache, described below).
In order to find what files are available, try something like:
nsls -l /castor/cern.ch/cms/emuslice/globaldaq/
The latest MTCC data are written to
/castor/cern.ch/cms/MTCC/data/0000NNNN/A/
where NNNN is the run number.
To copy a file to your own disk, use
rfcp /castor/cern.ch/cms/MTCC/data/0000NNNN/A/(file name) (location)
Note that this can take several minutes.
At the LPC, some files have been put on a public disk:
/uscmst1b_scratch/lpc1/lpcmuon/MTCC.
See the LPC cosmic data web page for more information.
Ingo Bloch says that the right way to specify a file on dCache in the configuration file is like this:
"dcache:/pnfs/cms/WAX/7/PHEDEX2/MTCC/data/00002274/A/mtcc.00002274.A.testStorageManager_0.0.root"

(Notice there is no "file:" specification in front.) For reading the so-called intermediate streamer files, which have the .dat extension, you cannot use dCache. In this case you need to copy your desired file to your /tmp area like this:
dccp /pnfs/cms/WAX/7/PHEDEX2/MTCC/data/00002426/A/mtcc.00002426.A.testStorageManager_0.0.dat (your file)

and then specify (your file) in the configuration. Note that you can read only one of these files at a time, and they typically contain 10k events.
Something similar can be done at CERN. For example, Oana Boeriu sent this recommendation:
"rfio:/castor/cern.ch/cms/MTCC/data/00002274/A/mtcc.00002274.A.testStorageManager_0.0.root"

Again, this does not work with the intermediate streamer files.