Minutes for E895 Meeting at LBNL on September 3
(more abbreviated/summarized than usual)

Attending: Hans-Georg Ritter (HGR), Heng Liu (HL), Nathan Stone (NS),
Mike Lisa (ML), Mark Gilkes (MG), Sergei Panitkin (SP), Dieter Best (DB),
Chris Pinkenburg (CP)

NS - How do we establish the run number, for purposes of pass1 running?
HL - Can't you just get the run number from the first event (EVENT table)?
     How often does this fail?
DB - Well, for tape #43, 5 out of 26 files have "zero" or duplicate run
     number values in this field.
HL - Well, I never saw this for 2/4 GeV...
(after some discussion)
All - We will create a lookup table (database) containing
      (Tape, File, Run, Queue, Bfield, Vdrift)
      for all event files.
    - This database will be used by the pass1 running script to "feed" the
      run number to pass1.kumac (a sketch of such a script appears at the
      end of these minutes).
    - DB will create the database for 6/8 GeV.
    - ?? will create the database for 2/4 GeV. (Chris?)
    - We may want to check (within KUIP) that the info from the first event
      matches the info fed to pass1.kumac by the pass1 script.
ML - Why? What are you going to do if they don't match?
NS - Probably just use the values from the database, but send up a flag...

Discussion: we need to establish a naming convention for DST files.
  - Do we reference DSTs by Tape/File or by Run/Queue?
(after some discussion)
  - We will reference DSTs by Run/Queue. This is less intuitive for DST
    production, but more intuitive for pass2; AND, since scalars are only
    written to Q0, this way we will always know where to look for scalars.

NS - We should identify the exact list of tables that will be written during
     DST production. I will write a list and circulate it.
CP - What about event summary tables?
ML - Well, OSU is creating a _run_ summary module, which could easily take
     care of both jobs... We'll try to have it within a week.

(discussion) What goes into the summary tables?

Event Summary Table
-------------------
- Hit Multiplicity
- Hit Removal Efficiency
- Track Multiplicity
- Mean number of hits per track
- Number of "good" tracks (i.e. with DCA < DCA_max)
- Event number
- "Loop" number

Run Summary Table
-----------------
- Run #
- Queue #
- Total number of (actual, non-zero) events
- Mean track multiplicity
- Mean hit multiplicity
- Mean hit removal efficiency
- Mean number of hits per track
- dE/dx gain factor

"Quality" histograms (output to RZ file and GIF for monitoring)
---------------------------------------------------------------
- dE/dx vs. rigidity (2D)
- Track multiplicity
- Hit multiplicity
- Hit removal efficiency vs. Track multiplicity (2D)
- Ydev vs. X for each row (profiles, output to RZ only)
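For concreteness, a minimal Python sketch of how these two summary tables
might be laid out is given below; the class and field names are only
illustrative, taken from the lists above, and are not actual table
definitions.

  # Illustrative only: these names do not correspond to actual table definitions.
  from dataclasses import dataclass

  @dataclass
  class EventSummary:
      event_number: int
      loop_number: int
      hit_multiplicity: int
      hit_removal_efficiency: float
      track_multiplicity: int
      mean_hits_per_track: float
      n_good_tracks: int              # tracks with DCA < DCA_max

  @dataclass
  class RunSummary:
      run: int
      queue: int
      n_events: int                   # actual, non-zero events
      mean_track_multiplicity: float
      mean_hit_multiplicity: float
      mean_hit_removal_efficiency: float
      mean_hits_per_track: float
      dedx_gain_factor: float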
ML - Looking at the track density and hit density shown on the hitfinder WWW
     page, we are way over the maximum density inside "the cone".
   - I have talked with Howard Weiman about this, who recommends ignoring all
     hits inside the cone... I agree.
   - I will build this (switchable) into 2DH to investigate its impact on our
     data.
   - It should be faster and might be less misleading.
   - The cone should be well defined for each energy.

- Spore generation in JTF previously selected the first "successful" spore
  and ignored all other spores.
CP - We are testing a version which follows all spores completely, then
     chooses the track with the most hits.
   - The CPU increase is ~20%, and the tracks do look "better".

CP - T0 calibration is having problems with 2DH. It appears that the hits
     don't move exactly as I expect when adjusting T0.
   - I will have to work with Mike on this one.

CP - Paul Chung sees some "good" pads that do not have pedestals subtracted.
ML - Did you look at the way I am treating row 51? We could do this for other
     pads too.
CP - I'll look into this.

DB - There is a fix for the misalignment between simulated and reconstructed
     tracks, as identified by SUNY. It is in the CVS incoming directory.

CP - We have a pass1.kumac which we use, and which is a good starting point
     for others.
   - It sets the Bfield and Vdrift by run number and everything...
   - I will submit it to CVS.

HL - Found a bug in am_summary. It is in CVS/incoming.

ML - I have some upgrades to 2DH which will come out soon.

CP - Looking at the values of Bfield in the vector in CVS, they only vary at
     the per-mil level. Our momentum resolution is larger than this. Do we
     then need to keep track of the Bfield on a run-by-run basis, or can we
     ignore this bfield vector and assume the field is constant, at its
     nominal value?
All - Ignore bfield.kumac and the run-wise vector. Assume a constant (exact)
      field.

Game plan for the week:
- Start pass1.
- Meet next Tuesday, and come with experience making DSTs.
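As promised above, here is a minimal sketch (in Python, purely for
illustration) of how a pass1 driving script could pull (Run, Queue, Bfield,
Vdrift) out of the agreed lookup table and flag a mismatch with the run number
found in the first EVENT table. The file name runlookup.dat, its
whitespace-delimited format, and the exact hand-off to pass1.kumac are
assumptions, not agreed conventions.

  # Illustrative sketch only -- the file name, column order, and hand-off to
  # pass1.kumac below are assumptions, not the agreed E895 conventions.
  import sys
  import warnings

  def lookup(tape, fileno, path="runlookup.dat"):
      """Return (run, queue, bfield, vdrift) for one event file.

      Assumes one whitespace-delimited line per event file:
        tape  file  run  queue  bfield  vdrift
      """
      with open(path) as f:
          for line in f:
              if not line.strip() or line.startswith("#"):
                  continue
              t, fn, run, queue, bfield, vdrift = line.split()
              if int(t) == tape and int(fn) == fileno:
                  return int(run), int(queue), float(bfield), float(vdrift)
      raise KeyError("tape %d file %d not found in lookup table" % (tape, fileno))

  def check_run_number(run_from_db, run_from_event):
      """Warn if the first EVENT table disagrees with the database,
      but keep the database value (the policy discussed above)."""
      if run_from_event != run_from_db:
          warnings.warn("run number mismatch: DB=%d, EVENT=%d; using DB value"
                        % (run_from_db, run_from_event))
      return run_from_db

  if __name__ == "__main__":
      tape, fileno = int(sys.argv[1]), int(sys.argv[2])
      run, queue, bfield, vdrift = lookup(tape, fileno)
      # How these values are handed to pass1.kumac (command line, a generated
      # KUIP 'exec' line, environment variables) is left open here.
      print(run, queue, bfield, vdrift)

Since we decided above to assume a constant, nominal field, the Bfield column
could equally well be dropped from the table and hard-wired in pass1.kumac.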