
STAR Activities in FY2008


    A. STAR Technical Activities in FY2008

    W.W. Jacobs, J. Sowinski, S.W. Wissink, J. Balewski, S. Choudhury,

    P. Djawotho, W. He, B. Page, I. Selyuzhenkov, J. Stevens

A.1 Endcap EMC and STAR: Development and Status

A.1.1 Hardware and software upgrades to STAR L2 trigger

    The data paths from both the BEMC and EEMC to STAR's L2 high-level trigger were substantially upgraded in preparation for RHIC Run 8. The increased pp luminosity in Run 6, along with the extensive and successful use of L2 data processing (and vetoing), raised questions about how quickly EMC data reach the L2 processors, as well as broader issues of deadtime associated with detector readout and data transfer. One issue for the STAR calorimeters was how the L0-triggered data (digitized data transferred from the EMC tower FEEs to "data collectors") should be made available for the next level of triggering decision. While provision was originally made in the collectors (designed and built at IUCF) for a separate and faster path to L2, this had not yet been implemented in Run 6, and the data instead took a path with less stringent timing requirements through DAQ on their way to L2. Detailed timing measurements made during Run 6, and during the summer shutdown that followed, showed that the earliest arrival of EMC data at L2 was ~800 μs after the L0 decision, with a long exponential tail whose size grew with event rate. This is to be compared with a typical L2 processing time of 100-200 μs. While the raw (hardware-level) L0 rate is limited by operation of the gated grid of the STAR TPC, a shorter transfer time of EMC tower data to the L2 trigger decision would clearly boost overall data collection efficiency, especially when used in conjunction with "clever" L2 (vetoing) software, in order to produce an acceptable overall STAR DAQ event rate.

    For Run 8 we decided on a solution that would allow us to tackle several recurrent issues at once: upgrade the data transfer from the existing G-Link to a new DDL protocol (modeled after ALICE at CERN), transfer the data directly to an L2 trigger machine with subsequent delivery to DAQ, and accomplish all this via a newly designed data collector output board that could be replicated in sufficient quantities to alleviate concerns over the availability of spares. Gerard Visser (IUCF engineer) was responsible for the design, construction, and final implementation of the new boards, which use the SIU-DRORC plug-in data transfer components from CERNTECH (now an adopted STAR standard for new subsystems following the ALICE-based electronics protocol). Integrating this new data protocol (data path directly to the L2 processors with subsequent passing of selected data to DAQ) required extensive software support from the STAR trigger group, as well as considerable commissioning and debugging with test data and with initial collisions during the startup of Run 8. In the end, both Endcap and Barrel EMC tower data were successfully transferred to the L2 processing machines in less than ~80 μs, somewhat faster than the readout of the full STAR suite of trigger detectors. In the later stages of the run, the trigger detector readout speed was improved by engaging STP networks to speed the trigger data transfer. This capability, along with increased use of L2 decision software (see the next section for more details), provided the opportunity to optimize the use of STAR's limited bandwidth during the Run 8 polarized pp data taking.

A.1.2 Level-2 Trigger Software Structure and Upgrades

    In Run 6, the IU group headed the effort to provide level-2 (L2) software triggers aimed at enhancing STAR's efficiency for acquiring di-jet data, and also supported the effort to produce advanced L2 algorithms to enhance rare events (e.g., photons and electrons through modified High Tower / Trigger Patch decisions). This work paid off handsomely in terms of the quality of the data set acquired and the physics output realized, and even led to a PRL published on the basis of the L2 trigger data alone, bypassing the normal (and much slower) STAR data production and analysis procedures.

    For Run 8, the hardware advances described in the previous section were driven by the desire to increase the utility and throughput of the L2 trigger by delivering the EMC data to the L2 processing CPU more rapidly. The L2 software then required a corresponding upgrade to match these new capabilities. A key task in this upgrade was to implement an overall structure under which the individual L2 physics algorithms would run, and through which they would interact with the rest of the trigger operations. The structure also provides common "calibrated" EMC data for each event, as well as common QA monitoring software, rather than having each individual L2 algorithm repeat these calculations. In fact, the favorable benchmark achieved for EMC data arrival led us to devise a two-step L2 computation scheme, whereby the globally useful EMC calibrations are not only calculated first, but are carried out while the rest of the trigger data are still accumulating. The individual L2 algorithms are then run later, once all of the trigger information is at hand.
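
    As an illustration of this structure, the following is a minimal C++ sketch of such a two-step scheme. The class and function names (L2Framework, L2Algorithm, onEmcData, onTriggerComplete) and the flat calibration gain are invented for this sketch under stated assumptions; this is not the actual STAR trigger code.

        // Minimal sketch of a two-step L2 scheme (illustrative names, not the STAR trigger code).
        #include <memory>
        #include <vector>

        struct EmcRawEvent   { std::vector<unsigned short> towerAdc; };  // raw tower ADCs
        struct EmcCalibEvent { std::vector<float> towerEt; };            // calibrated E_T per tower
        struct TriggerData   { /* remaining trigger-detector payload */ };

        // Each physics algorithm works from the same shared, calibrated EMC data.
        struct L2Algorithm {
            virtual ~L2Algorithm() = default;
            virtual bool accept(const EmcCalibEvent& emc, const TriggerData& trg) = 0;
        };

        class L2Framework {
        public:
            explicit L2Framework(std::vector<std::unique_ptr<L2Algorithm>> algos)
                : algos_(std::move(algos)) {}

            // Step 1: called as soon as the EMC tower data arrive, while the rest of
            // the trigger detectors are still being read out.
            void onEmcData(const EmcRawEvent& raw) {
                emc_.towerEt.clear();
                for (unsigned short adc : raw.towerAdc)
                    emc_.towerEt.push_back(gain_ * adc);   // placeholder: real gains are per tower
            }

            // Step 2: called once the full trigger payload is available; runs every
            // algorithm on the shared calibrated data and ORs the decisions.
            bool onTriggerComplete(const TriggerData& trg) {
                bool accept = false;
                for (const auto& a : algos_) accept = a->accept(emc_, trg) || accept;
                return accept;   // L2 accept (true) or abort (false)
            }

        private:
            std::vector<std::unique_ptr<L2Algorithm>> algos_;
            EmcCalibEvent emc_;
            float gain_ = 0.01f;   // assumed ADC-to-E_T conversion for illustration
        };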

    Because of the new structure and time scale, the algorithms themselves also needed to be upgraded, both to accept the common data and to be optimized for computation speed, so as to yield a trigger decision at the earliest possible time. Jan Balewski (a long-time member of the IU group who recently moved to a staff position at MIT) created the new structure software, streamlined several of the algorithms, and worked with the IU group to provide oversight of the upgrade contributions from other STAR (largely spin group) collaborators. For di-jets, we updated the code used successfully in Run 6 to select high figure-of-merit events through a kinematic distribution decision. The Run 6 code worked by lowering the hardware ("L0") jet trigger thresholds from the EMCs and then selecting events at L2, increasing the di-jet event acquisition rate by almost an order of magnitude. The code treats the Barrel and Endcap EMCs as one contiguous object spanning the pseudorapidity range -1 ≤ η ≤ 2 with full azimuthal coverage, and identifies two non-adjacent clusters of EM energy in definable software jet patches (which slide freely to optimize the jet position, as opposed to the hard-wired L0 trigger patches). Added to these capabilities for Run 8 was the ability to adjust thresholds by kinematic region so as to maximize the acquisition rate for events with the highest figure of merit; a sketch of the approach is given below. Although strained resources prevented a full upgrade of the code to the new environment, the algorithm ran successfully and worked well with the overall upgrade path described above.
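
    To make the patch logic concrete, here is a hedged C++ sketch of a sliding jet-patch di-jet selection. The grid dimensions, patch size, and threshold values are assumptions chosen for illustration, not the parameters of the actual L2 code, and a real implementation would compute the patch sums incrementally for speed.

        // Minimal sketch of a sliding jet-patch di-jet selection (illustrative, not the STAR L2 code).
        #include <algorithm>
        #include <cstdlib>
        #include <vector>

        constexpr int kNEta = 30, kNPhi = 60;   // assumed tower grid covering -1 <= eta <= 2, full phi
        constexpr int kPatch = 6;               // patch size in grid bins (assumption)

        struct Patch { int ieta, iphi; float et; };

        // Sum tower E_T in a patch whose lower corner is (ieta, iphi); phi wraps around.
        float patchSum(const std::vector<std::vector<float>>& et, int ieta, int iphi) {
            float sum = 0.f;
            for (int de = 0; de < kPatch; ++de)
                for (int dp = 0; dp < kPatch; ++dp)
                    sum += et[ieta + de][(iphi + dp) % kNPhi];
            return sum;
        }

        // Illustrative eta-dependent threshold: one value at mid-rapidity, a looser one forward.
        float threshold(int ieta) {
            float eta = -1.f + 3.f * (ieta + 0.5f * kPatch) / kNEta;   // patch-center eta
            return (eta < 1.f) ? 4.0f : 3.0f;                          // GeV, placeholder values
        }

        // Accept if two non-adjacent sliding patches both exceed their (eta-dependent) thresholds.
        bool diJetAccept(const std::vector<std::vector<float>>& et) {
            std::vector<Patch> patches;
            for (int ie = 0; ie + kPatch <= kNEta; ++ie)
                for (int ip = 0; ip < kNPhi; ++ip)                     // patch slides freely in eta and phi
                    patches.push_back({ie, ip, patchSum(et, ie, ip)});

            // Highest patch above threshold.
            const Patch* best = nullptr;
            for (const auto& p : patches)
                if (p.et > threshold(p.ieta) && (!best || p.et > best->et)) best = &p;
            if (!best) return false;

            // Any non-overlapping partner patch above threshold.
            for (const auto& p : patches) {
                bool adjacent = std::abs(p.ieta - best->ieta) < kPatch &&
                                std::min(std::abs(p.iphi - best->iphi),
                                         kNPhi - std::abs(p.iphi - best->iphi)) < kPatch;
                if (!adjacent && p.et > threshold(p.ieta)) return true;
            }
            return false;
        }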

    As mentioned elsewhere, Run 8 pp running was cut short, so the full physics impact of this work was not realized in the final data set. However, the push to get the new upgrades coded, commissioned, and documented, with an associated monitoring and QA system, provides a solid benchmark for future application. This capability will continue to be important in the future, even as the upgrade to the STAR TPC readout ("DAQ1000") is put in place, to limit data set size via L2 while enhancing the physics signal. L2 is already squarely in our plans for Run 9 triggering, and we are working to further enhance all of our capabilities (e.g., the IU group will continue to improve the di-jet code) in order to extract the highest quality physics, as well as to provide on-line monitoring of EMC performance at several levels, to ensure the soundness of the data we collect.

A.1.3 Upgrades to STAR Readout for Run 9

    The STAR experiment is undergoing a number of design upgrades, one of which will significantly increase (by a factor of 20 from the present) the maximum readout speed of the TPC in Run 9. When completed, the "DAQ1000" upgrade will enable a STAR event rate of up to 1 kHz with minimal deadtime, along with significant TPC data compression (70 GB/s to ~400 MB/s). The new frontend readout that makes this possible takes advantage of electronics developed for the TPC of the ALICE experiment at the LHC/CERN.

    The advent of DAQ1000 has implications for other STAR subsystems. In particular, the barrel calorimeter shower-maximum (BSMD) and preshower detectors, which use essentially the same readout electronics, will become the slowest detectors in STAR, and thus contribute the largest deadtime during data taking. While other slow detectors remain (for example, the forward TPCs), their deadtime impact may not be as critical, as it can be reduced by omitting their readout from many of the triggers. The BSMD, on the other hand, is crucial to a wide range of physics and therefore must be included in many triggers, including those most critical to the spin physics program.

    The BSMD readout, as originally built, uses a stored capacitor array (SCA) in the front end electronics (FEE) located on the detector, and analog readout upon an event trigger to crates (RDO) with digitizers mounted on the STAR magnetic backlegs. It is not feasible, from a resource point of view, to replace all this electronics with modern fast readout (e.g., similar to the TPC). However, based on our direct experience with various BSMD readout issues encountered during Run 8, we have developed a plan to improve its speed and capability.

    Presently, the deadtime of the BSMD for an accepted event is a minimum of ~700 µs (a little less for an aborted event). We intend to attack the timing issue here at IUCF on several fronts, covering any improvements that can be made by FPGA changes alone. We do not plan to make any patches or component value changes on the readout crate boards themselves, as there do not appear to be easily realizable gains there. Based on an initial study of the FPGA designs, the timing improvements include reducing the current scheme of reading four time buckets (~130 µs each) to essentially a single read, along with several other steps to speed up the read and transfer process. Overall we expect to improve the livetime of the BSMD by somewhat better than a factor of two, by revising at least two of the three FPGA designs in the readout crates and changing the BSMD TCD busy timing. More specifically, we expect to improve the BSMD deadtime, at a 1 kHz L0 trigger rate and a 500 Hz accept rate, from about 70-75% to about 30%. A BSMD test crate and associated electronics have been shipped to IUCF, and electronics engineer Gerard Visser has begun the work on this project, which should be completed, downloaded to the crates at STAR, and tested in situ before the Run 9 startup in February 2009.
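
    As a rough cross-check of these numbers, the sketch below estimates the deadtime fraction from the trigger and accept rates and an assumed busy time per event. Only the ~700 µs accepted-event minimum is quoted above; the aborted-event and post-upgrade times are assumptions for illustration, and the simple rate-times-busy-time model ignores the correlations introduced by the busy veto itself.

        // Back-of-the-envelope BSMD deadtime estimate (illustrative; per-event
        // times other than the quoted ~700 us accepted-event minimum are assumed).
        #include <cstdio>

        // Fraction of wall-clock time the BSMD is busy, given the L0 trigger rate,
        // the accept rate, and the busy time per accepted / aborted event.
        double deadFraction(double l0RateHz, double acceptRateHz,
                            double tAcceptSec, double tAbortSec) {
            double abortRateHz = l0RateHz - acceptRateHz;
            return acceptRateHz * tAcceptSec + abortRateHz * tAbortSec;
        }

        int main() {
            // Run 8 readout: ~700 us per accepted event, a little less when aborted (650 us assumed).
            double before = deadFraction(1000.0, 500.0, 700e-6, 650e-6);   // ~0.68
            // After the FPGA changes: roughly half the busy time (values assumed).
            double after  = deadFraction(1000.0, 500.0, 320e-6, 280e-6);   // ~0.30
            std::printf("deadtime before ~%.0f%%, after ~%.0f%%\n", 100 * before, 100 * after);
            return 0;
        }

    With the assumed values this lands near the quoted figures (the "before" estimate is slightly below 70-75% because 700 µs is only the minimum busy time per accepted event).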

    The above work is independent of a separate STAR plan to replace the VME-based BSMD receiver boards and DAQ CPU with a PCI / linux version. The latter will serve to improve the deadtime at high accept rates (~700 Hz or greater, when the BSMD is presently 100% dead), but will have no influence on the basic readout crate deadtime, which dominates at lower accept rates. The best performance (for instance 30% dead at 1 kHz trigger and accept rate) will be realized by carrying out both of these upgrades.

    The move to PCI / linux at DAQ will make it possible to implement zero-suppression of the BSMD data (presently stored unsuppressed), thus saving considerably on storage space. While the change in receiver boards may be delayed due to lack of resources, a version of the zero suppression will be deployed in Run 9, and the STAR EMC group will be charged with the considerable task of getting all the protocol, software and monitoring tasks up and running.
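
    For readers unfamiliar with the technique, the following is a minimal sketch of the zero-suppression idea (pedestal subtraction plus a noise-based threshold). The data layout, per-strip pedestal handling, and 3-sigma threshold are illustrative assumptions, not the scheme STAR will actually deploy.

        // Minimal sketch of BSMD zero suppression: keep only strips whose
        // pedestal-subtracted ADC exceeds a noise-based threshold.
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct Strip { uint16_t id; int16_t adc; };   // suppressed output: strip id + ADC above pedestal

        std::vector<Strip> zeroSuppress(const std::vector<uint16_t>& adc,     // one ADC per strip
                                        const std::vector<float>& pedestal,   // per-strip pedestal
                                        const std::vector<float>& sigma,      // per-strip pedestal RMS
                                        float nSigma = 3.0f)                  // threshold (assumed)
        {
            std::vector<Strip> out;
            for (std::size_t i = 0; i < adc.size(); ++i) {
                float signal = adc[i] - pedestal[i];
                if (signal > nSigma * sigma[i])
                    out.push_back({static_cast<uint16_t>(i), static_cast<int16_t>(signal + 0.5f)});
            }
            return out;   // typically far smaller than the unsuppressed strip payload
        }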

    As a final note on the overall performance of STAR electromagnetic calorimetry, we mention the work of IUCF, and in particular the electrical engineering support we provide, in diagnosing and proposing a simple fix for the signal saturation that plagued previous BSMD response data. The problem was found to stem from the FEE line driver and the way in which it was terminated. The issue was successfully resolved, such that the full ADC range was recovered for our Run 8 BSMD data.

A.1.4 Ongoing Responsibilities of the IU STAR group

    The IU group retains primary responsibility for maintenance and improvement / upgrades of the hardware and software for the Endcap (EEMC) calorimeter. On the hardware side, this includes: replacement of failed HV bases and electronics between runs and during short access breaks during the run; removal and installation of the STAR west poletip at the beginning and end of the run; and all commissioning tasks early in the run, followed by 24/7 expert availability throughout the run. Fulfilling these responsibilities mandates that we maintain significant manpower on site over a period of several months. Ongoing software support, performance monitoring, and carrying out calibration procedures are also required. During the 2006 pp run, the EEMC again worked remarkably well, with over 99% of all channels (including the calorimeter towers, their pre- and post-shower layers, and the scintillator-based SMD planes) functioning and calibrated. Much of this essential, though somewhat repetitious, activity has been detailed in previous annual reports, to which we refer the interested reader.


     A.2 Status of the STAR Forward GEM Tracker (FGT)

    (J. Balewski, P. Djawotho, W.W. Jacobs, B. Page, J. Sowinski, and S.W. Wissink)

    The technical need for additional tracking in the pseudorapidity region 1 < η < 2 (which corresponds to 37° > θ > 15°) in front of the EEMC is demonstrated in Fig. A.2.1. The figure shows slightly more than a quarter section of STAR, with the interaction region at the lower left surrounded by a central tracking upgrade package consisting of Si detectors. The EEMC is along the right edge, and the TPC tracking volume is shown as a blue rectangle in the center. Three electrons have been simulated at different angles, or pseudorapidities (η = 1, 1.5, 2). The TPC readout is on the face in front of the EEMC, so that as η increases fewer and fewer points are recorded in the TPC for tracking. The lower right corner of the TPC (as shown in the figure) is not part of the useful tracking volume, because in this region there would be fewer than 5 hits on a track, which is not enough to find the track segment and connect it to other detectors. So not only is the resolution degraded by losing points; there is essentially no tracking beyond η = 1.5. A 6 GEM plane configuration was chosen for the FGT project, with the inner and outer diameters and locations along the beam line optimized for the proposal during the past year.

    Detailed studies were also performed to optimize strip geometry to limit occupancy.

Figure A.2.1. High p_T electrons in STAR. A quarter section of the STAR detector is shown with central and forward tracking upgrade options. The beam line is horizontal near the bottom of the figure. The beams interact at the lower left, with a vertex distribution characterized by σ = 30 cm. The EEMC is at the right edge of the figure, and the TPC tracking volume is in the central part of the figure as a blue rectangle. Electrons are thrown from the center of the interaction diamond at 3 pseudorapidities, η = 1, 1.5 and 2 (θ = 37°, 25°, 15°). A 6 disk option was chosen as optimal for the project.

    Figure A.2.2. Hits available for tracking for 3 vertex locations: -30 cm (left), 0 (center), and +30 cm (right). The η of the thrown track is shown along the x axis. For a given η, one can follow vertically and determine how many hit points contribute to the tracking. For example, in the center plot at η = 1.5, the hits are the beamline constraint (red), 2 FGT hits (magenta), 6 TPC hits (blue), and the EEMC shower max detector (red).

    Figure A.2.2 shows the radius of hits in the tracking detector subsystems vs. the pseudorapidity of the tracked electrons, originating from 3 vertex positions (z = -30 cm, 0, +30 cm) characterizing the expected beam vertex distribution. Vertical lines at η = 1, 1.5, and 2 illustrate the detectors traversed at these pseudorapidities. At all angles a transverse vertex constraint (red) of 200 µm is used. This is typical of constraints used regularly in STAR, and simulations show that a constraint of up to 1 mm does not seriously degrade the performance of the tracking upgrade. At low η, central tracking points (black) with 20 µm resolution and many TPC points (blue) with 1 mm resolution are available. In all cases the point from the EEMC shower max detector (red), with 1.5 mm resolution, is used. By η ~ 1.5 the number of TPC points is considerably reduced (and they are missing entirely for z = +30 cm), but FGT points (magenta) with 60 µm resolution have been added. Beyond η ~ 1.5 the tracking is done using only the FGT, the beam line constraint, and points from the EEMC SMD.

    We demonstrate in Fig. A.2.3 that the addition of the 6 GEM planes provides sufficient additional tracking. Here we show the charge sign discrimination probability vs. η for the 3 vertices characterizing the extent of the interactions along the beam line. The panels show the cases for the existing TPC and EEMC SMD, and for the addition of the GEM tracking planes, and demonstrate that charge sign discrimination is well over 80% for the full range of the EEMC, i.e., out to η > 2. Studies within this framework were able to demonstrate that 4 GEM planes were not sufficient, and that the resolutions assumed had a sufficient margin of error. In addition, it was demonstrated that the 6 planes could be rearranged to still provide sufficient tracking if the central tracking were not installed for early runs.

    Figure A.2.3. Charge reconstruction efficiency for 3 different detector configurations. At the upper left is shown the number of tracks identified with the proper charge, over the total number, using only the TPC tracks, for electrons at 30 GeV/c. The upper right adds in a point from the EEMC SMD. At the lower left is the optimized FGT geometry with 6 GEM disks and central tracking.

    Because W production is a low cross section process, electron/hadron discrimination must provide hadron rejection approaching a factor of 1000, with low loss of electrons or positrons. Early simulations based on Pythia showed that the backgrounds could be reduced by a factor of up to 100 by making isolation cuts and vetoing on energy opposite the candidate. We are counting on other calorimeter information to provide another factor of 10 reduction in backgrounds, so that the signal dominates down to ~25 GeV in p_T. In the past year we have performed the first full simulations based on large samples of both W and QCD hadronic background events. These simulations used the full STAR geometry and a GEANT-based model of the detector response. Our MIT colleagues were able to provide 800 pb^-1 of W events and 300 pb^-1 of hadronic background events, a huge technological advance using a grid facility and prefiltering techniques.

    Figure A.2.4 shows the effects of selection cuts on the detected E_T spectrum in the EEMC for hadrons (left) and electrons (right). The initial spectra (black) are for the energy deposited in a 3x3 calorimeter tower patch. The effect of the Monte Carlo prefiltering below 20 GeV is clearly visible. The red curve shows the effect of restricting the fiducial volume to remove the towers at the edges of the calorimeter, i.e., at η = 1.08 and 2.0. The blue curve shows the effect of a calorimeter isolation cut. The magenta and dash-dot blue curves below it show the effects of vetoing on calorimeter energy and tracks opposite in η, where a recoil jet would be expected. The next curve (purple dash) is an additional isolation cut based on tracks. The remaining cuts rely on shower profile information as detected in the various segmentations of the calorimeter. These include, in order: energy sharing in adjacent towers, the transverse shape at the shower max detector, the total number of strips in the SMD, and energy deposition in the post-shower detector (the deepest scintillator layer). The yields after all cuts are compared in Fig. A.2.5, where one sees that a signal-to-background ratio greater than 1:1 is achieved down to below 25 GeV. Tracking is not yet fully implemented in the simulations; some cuts assumed that neutrals (e.g., photons from π0 decays) do not produce tracks. To estimate the effect of conversions before the tracking, we assumed that 30% of these, corresponding roughly to 0.3 radiation lengths of material, convert and produce tracks, resulting in the plot at the right in Fig. A.2.5. The better-than-1:1 signal-to-background ratio is preserved above ~28 GeV.
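
    As an illustration of the first of these cuts, the sketch below implements a generic calorimeter isolation requirement in C++. The cone radius and isolation fraction are assumed values for illustration only, and the away-side veto would be implemented analogously by summing E_T in the region opposite the candidate.

        // Illustrative sketch of a calorimeter isolation cut for e/h discrimination;
        // the cone radius and isolation fraction are assumptions, not the analysis values.
        #include <cmath>
        #include <vector>

        struct Tower { float eta, phi, et; };
        constexpr float kPi = 3.14159265f;

        // Require that the candidate 3x3 patch carries at least `minFrac` of the
        // total calorimeter E_T found in a cone of radius R around the candidate.
        bool passIsolation(float patchEt, const std::vector<Tower>& towers,
                           float etaCand, float phiCand,
                           float coneR = 0.7f, float minFrac = 0.9f) {
            float coneEt = 0.f;
            for (const auto& t : towers) {
                float dEta = t.eta - etaCand;
                float dPhi = std::fabs(t.phi - phiCand);
                if (dPhi > kPi) dPhi = 2.f * kPi - dPhi;           // wrap azimuth
                if (dEta * dEta + dPhi * dPhi < coneR * coneR)
                    coneEt += t.et;                                // includes the candidate towers
            }
            return coneEt > 0.f && patchEt / coneEt > minFrac;
        }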

Figure A.2.4. Electron/hadron discrimination. The left plot shows the detected hadron E_T spectrum as various cuts are successively applied. The black curve is the detected spectrum. (Note that prefiltering in the MC generation gives the cutoff below 20 GeV.) Both plots are normalized to 300 pb^-1. The cuts are described in the text.

    In Fig. A.2.6 we show the predicted parity-violating A_L sensitivities vs. E_T for 1 < η < 2. The statistical errors are based on 300 pb^-1 and a beam polarization of 70%, and assume realistic backgrounds as described above. The theory curves [De05, De08, Gl01] are selected to reflect the currently allowed range of Δu-bar and Δd-bar. The backward asymmetries (lower row), extracted by flipping the polarization for the beam headed away from the EEMC, are most sensitive to the anti-quark polarizations at low x.
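
    For orientation, the size of the statistical errors can be gauged from the standard single-spin asymmetry estimate (a simplification that neglects background dilution),

        \delta A_L \;\approx\; \frac{1}{P\sqrt{N}},

    where P = 0.7 is the beam polarization and N is the number of accepted W candidates in a given E_T bin for 300 pb^-1.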

    Figure A.2.5. Comparison of electron (red) and hadron (black) yields for 300 pb^-1 after all cuts are applied. The left plot shows the case corresponding to Fig. A.2.4. At right is the corresponding plot when it is assumed that 30% of neutral tracks convert before the tracking begins.

Figure A.2.6. Predicted parity-violating A_L sensitivities vs. E_T for 1 < η < 2. Statistical errors and the theoretical curves are described in detail in the text.
