A Data Acquisition System for a Test Beam Experiment
Test beam experiment T-917, which ran during January 2000 at Fermilab,
provides an example of a small data acquisition (DA) system built around a
minimal number of CAMAC and NIM modules, controlled and read out by a PC
running Linux. The detector consisted of 12 scintillators located
beneath a beam line and generated approximately 150 bytes per event, with
up to 40000 events per 40 second spill cycle.
The CAMAC modules used in T-917 were:
- Jorway 73A SCSI-CAMAC crate controller
- Fermilab RFD02 programmable LAM latch module
- (2) LRS 2249A 12 channel ADCs
- (2) Jorway 84 quad scalers
- LRS 2228A 8 channel TDC
Triggers processed through NIM logic were fed to the RFD02 LAM
trigger module. The Jorway 73A SCSI bus CAMAC crate controller was polled for a
LAM raised by the RFD02 when an event trigger was generated by the coincidence
of four scintillation counters. The RFD02 was then read out to determine the
trigger type. Polling for the LAM and the subsequent CAMAC block transfer
resulted in a 500 usec dead time per event, limiting the readout rate to
2 kHz.
Because the experiment wanted to use the existing Fermilab Histoscope
software product for online data analysis and the Fermilab sjyLX software
product for the Jorway 73A standard routines, the choices for the DA machine
were either an SGI workstation or a PC running Linux. Each has equivalent
support for Histoscope and sjyLX, but the PC was chosen because of
availability. The machine used was a 450 MHz Pentium II with 128 MB of memory,
running the Fermilab
packaged version of RedHat 5.2.1. The experiment wanted to store 50 runs of
data (each less than 200 MB, taken over 12 hours) on local disk, so two fast
hard drives and an Adaptec AHA-2940UW Ultra Wide SCSI controller were added to
this machine. The same SCSI controller was also used to archive
data runs stored on disk to an Exabyte 8505 tape drive.
An additional Adaptec AHA-2940AU SCSI
controller was added for the Jorway 73A. The machine also contained a 100 MB
Zip drive and a 3.5 inch floppy drive, but neither was used.
The T-917 DA uses the following Fermilab supported software products:
- UPS (Fermilab software product support tool) and
- UPD (Fermilab software distribution tool).
As the beam had a 40 second spill every 80 seconds, the
DA code could run as a single process.
During a spill, triggers were polled and events read from CAMAC and stored
in the PC's memory. After the spill, an external NIM pulse triggered the
readout to add an end of spill event and write a spill's worth of data
to disk. After writing the data to disk, the events were unpacked and
selected data were used to fill histograms and an ntuple in Histoscope.
The DA was
started by simply running the DA program and entering the run number.
The run data file name was derived from the run number.
Three Histoscope buttons were displayed for stopping a run and for
resetting the histograms or ntuple.
The buttons were polled every spill, or during periods when no triggers
arrived. Event counts, spill scalers and periods of no triggers were printed to
the window from which the rundaq program was started.
Any errors were also printed to this screen.
There is a program, offline, to read back events from disk and fill the
same histograms and ntuple.
Two other programs, peds and mips, were used to read and analyse calibration
data.
The DA sequence is set up as follows:
- Initialize hardware and software
After a spill trigger:
- Poll the Jorway 73A for a LAM (poll loop has a limit)
- Read the RFD02 LAM latch module for the trigger type
- Read out the event data if valid event trigger found
- Check state of Histoscope run control buttons
- Build data buffer
- Write data buffer to disk
- Update histograms
- Zero spill buffer
The data buffer structure and all lengths are based on 32 bit words. The actual
data consisted of 16 bit ADC words, 24 bit scaler words and 32 bit date/time
stamps, but data packing was avoided: it was not necessary to save
disk space, and the unpacked format makes the program easier to maintain.
The run file is organized as follows:
- Begin Run event containing
  - Run number
  - Version number of the event format
  - UNIX time/date stamp
- Spill buffer containing events
  - Buffer length
  - Events containing
    - Event length
    - Event type (EVENT)
    - Event number
    - ADC data
    - Scaler data
    - TDC data
  - End of Spill (EOS) event containing
    - EOS event length
    - Event type (SPILL)
    - Spill number
    - UNIX time/date stamp
    - Scaler information
Run data sets were transferred by hand using the tar command
to the Exabyte tape drive.
Future wish list
With more resources, the following improvements could be made:
- The current program relies on there being enough time to write data
to disk and fill histograms during an interspill period. This may be
adequate for cosmic ray test stands, but for DC beam the program should
be modified to put "spill" buffers into shared memory.
- Upgrade the RFD02 to the latest modifications (June 8, 1991).
The module would occasionally hang at high trigger rates if a trigger
arrived while the module was
being cleared at the end of an event. An external trigger hold-off
was added to the trigger logic to prevent this.
- Histoscope, with its ability to read in configuration files, was found to be
extremely useful for commissioning and online analysis. The
following improvements would be nice:
- Add circular ntuples (the program stops filling the event ntuple after
the first 200,000 events in a run).
- Add a label and color to a button, and
add the ability to create a button-type multiple-plot window to
hold the run control buttons (the program uses configuration files to make the
button titles long enough to read, and to make them appear on the screen in the
same place every run).
- Promote the log/linear selection from "Axis settings..." to a higher menu level.
- Add the ability to change the axis settings of all the plots in
a multiple-plot window at once (e.g. change the Y axis log/linear selection).
Dave Slimmer, Jon Streets