A Data Acquisition System for a Test Beam Experiment

Test beam experiment T-917, which ran during January 2000 at Fermilab, provides an example of a small data acquisition (DA) system built around a minimal number of CAMAC and NIM modules, controlled and read out by a PC running Linux. The detector consisted of 12 scintillators located beneath a beam line; it generated approximately 150 bytes per event, and up to 40,000 events per 40-second spill cycle.

DA Hardware


The CAMAC modules used in T-917 were:

Triggers processed through NIM logic were fed to the RFD02 LAM Trigger Module. A Jorway 73A SCSI Bus CAMAC Crate Controller was polled for a LAM, raised by the RFD02 when an event trigger was generated by the coincidence of four scintillation counters. The RFD02 was then read out to determine the trigger type. Polling for the LAM and the subsequent CAMAC block transfer resulted in a 500 microsecond dead time, limiting the readout rate to about 2 kHz.


Because the experiment wanted to use the existing Fermilab Histoscope software product for online data analysis and the Fermilab sjyLX software product for the Jorway 73A standard routines, the choices for the DA machine were either an SGI workstation or a PC running Linux. Each has equivalent support for Histoscope and sjyLX, but the PC was chosen because of availability. The machine used was a 450 MHz Pentium II with 128 MB of memory, running the Fermilab-packaged version of Red Hat 5.2.1. The experiment wanted to store 50 runs of data (each less than 200 MB, taken over 12 hours) on local disk, so two fast 8 GB SCSI hard drives and an Adaptec AHA-2940UW Ultra Wide SCSI controller were added to this machine. The same SCSI controller was also used to archive data runs stored on disk to an Exabyte 8505 tape drive. An additional Adaptec AHA-2940AU SCSI controller was added for the Jorway 73A. This machine also contained a 100 MB Zip drive and a 3.5-inch floppy drive, but neither was used.

DA Software

The T-917 DA uses the following Fermilab-supported software products:

As the beam had a 40-second duty cycle every 80 seconds, the DA code could run as a single process. During a spill, triggers were polled and events were read from CAMAC and stored in the PC's memory. After the spill, an external NIM pulse triggered the readout to add an end-of-spill event and write a spill's worth of data to disk. After writing the data to disk, the events were unpacked and selected data were used to fill histograms and an ntuple in Histoscope.

The DA was started by simply running the DA program and entering the run number and a run comment. The run data file name was derived from the run number. Three Histoscope buttons were displayed for stopping a run and for resetting the histograms or the ntuple. The buttons were polled every spill, or during periods when no triggers were detected. Event counts, spill scalers, and periods of no triggers were printed to the window from which the rundaq program was started; any errors were also printed to this screen.

A separate program, offline, reads events back from disk and fills the online histograms.

Two other programs, peds and mips, were used to read and analyse calibration runs.

The DA sequence is set up as follows:

After a spill trigger:

The data buffer structure and all lengths are based on 32-bit words. The actual data consisted of 16-bit ADC words, 24-bit scaler words, and 32-bit date/time stamps, but data packing was avoided: it was not necessary for saving disk space, and the unpacked program is easier to maintain.

The Run file is organized as follows:

Run data sets were transferred by hand using the tar command to the Exabyte tape drive.

Future wish list

With more resources, the following improvements could be made:

T-917 Collaborators:


Dave Slimmer, Jon Streets