The ALICE experiment at the Large Hadron Collider (LHC) will produce up to 75 MByte per event at event rates of up to 200 Hz, resulting in a data rate of ~15 GByte/s. Online processing of the data is necessary in order to select interesting events or sub-events (high-level trigger), or to compress the data efficiently by modeling techniques. Both require fast parallel pattern recognition. Processing this data at a bandwidth of 10-20 GByte/s requires a massively parallel computing system. One possible solution for processing the detector data at such rates is a farm of clustered SMP nodes based on off-the-shelf PCs, connected by a high-bandwidth, low-latency network. © 2002 Springer-Verlag Berlin Heidelberg.
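The quoted aggregate rate follows directly from the per-event size and the event rate. A quick sanity check of the figures, using only the numbers stated in the abstract:

```python
# Back-of-the-envelope check of the throughput figures quoted above.
event_size_mb = 75    # MByte per event (from the abstract)
event_rate_hz = 200   # events per second (from the abstract)

rate_mb_s = event_size_mb * event_rate_hz  # aggregate rate in MByte/s
rate_gb_s = rate_mb_s / 1000               # in GByte/s

print(f"{rate_mb_s} MByte/s = {rate_gb_s:.0f} GByte/s")  # 15000 MByte/s = 15 GByte/s
```

This confirms the ~15 GByte/s figure, which sits within the 10-20 GByte/s processing-bandwidth range the abstract targets.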
Helstrup, H., Lien, J., Lindenstruth, V., Röhrich, D., Skaali, B., Steinbeck, T., … Wiebalck, A. (2002). High level trigger system for the LHC ALICE experiment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2329 LNCS, pp. 494–502). Springer Verlag. https://doi.org/10.1007/3-540-46043-8_50