By Tim Head, Vladimir Gligorov

The data volumes (O(TB/s)) created at the Large Hadron Collider (LHC) are too large to record in full. Typical rejection factors are O(100-1000), and the goal is to reject an event using as little CPU time as possible. More powerful decision features take more CPU time to construct, so the discrimination power of the decisions is limited by the available CPU budget.
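
The bandwidth arithmetic behind this constraint can be made concrete with a rough sketch. The input rate of 1 TB/s used below is an illustrative order-of-magnitude assumption, not a measured figure for any specific experiment.

```python
# Illustrative bandwidth reduction: an assumed O(TB/s) input rate combined
# with rejection factors of O(100-1000) leaves only a few GB/s to record.
input_rate_tb_s = 1.0  # assumed order-of-magnitude input rate, TB/s

for rejection in (100, 1000):
    output_gb_s = input_rate_tb_s * 1e3 / rejection  # TB/s -> GB/s after rejection
    print(f"rejection factor {rejection}: ~{output_gb_s:.0f} GB/s recorded")
```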

Current approaches rely on hand-crafted features and hand-tuned decision cascades. We propose to explore the possibility of using deep learning techniques to construct an end-to-end system that takes the raw electronics data as input and learns the decision function of the trigger system (the global HLT decision). Deep neural networks can be evaluated efficiently on dedicated hardware and could be used to preempt the decision of the full system.
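
A minimal sketch of this idea, not the authors' implementation, is a small feed-forward network trained to reproduce the binary HLT accept/reject decision from a flattened raw-data vector. The input size, network architecture, and synthetic training data below are illustrative assumptions; in practice the inputs would be the digitised detector readout and the labels the corresponding global HLT decisions.

```python
# Sketch: learn the trigger decision function end-to-end from raw input.
import torch
import torch.nn as nn

N_INPUT = 4096  # assumed size of the flattened raw electronics data per event

model = nn.Sequential(
    nn.Linear(N_INPUT, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),  # probability that the full HLT would accept the event
)

loss_fn = nn.BCELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for (raw data, HLT decision) pairs; real training would use the
# released HLT inputs and the recorded global HLT decisions as labels.
x = torch.randn(512, N_INPUT)
y = torch.randint(0, 2, (512, 1)).float()

for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Once trained, such a network could be evaluated on dedicated hardware (FPGAs or GPUs) far faster than the full reconstruction-based trigger, which is what makes preempting the full decision attractive.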

For LHCb we estimate that 750 GB of data contain about 10,000,000 candidates, of which roughly 100,000 will be accepted by the trigger system.
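
These numbers imply an average candidate size of roughly 75 kB and a trigger acceptance of about 1%, as the small sketch below works out.

```python
# Back-of-the-envelope numbers from the LHCb estimate above.
data_volume_bytes = 750e9
n_candidates = 10_000_000
n_accepted = 100_000

print(f"average candidate size: {data_volume_bytes / n_candidates / 1e3:.0f} kB")  # ~75 kB
print(f"trigger acceptance:     {n_accepted / n_candidates:.1%}")                  # 1.0%
```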

To kickstart research in this new area, each of the four LHC experiments should release a few seconds of HLT output as well as the corresponding input. This would allow collaboration with experts from the field of machine learning and cooperation between the experiments to solve this challenging problem.


Metadata

Zenodo.48784
Published: 1 April 2016
License: CC BY