Ideas tagged with machinelearning

Learning algorithms for large-scale facilities

A large-scale facility can be described as an object producing, as output, datasets `D_i` that scientists analyse to obtain results `R_i`. The ideal data-analysis trajectory for an experiment is thus `D_i --> R_i --> P_i`, where `P_i` denotes the desired output: a publication. Most of the tim...

By Alberto Cereser

Learning computationally expensive functions

Analysing data from an experiment at the Large Hadron Collider at CERN requires substantial computing power. Beyond the experimental data itself, each individual analysis requires large amounts of simulated data. This production of simulated data is the single largest consu...

By Tim Head
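The idea above, replacing an expensive simulation with a cheap learned approximation, can be sketched as follows. Everything here is an illustrative assumption: the toy `expensive_simulation` stands in for a real detector simulation, and the polynomial regressor stands in for whatever model (trees, neural network) one would actually train.

```python
import numpy as np

# Hypothetical stand-in for an expensive simulation; in a real analysis
# this would be a full detector simulation taking minutes per call.
def expensive_simulation(x):
    return np.sin(x) + 0.5 * x**2

# Sample the expensive function at a modest number of training points...
rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, 200)
y_train = expensive_simulation(x_train)

# ...and fit a cheap surrogate (a degree-8 polynomial here; in practice
# gradient-boosted trees or a neural network would be used instead).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=8))

# The surrogate is then evaluated in place of the simulation.
x_test = np.linspace(-2.0, 2.0, 1000)
max_err = np.max(np.abs(surrogate(x_test) - expensive_simulation(x_test)))
```

The trade-off is accuracy for speed: `max_err` measures how faithfully the surrogate reproduces the simulation on held-out inputs, and each surrogate call costs a tiny fraction of a full simulation.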

Create standalone simulation tools to facilitate collaboration between HEP and machine learning community

Discussions at recent workshops have made it clear that one of the key barriers to collaboration between high energy physics and the machine learning community is access to training data. Recent successes in data sharing through the [HiggsML](http://doi.org/10.7483/OPENDATA.ATLAS.ZBP2.M5T8) and ...

By Kyle Cranmer, Tim Head, Jean-Roch Vlimant, Vladimir Gligorov, Maurizio Pierini, Gilles Louppe, Andrey Ustyuzhanin, Balázs Kégl, Peter Elmer, Juan Pavez, Amir Farbin, Sergei Gleyzer, Steven Schramm, Lukas Heinrich, Michael Williams, Christian Lorenz Müller, Daniel Whiteson, Peter Sadowski, Pierre Baldi

Kickstarting research into end-to-end trigger systems

The data volumes (_**O**_(TB/s)) produced at the Large Hadron Collider (LHC) are too large to record in full. Typical rejection factors are _**O**_(100-1000), and the goal is to reject an event using as little CPU time as possible. More powerful decision features take more CPU time to construct, therefor...

By Tim Head, Vladimir Gligorov
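The cascaded-rejection logic above can be sketched with a toy event stream. All numbers here are invented for illustration (the feature, the threshold, and the signal fraction are assumptions, not LHC values): a cheap first-stage cut discards the bulk of events so that expensive reconstruction would only run on the survivors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 100_000

# Toy event stream: background dominates; the rare "signal" events have
# a larger value of some cheap-to-compute feature (all numbers invented).
is_signal = rng.random(n_events) < 0.001
cheap_feature = np.where(is_signal,
                         rng.normal(5.0, 1.0, n_events),
                         rng.normal(0.0, 1.0, n_events))

# Stage 1: a cheap threshold cut, costing almost no CPU per event.
pass_stage1 = cheap_feature > 3.0

# Stage 2 (expensive reconstruction with more powerful features) would
# run only on the survivors, so its CPU cost scales with this small set.
n_survivors = np.count_nonzero(pass_stage1)
rejection_factor = n_events / n_survivors
signal_efficiency = np.mean(pass_stage1[is_signal])
```

With these toy numbers the cheap cut alone reaches a rejection factor in the _**O**_(100-1000) range while keeping most of the signal, leaving the CPU budget free for the more powerful second stage.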