Reproducible Research with End-to-end Machine Inference Using Deep Learning and Bayesian Statistics

Conventional statistical inference based on hypothesis testing and the *p*-value is fundamentally flawed. The general practice of data analysis involves too many post hoc decisions based on *p*-values, which unavoidably violates the assumptions of frequentist statistics and, worse, leaks ...

By Junpeng Lao

Kickstarting research into end-to-end trigger systems

The data volumes (_**O**_(TB/s)) created at the Large Hadron Collider (LHC) are too large to record. Typical rejection factors are _**O**_(100-1000), and the goal is to use as little CPU time as possible to reject an event. More powerful decision features take more CPU time to construct; therefore ...

By Tim Head, Vladimir Gligorov