Learning computationally expensive functions

Analysing data from an experiment at the Large Hadron Collider at CERN requires large amounts of computing power.

In addition to the experimental data itself, each individual analysis requires large amounts of simulated data. This...
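
One reading of this idea (an assumption, since the entry is truncated here) is to train a fast surrogate model on a limited number of expensive simulation runs and then query the surrogate instead of the simulator. A minimal sketch, with a placeholder expensive_simulation function standing in for a real detector simulation:

```python
# Sketch: replace an expensive computation with a fast learned surrogate.
# "expensive_simulation" is a hypothetical stand-in for a slow detector simulation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def expensive_simulation(x):
    # Placeholder for a computation that would normally take seconds or minutes per call.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(5000, 2))   # parameter points we can afford to simulate
y_train = expensive_simulation(X_train)        # run the slow simulation once, offline

surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# At analysis time, the surrogate answers almost instantly instead of re-running the simulation.
print(surrogate.predict(rng.uniform(-1, 1, size=(10, 2))))
```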

Modify and run other people's research code in your browser

Science makes progress by reusing results and building on them. For research software this is pretty hard (the people writing it often do not have the time to make slick installers like big libraries do). As a result, there is not as much reuse as...

Brief Ideas for the Data Science at LHC Workshop 2015

The Data Science @ LHC workshop was a resounding success. We do not plan to have traditional proceedings tied to individual talks, but we do want to capture the ideas that were generated during the workshop. With that in mind, we want to try...

Create standalone simulation tools to facilitate collaboration between HEP and the machine learning community

Discussions at recent workshops have made it clear that one of the key barriers to collaboration between the high energy physics and machine learning communities is access to training data. Recent successes in data sharing through the...
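
Purely as an illustration of what "standalone" could mean (the file format and observables below are invented for the sketch, not taken from the idea), a toy generator that writes labelled training events to plain CSV, so no HEP software stack is needed to produce or read the data:

```python
# Illustrative toy only: a self-contained "simulator" that writes labelled events to CSV.
import csv
import numpy as np

rng = np.random.default_rng(42)

def generate_event(is_signal):
    # Two made-up observables whose distributions differ between signal and background.
    if is_signal:
        return rng.normal(1.0, 0.3), rng.exponential(0.5)
    return rng.normal(0.0, 0.5), rng.exponential(1.0)

with open("toy_events.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["feature_1", "feature_2", "label"])
    for _ in range(10000):
        label = int(rng.random() < 0.5)
        writer.writerow([*generate_event(label), label])
```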

Kickstarting research into end-to-end trigger systems

The data volumes (O(TB/s)) created at the Large Hadron Collider (LHC) are too large to record. Typical rejection factors are O(100-1000), and the goal is to reject an event using as little CPU time as possible. More powerful decision...
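
To make the rejection-factor arithmetic concrete, here is a sketch (with synthetic scores standing in for the output of a real fast classifier) of choosing a trigger threshold that accepts roughly 1 in 100 background events:

```python
# Sketch: pick a trigger threshold that keeps roughly 1 in 100 background events,
# i.e. a rejection factor of O(100). Scores here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
background_scores = rng.random(1_000_000)   # stand-in for a fast classifier's output

target_rejection = 100
threshold = np.quantile(background_scores, 1 - 1 / target_rejection)

def trigger_accept(score):
    # The cheaper this decision is per event, the more of the O(TB/s) stream it can process.
    return score >= threshold

accepted = trigger_accept(background_scores).mean()
print(f"background acceptance: {accepted:.4f}  (~1/{target_rejection})")
```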

Welcome

This is the Journal of Brief Ideas - citable ideas in fewer than 200 words.

Before you can create a new idea, you'll need to log in using the link above. You also can't vote on existing ideas without signing in.

Voting

Click on the icon to vote on an idea. You can't vote on your own ideas.