High Energy Physics simulations

Computer simulations of high-energy particle collisions provide a detailed theoretical reference for the measurements performed at accelerators like the Large Hadron Collider (LHC). Against this reference, models of both known and 'new' physics can be tested, down to the level of individual particles.

By looking for discrepancies between the simulations and the data, we are searching for any sign of disagreement between the current theories and the physical universe. Ultimately, such a disagreement could lead us to the discovery of new phenomena, which may be associated with new fundamental principles of Nature.

Less spectacular discrepancies also help guide us towards the most accurate possible description of the Standard Model of Particle Physics and its phenomena: they refine the simulations of the known physical laws by pointing to areas where current simulations succeed and where they fail.

How do you know it's new?

Solving multi-particle dynamics in relativistic quantum field theory is - almost - as hard as constructing the accelerators and experiments that perform the collisions in the real world. A small part of such a calculation can be the topic of an entire PhD thesis for a gifted student, and a full calculation usually entails a multi-year effort by a collaboration of many theoretical high-energy physicists. Sophisticated calculations typically also require enormous computing resources, for instance to sample even a tiny but representative fraction of the infinitely many quantum histories that can contribute to every single "event".
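
To get a feel for why the computing cost grows so quickly, consider the following toy sketch in Python. It is not a real event generator - the "history" below is just a placeholder integrand - but it illustrates one essential feature: a Monte Carlo estimate of an observable is an average over randomly sampled histories, and its statistical uncertainty shrinks only as 1/sqrt(N), so each extra digit of accuracy costs a factor of a hundred in samples.

    # Toy Monte Carlo sketch (not a real event generator): estimate an
    # observable as an average over randomly sampled "histories".
    import math
    import random

    def sample_history(rng):
        # Placeholder weight for one history; a real generator would
        # evaluate a matrix element over a high-dimensional phase space.
        x = rng.random()
        return math.exp(-x) * math.cos(10.0 * x) ** 2

    def mc_estimate(n_samples, seed=1):
        rng = random.Random(seed)
        weights = [sample_history(rng) for _ in range(n_samples)]
        mean = sum(weights) / n_samples
        var = sum((w - mean) ** 2 for w in weights) / (n_samples - 1)
        return mean, math.sqrt(var / n_samples)  # error ~ 1/sqrt(N)

    for n in (100, 10_000, 1_000_000):
        val, err = mc_estimate(n)
        print(f"N = {n:>9}: estimate = {val:.5f} +- {err:.5f}")

Real simulations face the same scaling, but in integration spaces of vastly higher dimension than this one-dimensional toy.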

Most often, therefore, observed discrepancies point us not to a breakdown of the theory itself, but to a problem with our ability to model all aspects of it with the extremely high accuracy achievable by the intense beams and sensitive detectors used to perform the real-world measurements to which the calculations are compared.

It is therefore crucial to be able to distinguish between the breakdown of a model of a physical theory and the breakdown of the theory behind it. We distinguish three possible sources of discrepancy:

  1. Tuning: A discrepancy is found, but the same model can still be made to describe all the available data by a readjustment of its parameters. Thus, while no new phenomenon has been uncovered, the model has been better constrained, and the improved constraints will factor into future tests of the same model.
  2. Modeling: A discrepancy is found that no parameter set of the model is able to describe. A phenomenon not included in the model has been (re)discovered. A careful analysis of the approximations used in the model must then be brought to bear to determine whether the model could be improved by including previously ignored parts of the same underlying theory, or ...
  3. Eureka!: A discrepancy is found which fundamentally contradicts the underlying theory. In this case, a truly new phenomenon of nature has been discovered, whose origin must then be puzzled out by further tests and theorizing.


What's What?

It all comes down to accuracy. When a discrepancy is found, the one central question is: does it lie within, or outside, the uncertainty allowed by the inaccuracy of the calculation? Should we build a better computer model? Or do we need a better theory of physics? And the cycle repeats: improved models must again be tested, constrained, vetted in the fires of empirical testing, and eventually broken and discarded to give way to even more detailed descriptions of Nature.
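
As a deliberately simplified illustration of that central question (all numbers below are invented), a measured value and a simulated prediction can be compared through their combined uncertainty:

    # Simplified compatibility check between a hypothetical measurement
    # and a hypothetical simulation result; all numbers are invented.
    import math

    measured, sigma_meas = 1.23, 0.05      # measurement and its error
    predicted, sigma_theory = 1.10, 0.08   # simulation and its error

    # Combined uncertainty, assuming independent Gaussian errors.
    sigma_tot = math.sqrt(sigma_meas**2 + sigma_theory**2)
    pull = (measured - predicted) / sigma_tot
    print(f"pull = {pull:.2f} standard deviations")

A pull of around one is unremarkable. By convention, particle physics demands roughly five standard deviations before a discovery is claimed, and even then only after the modelling uncertainties themselves have been scrutinized.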

The achievable accuracy of particle physics simulations depends both on the sophistication of the simulation itself, driven by the development and implementation of new theoretical ideas, and, crucially, on the available constraints on the free parameters of the models. Using existing data to constrain the latter is referred to as 'tuning'.
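
A minimal sketch of what tuning means in practice, under toy assumptions - one free model parameter, three invented data points, and a crude chi-squared scan - is given below; real tunes fit many parameters to many measured distributions simultaneously.

    # Toy 'tuning': constrain a model's free parameter by minimizing a
    # chi-squared against data. Model, data, and parameter are invented.
    import math

    # Hypothetical measurement: (x, value, uncertainty) per bin.
    data = [(0.5, 2.11, 0.10), (1.5, 1.05, 0.08), (2.5, 0.52, 0.06)]

    def model(x, p):
        # Toy prediction: a falling exponential with free slope p.
        return 3.0 * math.exp(-p * x)

    def chi2(p):
        return sum(((y - model(x, p)) / err) ** 2 for x, y, err in data)

    # Crude one-dimensional scan instead of a proper minimizer.
    best_p = min((0.01 * i for i in range(1, 200)), key=chi2)
    print(f"best-fit p = {best_p:.2f}, chi2 = {chi2(best_p):.2f}")

If the chi-squared per data point is of order one at the best fit, the discrepancy was a tuning issue (case 1 above); if no parameter value yields an acceptable chi-squared, the model itself is missing something (cases 2 and 3).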