Tuning particle physics simulations to deliver the best possible description of all available experimental data can be extremely computationally intensive. This is due to the large and varied data sets available from previous collider experiments, and to the long simulation runs required to properly explore the tails of distributions. More excitingly, new data from measurements carried out at the LHC are continually being added to these sets, with published experimental results contributing fresh chunks of knowledge almost weekly.
To refine the models, the calculations must therefore continually be subjected to a barrage of tests against the measured data, for hundreds of different combinations of physical observables, beam energies, and model settings.
The process is well suited to distributed computing, since it uses only public data and since each 'run' can be arbitrarily subdivided, down to the simulation of a single collision event. This is where the volunteers connected to the vLHCathome project come in. Each volunteer computer is assigned chunks of events to simulate and process. Note that it is not just the processing of the events but the actual simulation itself that runs on the volunteer computer. So we really mean it when we say we ship you a virtual LHC and fire it up inside your computer.
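Because simulated collision events are statistically independent, splitting a run into chunks for volunteer machines is mechanically simple. The sketch below illustrates the idea only; the helper name `make_work_units` is hypothetical and not taken from the actual vLHCathome or BOINC scheduling code.

```python
def make_work_units(total_events, chunk_size):
    """Split a run of `total_events` independent collision events into
    (first_event, n_events) work units that can be simulated separately.

    Illustrative sketch of event-level subdivision, not the project's
    real scheduler.
    """
    units = []
    first = 0
    while first < total_events:
        n = min(chunk_size, total_events - first)  # last chunk may be short
        units.append((first, n))
        first += n
    return units

# A 10,000-event run split into chunks of up to 3,000 events:
units = make_work_units(10_000, 3_000)
print(units)  # [(0, 3000), (3000, 3000), (6000, 3000), (9000, 1000)]
```

Each tuple can then be sent to a different volunteer computer, and because no chunk depends on any other, results can be merged back in any order once they arrive.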