The first stage of this project was to allow enthusiasts in all parts of the world to contribute to the development of the most accurate possible theory references for collider measurements by donating computing time to run collider simulations. The results of those runs are used to generate the tens of thousands of plots on the mcplots.cern.ch web site, which scientists use on a daily basis to compare different models of the underlying physics to each other - and to the data.
Stage 1 Alpha
Completed July 2011. Alpha testing of the system internally at CERN began in October 2010. This first test phase was quite technical, concentrating on the requirements of the virtual-machine architecture, on the stability and steady supply of jobs, and on the development of the simulation packages themselves. During 2011, a small number of external volunteers were gradually connected as well, many of whom participated actively in testing and debugging this first edition of the system (thanks guys!). By the end of the alpha stage, in July 2011, the system operated smoothly and continuously with about 100 machines connected from around the globe. Approximately 5 billion collider events had been generated during this testing stage.
Stage 1 Beta
Completed February 2012. During beta testing, the main line of attack was the scalability of the system. With the alpha stage completed, we knew it could handle 100 machines. Would the same system be capable of dealing with 1,000, or with 10,000 machines? During a first beta trial in August 2011, volunteers wishing to participate could sign up for participation codes, which were issued incrementally. This brought the number of connected participants up to around 2,500. During the second phase, the participation-code restriction was removed, and the number of successfully connected hosts then gradually increased to about 6,500 machines by the end of the beta trial (with a noticeable spike around the CERN press release on Dec 13, 2011, concerning possible hints of a Higgs boson). A public call for naming the project was also intended to be put out in that connection, to replace the operational Test4Theory name, but the development name seems to have stuck and so will be retained for the time being. A common abbreviation is T4T.
Stage 1 Wrapper Standardization
Completed summer 2013. Starting in mid-2011, the official BOINC development team led by Rom Walton began to develop a standard "Vboxwrapper", beginning from the code of the T4T "CernVM Wrapper" developed primarily by Daniel Lombrana Gonzalez. In late 2012, testing of the new wrapper began in the T4T project, leading to an extended debugging period before its adoption as the T4T standard in summer 2013.
Stage 2
Also complete! In a second stage of the project, whose initial steps were developed concurrently with beta testing of stage one, we aimed to give users at home a more direct visualization of what was being calculated on their computers, coupled with more explanations on these web pages describing the distributions and parameters that the experts use to do the tuning. This was an ongoing development at CERN and elsewhere. The first display capabilities entered testing in Jan/Feb 2012, and the system is now essentially complete.
The LHC Challenge
Ongoing. On completion of the beta trial, the LHC challenge began: creating a "Virtual LHC". This requires the connected machines to generate an average of 40 million collider events per second over a sustained period, the equivalent of the design performance of the real-world Large Hadron Collider, thus demonstrating the readiness of the system to tackle truly large-scale efforts. This is an open-ended challenge, to run throughout the project lifetime, so watch out for special announcements related to the challenge.
Stage 3
Starting! Finally, the third stage of the project will build on the first two. It is envisioned to combine the at-home displays with the explanations of plots and parameters, allowing users at home to interactively join the tuning efforts alongside the experts. Participants could perhaps themselves find the best theory model for describing a set of collider data, or help to explore and document interesting model variations for use in estimating the uncertainties on the resulting models. This is also being actively pursued by a CERN-based team, concurrently with the development of stage two.