The main motivations for this are that multicore provides better CPU and memory efficiency, and we only have to manage a single stream of tasks instead of two.
If you do not want to run multicore, you can disable it in your ATLAS@Home preferences via the option "Run only the selected applications".
By default the tasks will use as many cores as you have allocated to BOINC. To tune the number of cores the tasks use, you can create an app_config.xml file as described in this thread.
At some point we plan to discontinue the single-core app and run only multicore. The multicore app will also run on one core, so you can still continue with ATLAS even on a single-core machine.
The ATLAS@Home team
- support for VirtualBox 5.1
- support for OS X (finally!)
- graphics (same as single core app)
Thank you for your continued support of ATLAS!
Further announcements and testimonials from our scientists will be posted later!
Please allow the task to run for up to a minute before launching the graphics, as some processes need to start inside the virtual machine first. The interface was created by Giulio Isacchini, a student at the University of Oslo, in collaboration with the rest of the ATLAS@Home team. Please ask questions and give comments and suggestions for improvements, extra features etc. on the Graphics forum. The graphics are currently only available on the single-core version; we will add them to the multi-core version soon.
EDIT: There is a small configuration issue preventing the pages from loading on Mac hosts, and possibly Windows hosts too. This problem will go away in the next 3-4 hours.
Since 2012 we have been performing measurements with beam time dedicated to probing what we call the "dynamic aperture" (DA). This is the region in phase space where particles can move without experiencing a large increase in the amplitude of their motion. For large machines like the LHC, this is an essential parameter for ensuring beam stability and allowing long data taking at the giant LHC detectors. The measurements will be benchmarked against numerical simulations, and this is where you play an important role! Currently we are finalising a first simulation campaign and are in the process of writing up the results in a final document. As a next step we will analyse the second half of the measured data, for which a new tracking campaign will be needed. So, stay tuned!
Magnets are the main components of an accelerator, and non-linearities in their fields have a direct impact on the beam dynamics. The studies we are carrying out with your help are focussed not only on the current operation of the LHC but also on its upgrade, the High Luminosity LHC (HL-LHC). The design of the new components of the machine is in its final steps, and it is essential to make sure that the quality of the magnetic fields of the newly built components allows the highly demanding goals of the project to be reached. Two aspects are most relevant:
- Specifications for the field quality of the new magnets. The criterion for assessing whether a magnet's field quality is acceptable is based on the computation of the DA, which should be larger than a pre-defined lower bound. The various magnet classes are included in the simulations one by one, the impact on DA is evaluated, and the expected field quality is varied until the DA acceptance criterion is met.
- Dynamic aperture under various optics conditions, analysis of the non-linear correction system, and optics optimisation are essential steps to determine the field quality goals for the magnet designers, as well as to evaluate and optimise the beam performance.
The studies involve accelerator physicists from both CERN and SLAC.
Long story made short, the tracking simulations we perform require significant computer resources, and BOINC is very helpful in carrying out the studies. Thanks a lot for your help!
The SixTrack team
R. de Maria, M. Giovannozzi, E. McIntosh (CERN), Y. Cai, Y. Nosochkov, M-H. Wang (SLAC), "Dynamic Aperture Studies for the LHC High Luminosity Lattice", presented at IPAC 2015.
Y. Nosochkov, Y. Cai, M-H. Wang (SLAC), S. Fartoukh, M. Giovannozzi, R. de Maria, E. McIntosh (CERN), "Specification of Field Quality in the Interaction Region Magnets of the High Luminosity LHC Based on Dynamic Aperture", presented at IPAC 2014.
Y. Nosochkov, "Dynamic Aperture and Field Quality", DOE review of LARP, FNAL, USA, July 2016.
Y. Nosochkov, "Field Quality and Dynamic Aperture Optimization", LARP HiLumi LHC collaboration meeting, SLAC, USA, May 2016.
M. Giovannozzi, "Field quality update and recent tracking results", HiLumi LHC LARP annual meeting, CERN, October 2015.
Y. Nosochkov, "Dynamic Aperture for the Operational Scenario Before Collision", LARP HiLumi LHC collaboration meeting, FNAL, USA, May 2015.
We are investigating… Please be patient.
You can limit the multi-core app by using an app_config.xml file.
Below is an example that limits ATLAS_MCORE to 4 cores:
<app_config>
  <app_version>
    <app_name>ATLAS_MCORE</app_name>
    <avg_ncpus>4.000000</avg_ncpus>
    <plan_class>vbox_64_mt_mcore</plan_class>
    <cmdline>--memory_size_mb 5300</cmdline>
  </app_version>
</app_config>
You should adjust these two lines to your needs:
<avg_ncpus>4.000000</avg_ncpus>
<cmdline>--memory_size_mb 5300</cmdline>
The memory usage of the ATLAS_MCORE app is calculated with this formula:
memory = 1300 + (1000 * NumberOfCores)
so it is 5300 MB for 4 cores.
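The formula above can be sketched in Python (the function name `mcore_memory_mb` is just for illustration, not part of the BOINC client):

```python
def mcore_memory_mb(num_cores):
    """Memory (in MB) requested by the ATLAS_MCORE app:
    a 1300 MB base plus 1000 MB per core."""
    return 1300 + 1000 * num_cores

print(mcore_memory_mb(4))  # 5300, matching --memory_size_mb in the example above
```

If you change avg_ncpus, recompute --memory_size_mb with this formula so the two stay consistent.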
Thanks to Yeti for giving this recipe.
If you are interested in testing it on your machine, you need to allow test tasks in your project preferences to receive jobs from this new application.
This can be enabled by logging in to your account on the ATLAS@Home webpage, clicking "Your account", then "ATLAS@Home preferences", and checking the box next to "Run test applications".
The multi-core version checks the BOINC client for both the available CPU cores (the number of cores the client is configured to give to BOINC) and the available memory (the amount of memory the client is configured to give to BOINC) to decide how many cores to allocate to the virtual machine that runs the ATLAS job.
For an ATLAS multi-core job, the relation between memory size and number of CPU cores is defined by this formula:
memory (MB) = 1300 + (1000 * Number_of_CPU_cores)
For example, a 1 core job requires 2300 MB memory, and a 2 core job requires 3300 MB memory.
The number of CPU cores allocated to the virtual machine is also derived from this formula: the virtual machine gets the minimum of the number of cores supported by the client's available memory and the number of CPU cores the client makes available.
Currently, this test app can utilize from 2 to 8 cores, depending on the CPU cores and memory available from the client.
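The allocation logic described above can be sketched as follows. This is a simplified illustration; the function and variable names are assumptions, not the actual client code:

```python
def cores_for_vm(available_cores, available_memory_mb):
    """Choose the core count for the ATLAS multi-core VM:
    invert the memory formula (memory = 1300 + 1000 * cores),
    then cap by the cores the client offers and the app's 8-core maximum."""
    cores_by_memory = (available_memory_mb - 1300) // 1000
    cores = min(available_cores, cores_by_memory, 8)
    # The test app uses between 2 and 8 cores; below 2 no multi-core task fits.
    return cores if cores >= 2 else None

print(cores_for_vm(8, 5400))  # 4: the available memory only supports 4 cores
```

For example, a client offering 8 cores but only 5400 MB of memory would get a 4-core virtual machine, since 5400 MB covers at most 1300 + 4 * 1000 MB.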
By using multi-core jobs, we expect the runtime of a job to be close to runtime_of_single_core_job / number_of_cores, and it significantly reduces memory usage on clients that offer multiple cores to BOINC.
- The need for hardware-assisted virtualization for 64-bit
- Additional checks for issues that were not previously detected e.g. idling VMs.
- A technology change that still requires some hardening for all the issues that we experience when running on random machines around the world.
In the past 24 hours, 71.79% of hosts returned a successful task using v2619.31, while 35.38% returned failed tasks (7.18% of hosts returned both successful and unsuccessful tasks). This suggests that things are working, but we still have some improvements to make.
To help the situation, please could everyone periodically (no more than once per day) visit your account page and click the Task View link in the Computing and credit section to view the results of your tasks. If you see a computation error, click on the task to see the exit status code and the logging output.
The breakdown of failures by exit code is as follows.
- 35.14% -1073740791
- 27.17% 206
- 18.84% 194
- 6.16% 207
- 4.35% 255
- 3.26% -2135228415
- 2.54% 1
- 2.17% 203
- 2.17% 5
- 1.45% -186
- 0.72% 197
- 0.36% -2147467259
- 0.36% -185
We will address each one of these in subsequent posts and once we have arrived at a final solution, the issue will either be fixed to stop it occurring or an entry will be added to the FAQ to explain what is going on.
To make more effective use of volunteers running ramdisks, we have reduced the disk space limit from 6.5GB to 4GB. See thread here.
We see that most tasks use at most 2.2GB, so this should be safe, but please notify us of any problems related to this change.
Similar updates will follow later for other applications.
Many thanks to Rom Walton for providing these changes to the BOINC code.