Test4Theory

New version v302.10 for docker

18 hours 49 minutes ago
In reply to Saturn911's message of 15 Feb 2026:
State is now "Waiting to run" because of some ATLAS WUs.
But runRivet.log is still being updated regularly.

From your answer I conclude that it should work, so I need to find the cause. The difference between your host and mine: your host runs a Linux OS and mine runs Windows.

Task at 100% and still running

1 day 12 hours ago
In reply to [TA]Assimilator1's message of 14 Feb 2026:
Yeah, I've had loads that appeared to be stuck at 100%, and I aborted quite a few too!
It would have been useful if the LHC guys had sent us a BOINC message warning about the long runtimes and the inaccuracy of the BOINC % for VM tasks; that would have avoided a lot of wasted crunching time :(.

My 'Show Graphics' button is greyed out, any idea why?
Read here: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6448&postid=53018

theory sur docker

2 days 21 hours ago
After testing, the Linux emulation for Docker loses me about 20% of overall performance on my PC, based on the computation times of other non-Docker projects. It's not worth it; I might as well stay on Linux.

Never-ending task. Restarted from 0 many times over many days, and still at 100% at PC shutdown...

3 days 7 hours ago
Cool, thanks for that last tip about the runRivet.log file. At least there I can keep an eye on it.
Very useful.



In reply to Crystal Pellet's message of 12 Feb 2026:
Yeah, the advice is ... let it run.
Theory tasks can run from 5 minutes to over 10 days, so don't look at the % done or remaining time. That's of no use.

The Show Graphics button is only available for VBox-tasks, not for the docker version.

I see your task is running in slot 0.
In that slot is a folder named 'shared'.
In the shared folder is a file called 'runRivet.log'.
The process running in the container writes the progress of the event processing into that file.
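
If you want to keep an eye on that file from a terminal, a minimal Python sketch like the one below will follow it as new lines are appended (the BOINC data directory and slot number are assumptions for a typical Linux install; adjust them for your host):

# A minimal sketch (my own illustration, not part of BOINC): follow the
# event-processing progress the container writes into
# <BOINC data dir>/slots/<slot>/shared/runRivet.log.
import time
from pathlib import Path

LOG = Path("/var/lib/boinc-client/slots/0/shared/runRivet.log")  # assumed path

def follow(path: Path, poll: float = 5.0):
    """Yield lines as they are appended to the log file."""
    with path.open("r", errors="replace") as fh:
        fh.seek(0, 2)                     # jump to the end of the file
        while True:
            line = fh.readline()
            if line:
                yield line.rstrip()
            else:
                time.sleep(poll)          # wait for the container to write more

if __name__ == "__main__":
    for entry in follow(LOG):
        print(entry)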

All my Theory tasks finish with the same error

6 days 19 hours ago
In reply to Crystal Pellet's message of 9 Feb 2026:
Your i7-8700 can't make a connection to CERN:

Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... Failed!
Probing /cvmfs/cvmfs-config.cern.ch... Failed!
Probing /cvmfs/grid.cern.ch... Failed!
Probing /cvmfs/sft.cern.ch... Failed!
Probing CVMFS repositories failed
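
For anyone who wants to reproduce that probe outside the task, here is a rough Python sketch (my own illustration, not the script the task actually runs); it simply tries to list each repository, which on a host with CVMFS and autofs configured should trigger the mount:

# Rough probe sketch: a failure here usually means the host cannot reach the
# CVMFS servers (or the local proxy), matching the log above.
import os

REPOS = [
    "/cvmfs/alice.cern.ch",
    "/cvmfs/cvmfs-config.cern.ch",
    "/cvmfs/grid.cern.ch",
    "/cvmfs/sft.cern.ch",
]

def probe(path: str) -> bool:
    try:
        return len(os.listdir(path)) > 0  # listing forces autofs to mount the repo
    except OSError:
        return False

if __name__ == "__main__":
    for repo in REPOS:
        print(f"Probing {repo}... {'OK' if probe(repo) else 'Failed!'}")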


Thank you...

No Tasks

1 week ago
Although "Unsent" still zero, I got this task: Theory_2922-4870672-586_0 batch 19655072 created 8 Feb 2026, 14:41:36 UTC

Edit: the number of Unsent tasks is slowly increasing now

196 (0x000000C4) EXIT_DISK_LIMIT_EXCEEDED - how come?

1 week ago
This workunit, containing the job "boinc pp z1j 8000 - - sherpa 2.2.8 default 100000 500", is failing because of EXIT_DISK_LIMIT_EXCEEDED:
https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=238981772
The docker task, which had been running for 20 hours with a peak disk usage of 7.48 GB, contains the following in its stderr:

got abort request from client
running docker command: kill boinc__lhcathome.cern.ch_lhcathome__theory_2922-4899195-500_1
program: podman
command output:
boinc__lhcathome.cern.ch_lhcathome__theory_2922-4899195-500_1
EOM
.
.
.
stderr end
running docker command: container rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4899195-500_1
program: podman
command output:
boinc__lhcathome.cern.ch_lhcathome__theory_2922-4899195-500_1
EOM
running docker command: image rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4899195-500
program: podman
command output:
Untagged: localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4899195-500:latest
Deleted: 79376ef46d917bc296637af0b05b32bfd9343f28f7fbcb0b1de6c1c506d72d39
Deleted: 9259743e983aaef18ae52b23e457320fa4e849e4e352edb3ad1bd4eece38cec6
Deleted: a42a951158504bf9a4debe713e6aa7365d4651bd8f02aa676adef32a66e324da
Deleted: e06ed2ad55322792d5d90223aced1f2c12f101443f88bc2e586a122413c7ebe0
Deleted: c4f9331961caded74fc715fe1b0e5a576df596340e8a2d50385e0fbdc1cd9ea6

I aborted mine.
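
For what it's worth, a rough way to watch a slot's disk footprint while a task runs is sketched below (a minimal Python illustration; the slot path and the 8 GB figure are assumptions for this example, not the workunit's actual rsc_disk_bound):

# Sum the slot directory's size and compare it with a placeholder limit.
import os
from pathlib import Path

SLOT = Path("/var/lib/boinc-client/slots/0")  # assumed slot path
LIMIT_GB = 8.0                                # hypothetical bound, for illustration only

def dir_size_bytes(root: Path) -> int:
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            fp = os.path.join(dirpath, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

if __name__ == "__main__":
    used_gb = dir_size_bytes(SLOT) / 1e9
    print(f"Slot usage: {used_gb:.2f} GB of an assumed {LIMIT_GB:.2f} GB bound")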

Theory in containers

1 week 1 day ago
In reply to homer__simpsons's message of 6 Feb 2026:
Some of the tasks I received completed in less than 20 minutes; some took 2-6 hours (displaying 99.987% or 100% for hours). This one (21+ hours) is displaying 100%.

The task: https://lhcathome.cern.ch/lhcathome/result.php?resultid=432278682
The workunit: https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=238897618
It is quite usual for a Theory task to run for more than 1 day, up to more than 10 days (very rare).
Your task with the herwig7 generator can last longer than 1 day. Let it run.
You can view the progress by using "Show Graphics" from BOINC Manager.
Highlight the running task and press the "Show Graphics" button. A local webpage will pop up. Click the Logs link and then running.log.

6+ day task?

3 weeks ago
When task duration exceeds certain limits, it is easy to get lost among thousands of seconds...
I made a simple spreadsheet for my own clarification.
It looks as follows:
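
The spreadsheet itself is not attached here, but the same kind of conversion can be sketched in Python (my own illustration, not the original sheet):

# Turn the large "elapsed seconds" values BOINC reports into days/hours/minutes.
def humanize_seconds(seconds: float) -> str:
    days, rem = divmod(int(seconds), 86400)
    hours, rem = divmod(rem, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{days}d {hours:02d}h {minutes:02d}m {secs:02d}s"

if __name__ == "__main__":
    for s in (300, 86400, 550000):        # 5 minutes, 1 day, roughly 6.4 days
        print(f"{s:>8} s  =  {humanize_seconds(s)}")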

would task restart from scratch after a machine crash?

4 weeks ago
Would the machine crash have caused the Theory task to completely restart?
When a PC crashes and/or BOINC is not shut down properly, VirtualBox is not able to save the VM state to disk.
After BOINC starts up again, the task either errors out or, if you're lucky/unlucky, the VM starts the job from the beginning.
The progress of the task in runRivet.log on disk is no longer updated, but the progress can still be seen with BOINC Manager's "Show Graphics".

BTW: https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=238232604 was a resend and the original client returned a valid result a bit too late. I would abort that task.

This gonna be long

1 month 1 week ago
Your 2 linked tasks: the first one's original result came in too late, but still before your resend was turned in. The second task may have restarted its VM several times, maybe starting from scratch.


Both tasks are still running and it's not clear whether:
- the first task, which already has a valid result, will grant me credit if I finish it;
- if I don't finish the second task within 11 days, will it get cancelled and will I lose all 11 days of running time? If a third replica gets sent out after 10 days, I don't think anyone will finish it within 24 hours (before my hard deadline), so that shouldn't be an issue.

Edit: after re-reading your reply several times I think I figured out the misunderstanding. In both WUs that I linked, I'm running the resends, not the initial tasks.

Hung Theory task?

2 months 3 weeks ago
There's no 'obvious error' reported back to the project.
In cases like that there is no log file from the scientific app sent back to the project.
Hence, there is nothing to analyse and the task is either marked as 'failed' or 'lost' after the due date.

Even the log snippets you posted do not clearly explain if/why the tasks got stuck.

So, how should the project decide what caused the failure?
It could be any of the following (list may be incomplete):
- hardware
- the OS
- VirtualBox
- BOINC
- vboxwrapper
- data from CVMFS
- scientific app

From the project's perspective there's only the overall task failure rate for the computer itself.
As already mentioned, for this computer it is less than 1%, covering all possible reasons.

Theory CPU Scheduling oddness

3 months 2 weeks ago
This is a bug in VirtualBox 7.2.4.

On a computer with an AMD CPU there's no known workaround so far.
...
After more testing...
Looks like the downgrade left the 7.2.4 kernel module on the system.
It now works after a cleanup and a fresh 7.2.2 installation (package from VirtualBox).

The kvm_amd module must remain blacklisted.
Checked
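
A quick way to verify that on a Linux host is sketched below (my own illustration; it only reads /proc/modules, so it reports whether kvm_amd is currently loaded, not whether a blacklist entry exists):

# Check whether the kvm_amd kernel module is loaded right now.
from pathlib import Path

def kvm_amd_loaded() -> bool:
    modules = Path("/proc/modules").read_text()
    return any(line.startswith("kvm_amd ") for line in modules.splitlines())

if __name__ == "__main__":
    if kvm_amd_loaded():
        print("kvm_amd is loaded - VirtualBox 7.2.x may misbehave on this host")
    else:
        print("kvm_amd is not loaded")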