For the cases `request_cpus > 16`, I manually modified the `.submit` files to request only 16 CPUs. In this sense, every setting above `16` instructs the sampler to use more cores than the job requested.
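A minimal sketch of the kind of `.submit` edit described above (the executable name and other fields are assumptions for illustration, not copied from the actual files; only the `request_cpus` line reflects the change discussed):

```
# Hypothetical HTCondor submit fragment -- field values are illustrative
executable   = run_bilby.sh
request_cpus = 16
queue
```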
This is the speed-up: the effective time per likelihood evaluation of the 1-core job divided by that of the n-core job.
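As a sketch of the speed-up definition above (the timing values here are hypothetical placeholders, not measured results from these runs):

```python
# Hypothetical per-likelihood evaluation times in seconds, keyed by core count.
# These numbers are illustrative only, not data from the scaling test.
time_per_eval = {1: 1.00, 2: 0.55, 4: 0.30, 8: 0.18, 16: 0.12}

# Speed-up relative to the 1-core job: t_1 / t_n.
speedup = {n: time_per_eval[1] / t for n, t in time_per_eval.items()}

print(speedup[1])  # 1.0 by construction
```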
* The speed-up continues above n=16: clearly more cores are available and bilby is using resources it did not request.
* There is some variability between results, likely due to variance in the performance of the randomly selected nodes.
### Scaling test using node 1997 on CIT
At Stuart's recommendation, I ran the job above again, restricting the submit to node1997 only, which has 8 available cores. The run script that generates the jobs and overwrites the submit files with the requirements etc. is: