In the review, we compared against pycbc and agreed that, for IMRPhenomPv2 and IMRPhenomD_NRT, things were sufficiently good. However, Matt recommended that we eventually move this test into the C.I. to make the results robust against changes in the waveforms. This issue has been created to track progress on that.
Ah yes, sorry @khun.phukon, I forgot I said I'd do that. Okay, here is an outline.
Background
In the bilby review, @matthew-pitkin asked for a check of the waveform against either pycbc or LALInference to check that "the same input parameters produced the same waveform". This wiki page highlights the results of that effort.
Future checks
While the check against pycbc was useful, we ultimately want to check against lalsimulation itself rather than against pycbc or LALInference. The task at hand is to write such a check and then add it to the C.I. (continuous integration); this will ensure we are always verifying that we handle the waveforms correctly.
Break down
Here are my suggested steps. You don't have to follow them exactly, but they are a rough recipe for getting this issue closed.
Write a simple python script which:
Defines a waveform and a set of parameters (you may want to just use the script linked in the wiki page above)
Generates a waveform using a waveform_generator (again as in the link above)
Generates the same waveform without using bilby, using only lalsimulation. For this step, you may wish to dig into bilby to see how it does it, but the result needs to be checked independently to be sure it is "correct".
Checks that those two waveforms agree to within some tolerance. I'd use the "overlap" calculated in the scripts above, but waveform experts might be better placed to say what is best here.
Get a waveforms expert (or, if you are a waveforms expert, yourself) to check that the use of lalsimulation is standard and not doing anything weird
Post that script here with any comments
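As a concrete starting point, the overlap check in the last steps could be computed along these lines. This is a numpy-only sketch using the standard matched-filter inner product; the function name, arguments, and white-PSD default are illustrative, not taken from bilby or lalsimulation:

```python
import numpy as np

def overlap(h1, h2, psd, duration):
    """Noise-weighted overlap between two frequency-domain waveforms.

    Sketch only: assumes h1 and h2 are complex arrays sampled on the
    same frequency grid, and psd is the one-sided noise PSD on that
    grid (use ones for a white-noise comparison).
    """
    df = 1.0 / duration  # frequency resolution of the grid

    def inner(a, b):
        # Matched-filter inner product: <a, b> = 4 Re[ sum(a * conj(b) / Sn) ] df
        return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

    # Normalised overlap: 1 means identical (up to amplitude scaling)
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))
```

Since the overlap is normalised, identical waveforms (even rescaled ones) give exactly 1, which makes it a convenient single number for an assert-based test.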
Once you have the script, we can then work on getting it into the C.I. If you are comfortable doing this from the outset, please do so; if not, I'm happy to help, just ping me :)
Hi @khun.phukon - I think the easiest thing would be to modify this script to return a single number, i.e. the overlap, and then, with an assert statement, make sure the test passes if the overlap is close to 1 within some margin. A script that does a similar sort of test for noise realisations is in test/noise_realisation_test.py
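For illustration, such an assert-based test might be structured as below. This is purely a sketch: the dummy arrays stand in for the actual bilby and lalsimulation outputs, and the 1e-5 tolerance is an illustrative choice rather than a recommendation:

```python
import numpy as np

def test_waveform_overlap():
    # In the real test these two arrays would come from bilby's
    # waveform_generator and from a direct lalsimulation call;
    # here the same dummy signal stands in for both.
    h_bilby = np.exp(2j * np.pi * np.linspace(0.0, 1.0, 128))
    h_lalsim = h_bilby.copy()

    psd = np.ones(h_bilby.size)  # white PSD for the comparison
    df = 0.25                    # illustrative frequency resolution

    def inner(a, b):
        # Matched-filter inner product on the common frequency grid
        return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

    match = inner(h_bilby, h_lalsim) / np.sqrt(
        inner(h_bilby, h_bilby) * inner(h_lalsim, h_lalsim))

    # Fail the C.I. job if the overlap drifts away from 1
    assert abs(match - 1.0) < 1e-5
```

A function named `test_*` with a bare assert like this is the shape that pytest-style C.I. runners pick up automatically, which should make it straightforward to drop alongside test/noise_realisation_test.py.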