Agenda
- Fiducial BBH 128s
- analytic waveforms
- 15D Gaussians
- Known events
  - 150914
  - 151226
  - 170814
- Comparison page for S190521r. Looking at this to compare bilby and L.I. using the O3 waveform interface and to check that phi_1 agrees. This confirms that the differences seen in GW150914 for phi_1 are due to the waveform interface.
Minutes
Fiducial BBH 128s page
Greg: General introduction to the issue
Simon: Nothing extra to say beyond my email, but it is hard to know what to say about this.
Matt: We just have to trust that the sampler is sampling the posterior. I wouldn't hold up the review waiting on the L.I. results, but it would be nice to see them.
Greg: Suggests the reviewers write a comment on the wiki page, and Greg to email the PE group to ask for input.
Matt: If you plot spin amplitudes against component masses, the broad base of the mass posteriors comes from low spins while the other part comes from high spins. Physically I don't know why this would be the case, but perhaps someone has some insight? (I.e. compare a_1, a_2, mass_1, and mass_2.)
ACTION ITEM: Matt to write a comment on the 128s fiducial BBH page, Greg to email the PE group for comments.
Analytic waveforms
Colm: Nik started working on this and we saw some issues between generating waveforms in pycbc and bilby, which came down to transcription errors. We found one remaining difference: the waveforms disagree at the percent level due to differences in G, Newton's constant. There are known differences in the constant definitions between LALSuite and astropy (on which bilby's constants are based). After manually correcting for the difference, the waveforms match.
Colm: The waveforms produced by pycbc and bilby are arbitrarily close
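The constant mismatch described above can be illustrated with a short sketch. The two G values below are the CODATA 2010 and 2014 recommendations, used purely as an example of a mismatch between constant sets (the values actually shipped should be read from lal.G_SI and astropy.constants.G); the cycle count is an illustrative assumption, not a measured number.

```python
import math

# Illustrative check of how a small offset in Newton's constant propagates
# into the waveform phase. These are the CODATA 2010 and 2014 recommended
# values, used only as an example of a mismatch between constant sets; the
# values actually used by LALSuite and astropy should be read from
# lal.G_SI and astropy.constants.G respectively.
G_A = 6.67384e-11  # m^3 kg^-1 s^-2 (CODATA 2010)
G_B = 6.67408e-11  # m^3 kg^-1 s^-2 (CODATA 2014)

frac_dG = abs(G_B - G_A) / G_A

# To leading (Newtonian) order the frequency-domain phase scales as
# (G * M_chirp)^(-5/3), so a fractional offset dG/G shifts the accumulated
# phase by roughly (5/3) * dG/G of the total phase.
frac_dphi = (5.0 / 3.0) * frac_dG

n_cycles = 200  # rough number of in-band cycles, purely illustrative
dphi = frac_dphi * n_cycles * 2.0 * math.pi

print(f"fractional difference in G: {frac_dG:.2e}")
print(f"phase offset over {n_cycles} cycles: {dphi:.3f} rad")
```

Even a ~1e-5 fractional offset in a constant accumulates over the many in-band cycles of a signal, which is why such small definition differences can become visible when comparing waveforms directly.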
Matt: As the person who asked for this test, it was mainly to check that parameters are passed through in the same way between the different samplers (either L.I. or pycbc). I was thinking more of cases where a spin parameter or a tidal parameter has a transcription error and the parameters end up the wrong way around. I would prefer to see the same test with tidal parameters, and was wondering whether this could be incorporated into the C.I. and run over all the waveforms.
ACTION ITEM: If a similar thing can be done with tidal parameters that would be nice.
Colm: Don't need L.I. comparison at this stage.
ACTION ITEM: Greg to create an issue about making a C.I. check for all waveforms
15D Gaussian
Greg: 15D Gaussian next, but I don't see Moritz.
Matt: Happy to sign this one off, as it is similar to what is seen elsewhere, and when you add enough runs together the systematic errors disappear.
Known events
Greg: 150914 summary: the ACTION ITEM from last week was to look at phi_1 and phi_2, which showed some differences.
Colm: If you look at an O3 run on an event (see the agenda item on S190521r), things look very similar between bilby and L.I. In the 150914 comparison, the bilby posteriors were flat for phi_1 and phi_2 while L.I. had structure; for the O3 events (which use the new waveform interface) both are flat. This suggests the structure seen in the 150914 comparison was due to the waveform interface in the O2 runs.
Simon: The waveform conventions are causing us headaches. Does this mean we are happy with the known-event comparison for 150914?
Matt: We can sign that one off.
Greg: Mini celebration
Greg: Let's look at 151226.
Colm: The L.I. distance posteriors are bugged.
Matt: Things look good to me (for the publication tab). Something we may have discussed before: looking at the 1D PDFs and the matched-filter SNR, have we resolved this issue?
Charlie (in chat): this is a PESummary issue which isn't yet resolved. Will be fixed soon.
Simon: There is a difference of several solar masses in the upper limits on mass_1: https://ldas-jobs.ligo.caltech.edu/~charlie.hoy/pe/bilby_lalinference_comparison/GW151226/html/Comparison_mass_1.html bilby finds 19, MCMC finds 22. I assume this is within the sampling uncertainties? These are considerable variations.
Matt: In papers we have generally made a vague statement about systematic uncertainties in our errors. Maybe we should quote errors on our errors.
Simon: I think there is at least one paper where they used SEOBNRv3 on 150914 where I remember seeing uncertainties on percentiles.
Greg: It is perhaps an aspirational goal for the PE group to quote uncertainties on our uncertainties (this can be done in bilby or L.I.).
Matt: There is a Python package from the Cambridge group, nestcheck (linked in the chat below), which does this (it requires multiple runs). Something we can do.
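The idea of quoting an error on an error can be sketched with a plain bootstrap over posterior samples: resample with replacement and recompute the 90% upper limit each time. This is a minimal sketch assuming independent samples (real nested-sampling runs have correlations that nestcheck handles more carefully), and the samples below are synthetic, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for posterior samples of mass_1 (solar masses);
# real samples would be read from the bilby / L.I. result files.
samples = rng.normal(loc=15.0, scale=3.0, size=5000)

def upper_limit(x, q=90.0):
    """q-percent credible upper limit estimated from posterior samples."""
    return np.percentile(x, q)

# Bootstrap: resample with replacement and recompute the upper limit,
# so the spread of the resampled limits estimates the sampling error.
n_boot = 1000
boot = np.array([
    upper_limit(rng.choice(samples, size=samples.size, replace=True))
    for _ in range(n_boot)
])

print(f"90% upper limit: {upper_limit(samples):.2f} "
      f"+/- {boot.std():.2f} (bootstrap sampling error)")
```

Note that this only captures the statistical error from a finite number of samples; it says nothing about systematic differences between samplers or waveforms, which is what the mass_1 discrepancy above may partly reflect.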
Colm: Summary: from our initial look at these, it seems like things will work out nicely for this event. Differences seem to be consistent apart from the known issues (waveform interface, matched-filter SNR, etc.).
Matt: I don't think we need much longer to look over them. I think we can sign 151226 off.
ACTION ITEM: Find MCMC runs for 170814
ACTION ITEM: Assemble PE group to do the rest of the runs
ACTION ITEM: Prepare calibration discussion for next week
Greg: In summary, we are close to wrapping up this review with the only major things left being to run on all events and check the calibration.
Chat
<08:04:13> "Greg Ashton [h]": https://git.ligo.org/lscsoft/bilby_pipe/wikis/O3-review/minutes/190813/
<08:05:27> "Charlie Hoy": https://ldas-jobs.ligo.caltech.edu/~charlie.hoy/LVC/projects/summary_pages/fiducial_bbh_128s_ROQ/html/fiducial_bbh_128s_ROQ_fiducial_bbh_128s_ROQ_mass_1.html
<08:20:08> "Charlie Hoy": If I remember correctly, Vivien said a few months ago that there exists a function in lalinference that allows you to generate a waveform for a set of samples. I can try and track this function down...
<08:20:20> "Colm Talbot": That would be great Charlie
<08:24:25> "Charlie Hoy": No problem
<08:29:26> "Greg Ashton [h]": https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/home.html
<08:30:06> "Simon Stevenson": doesn't seem to be loading for me either
<08:30:12> "Matt Pitkin [h]": I think CIT is just being slow (maintainance!?)
<08:30:14> "Charlie Hoy": John Veitch's webdir
<08:31:14> "Isobel Romero-Shaw": Just updated the agenda on the wiki with links to all the PESummary pages for each event (although I can't open them either)
<08:31:20> "Charlie Hoy": The location of the nest and mcmc samples can be found on this page: https://git.ligo.org/lscsoft/bilby_pipe/wikis/O3-review/known-events
<08:31:45> "Colm Talbot": Hand up
<08:33:19> "Simon Stevenson": That looks good to me
<08:33:49> "Matt Pitkin [h]": Cool
<08:34:34> "Colm Talbot": We're happy on our end
<08:35:36> "Greg Ashton [h]": https://git.ligo.org/lscsoft/bilby_pipe/wikis/O3-review/known-events
<08:36:07> "Matt Pitkin [h]": yep
<08:36:13> "Isobel Romero-Shaw": yes!
<08:36:14> "Simon Stevenson": yup
<08:36:34> "Charlie Hoy": I could only find nest runs for 170814. I can ask Jacob Lange for details as I think he was the person who was running this event.
<08:38:28> "Colm Talbot": Hand up
<08:39:41> "Simon Stevenson": What percentile do these contours represent Charlie?
<08:39:50> "Charlie Hoy": 90%
<08:39:50> "Isobel Romero-Shaw": The bug is much clearer in the plots for GW170814
<08:39:58> "Simon Stevenson": OK thanks
<08:40:52> "Charlie Hoy": Yes, this is a problem with PESummary
<08:41:06> "Charlie Hoy": Sorry I have not got round to fix this yet!
<08:41:15> "Charlie Hoy": I will do it tomorrow
<08:41:29> "Simon Stevenson": There are several solar masses difference in the 90% upper limit for m1
<08:41:47> "Simon Stevenson": I assume this is within sampling uncertainty
<08:42:10> "Isobel Romero-Shaw": Matched filter SNR for GW151226 read from bilby results file: https://ldas-jobs.ligo.caltech.edu/~isobel.romero-shaw/known_events/GW151226/results_constraints_calibration_v0-5-4/result/matched_filter_snr.png
<08:42:15> "Simon Stevenson": https://ldas-jobs.ligo.caltech.edu/~charlie.hoy/pe/bilby_lalinference_comparison/GW151226/html/Comparison_mass_1.html
<08:45:58> "Simon Stevenson": see for example fig 1 in https://arxiv.org/abs/1606.01210
<08:49:29> "Simon Stevenson": I agree with Greg, this looks ok to me
<08:50:15> "Matt Pitkin [h]": The package I mentioned is https://github.com/ejhigson/nestcheck
<08:51:04> "Matt Pitkin [h]": and here for sampling error estimates https://nestcheck.readthedocs.io/en/latest/demos/quickstart_demo.html#Bootstrap-sampling-error-estimates
<08:53:01> "Isobel Romero-Shaw": Just that I have a matched filter SNR plot for this event too https://ldas-jobs.ligo.caltech.edu/~isobel.romero-shaw/known_events/GW170814/results_v0-5-4/result/matched_filter_snr.png
<08:55:20> "Simon Stevenson": Sounds good
<08:56:03> "Simon Stevenson": Bye!