Agenda
- Fiducial injections
- PP tests
- Known event comparison
Minutes
Minute taker: Greg
Attendance: Charlie Hoy, Colm Talbot, Greg Ashton, Paul Lasky, Shanika Galaudage, Simon Stevenson, Sylvia Biscoveanu, Nikhil Sarin.
Paul: Skipping the analytic waveforms item as Matt requested it, so we will wait for him to go through it in detail.
Fiducial injections
Paul: Starting on high mass fiducial injections looking at PDFs and CDFs
Simon: Lots of plots to stare at; unclear what the best use of time is.
Greg: To me things look sensible, but we shouldn't use these to show bias.
Colm: What version of the code was this run with?
Greg: As it says on the main review page, bilby 0.5.4.
Paul: My recommendation is we stare at these offline
Simon: Agree, things generally look sensible. For some parameters the injected values are in the tail, but that is probably due to the noise realisation. We can have a quick look at the other ones and convince ourselves they are also sensible.
Paul: Is the 128s run with the increased settings?
Greg: No, that run is ongoing. PP tests are also ongoing; they show that an increase in the number of walks resolves a bias for the 128s high-spin case - this is the effect we are seeing here.
Simon: Makes sense to me
Simon: At a preliminary level, I'm happy to say the 4s and high mass runs are okay. Will wait on the 128s
PP tests
Greg: Overview of the plots. The main takeaway: things look good for the high-mass case with default settings; an increased number of walks is needed for the low-mass case.
Simon: Overall things look good; the p-values are useful, though sometimes it is surprising when you look by eye. The study of changing settings is useful and a good exercise. Just changing the reference frequency is not quite good enough. Question: would the idea be to change to these more conservative default settings, which ensure the 128s run passes for everything?
Greg: Having defaults sufficient for the 128s runs to pass becomes prohibitive for the high-mass events. Instead we, like LALInference, need "expert knowledge" about which settings to use for different runs. However, that expert knowledge should not be wrapped up in people's heads - it needs to be written down and visible.
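For reference, the PP-test logic under discussion can be sketched with a toy model. This is our own illustration using a conjugate-Gaussian posterior, not the bilby implementation; `credible_level` is a hypothetical helper:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(42)

def credible_level(posterior_samples, injected_value):
    # Fraction of posterior mass below the injected value
    return np.mean(posterior_samples < injected_value)

# Toy conjugate-Gaussian model: prior N(0, 1), likelihood x ~ N(truth, 1),
# so the posterior is N(x/2, 1/2). For an unbiased sampler the credible
# level of the injected value is uniform on [0, 1] across injections.
n_injections, n_samples = 200, 4000
levels = []
for _ in range(n_injections):
    truth = rng.normal()            # draw the injection from the prior
    x = truth + rng.normal()        # one noisy observation
    posterior = rng.normal(x / 2, np.sqrt(0.5), size=n_samples)
    levels.append(credible_level(posterior, truth))

# A KS test of the credible levels against uniformity: small p-values
# flag the kind of bias that motivates the increased-walks settings.
pvalue = kstest(levels, "uniform").pvalue
print(f"KS p-value: {pvalue:.3f}")
```

In a real PP test the posteriors come from the full pipeline run on injected signals, and one curve/p-value is produced per parameter; the uniformity check is the same.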
Simon: I agree it doesn't make sense to double run times for high-mass events, and having some nice explanation written down is sensible.
Paul: Are we happy with this? One thing to add is that there will be minor modifications to the code following this, but we expect to have reruns.
Simon: happy to tick the 4s and the "doubled walks" 128s PP tests
Known event comparisons
Paul: Let's look at 150914 - the pesummary comparison pages: https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/home.html
Isobel: Have we not looked at these before?
Greg: No, these are the bilby vs MCMC vs Nest comparisons.
Charlie: bilby looks closer to MCMC than to Nest.
Paul: The most pertinent thing is not which sampler bilby is closest to, but that the variation between the MCMC and Nest posteriors is as large as their variation from bilby. I.e., bilby doesn't show any systematic offset in the comparison.
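Paul's point can be quantified with pairwise distances between the posterior sample sets. A minimal sketch using the two-sample KS statistic on synthetic stand-in samples (one possible metric, not necessarily what the comparison pages use; the sample arrays below are made up):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Stand-ins for one 1D marginal posterior (e.g. chirp mass) from three
# samplers; a real check would load the nest / mcmc / bilby samples.
nest = rng.normal(30.00, 1.00, 5000)
mcmc = rng.normal(30.05, 1.02, 5000)
bilby = rng.normal(30.02, 0.98, 5000)

# Pairwise two-sample KS statistics: bilby shows no systematic offset
# if its distance to either LALInference sampler is comparable to the
# nest-vs-mcmc distance.
pairs = {
    "nest-mcmc": (nest, mcmc),
    "nest-bilby": (nest, bilby),
    "mcmc-bilby": (mcmc, bilby),
}
stats = {name: ks_2samp(a, b).statistic for name, (a, b) in pairs.items()}
for name, stat in stats.items():
    print(f"{name}: KS statistic = {stat:.3f}")
```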
Colm (chat): the distance posterior looks bugged in L.I.
Simon: Charlie, there was some discussion on the email thread about the waveform plots on the home page. There is an apparent feature where the phase of the waveform seems to be out for some cases, do you understand why this is?
Charlie: I'm not 100% sure. Could be the maxL waveform - plan to compare to pycbc to check.
Simon: We have put plots like this in papers before, so somebody somewhere must understand them. Perhaps looking at multiple draws would be useful. This is an aside, though, from the main point of these pages, which is to compare bilby with the two different L.I. samplers. I completely agree with Paul (and the point made by Matt last week): there are larger differences between the two L.I. samplers than with bilby. It would be nice to know which is right at some stage, but for now I think we just have to accept this.
Colm: I agree there isn't one that is an outlier. On the waveform plots: something we should discuss with pesummary is showing the uncertainty in the waveform. Something else I've noticed is that the matched-filter SNR is coming out bizarrely in bilby - the number is kind of nutty. Charlie, is this something in PESummary or something in bilby?
Charlie: this is a problem in PESummary due to a conversion error.
Charlie: I wanted to draw your attention to something: the phi1 comparison (https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/html/Comparison_phi_1.html). There is a difference. I've dug into things (see the code pointers in the chat) but am unsure what is going on.
Colm: I wonder if this is the spin convention again? Linking to the O3 run (which has the same spin convention)
Charlie: other than this, everything looks consistent between the codes.
Paul: Agree this sounds like the waveform comparison.
Greg: Can we look at this now?
Colm: no, phi1 and phi2 are not computed for this comparison.
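For the record, the frame dependence Colm suggests is easy to illustrate: phi1 is the azimuthal angle of the in-plane spin, so quoting the spin components in a frame rotated by, e.g., the orbital phase at the reference frequency shifts phi1 by exactly that rotation. A toy numpy sketch (our own, not the LALInference or bilby conversion code; the numbers are arbitrary):

```python
import numpy as np

def azimuthal_angle(s_x, s_y):
    # phi1 = atan2(S1y, S1x), mapped to [0, 2*pi)
    return np.arctan2(s_y, s_x) % (2 * np.pi)

def rotate_in_plane(s_x, s_y, angle):
    # Rotate the in-plane spin components about the z-axis,
    # mimicking a change of reference frame / phase convention.
    c, s = np.cos(angle), np.sin(angle)
    return c * s_x - s * s_y, s * s_x + c * s_y

s1x, s1y = 0.3, 0.1    # arbitrary in-plane spin components
phase = 1.2            # e.g. orbital phase at the reference frequency

phi1_a = azimuthal_angle(s1x, s1y)
phi1_b = azimuthal_angle(*rotate_in_plane(s1x, s1y, phase))

# The two conventions differ by exactly the rotation angle (mod 2*pi),
# enough to turn a structured posterior into an apparently shifted one.
print(phi1_a, phi1_b, (phi1_b - phi1_a) % (2 * np.pi))
```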
Paul: we'll continue offline. Question for reviewers: how do you want to proceed with reviewing the known events?
Simon: Most of the things I've looked at look good. It really is just these obvious discrepancies - we need to ensure we understand where they are coming from. If we are happy with those, I'd be happy with everything else. I have only looked at the 150914 one so far, but there is no reason to think the others won't look okay.
Paul: Suggestion: offline we will sort out the "big discrepancy" with phi1 and phi2 and email you both. We have deliberately run only 150914 to get things settled. Let's first tick that off (once we have resolved the "big discrepancy"), then move on to the other events. Runs are nearing completion for 151226 and 170814, then 170817. Short-term action item: resolve the spins question.
Simon: looks like a good plan going forwards. If everything else looks as good as the runs you have shown today we should be able to sign things off reasonably easily.
Paul: just to flag that once we are satisfied with the known event comparisons done to date, and specifically 170817, then we will be asking the PE chairs about using bilby for 190425.
Paul: AOB? Very productive, nice number of ticks and good progress!
Chat
<08:02:44> "Paul Lasky": https://git.ligo.org/lscsoft/bilby_pipe/wikis/O3-review/minutes/190807
<08:03:03> "Greg Ashton [h]": I can take minutes if you like (since I failed to do so last week)
<08:04:27> "Paul Lasky": https://git.ligo.org/lscsoft/bilby_pipe/wikis/O3-review/minutes/190807
<08:07:55> "Greg Ashton [h]": Hand up
<08:09:39> "Colm Talbot": Hand up
<08:14:54> "Greg Ashton [h]": https://ldas-jobs.ligo.caltech.edu/~gregory.ashton/bilby_review/bilby0.5.4_bilby_pipe0.2.2/outdir_fiducial_bbh_128s_ROQ/result/fiducial_bbh_128s_ROQ_190731_1613_combined_1d/
<08:17:59> "Paul Lasky": https://git.ligo.org/lscsoft/bilby_pipe/wikis/O3-review/PP-tests
<08:30:16> "Paul Lasky": hand up
<08:30:37> "Paul Lasky": hand down
<08:33:59> "Paul Lasky": https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/home.html
<08:34:44> "Sylvia Biscoveanu": Sorry have to run, bye!
<08:34:46> "Charlie Hoy": Yes, but not comparison to Nest vs MCMC
<08:35:40> "Charlie Hoy": I can walk through if you would like?
<08:36:15> "Charlie Hoy": https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/html/Comparison_chi_p.html
<08:36:30> "Paul Lasky": hand up
<08:36:32> "Charlie Hoy": https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/html/Comparison_luminosity_distance.html
<08:37:16> "Colm Talbot": +1 Paul
<08:37:24> "Colm Talbot": Also the LI distance looks bugged to me
<08:37:54> "Charlie Hoy": ohh yes, so these samples do not have the distance fix. I think John has only done the distance bug fix for the combined samples
<08:37:58> "Charlie Hoy": not for individual runs
<08:38:14> "Simon Stevenson": OK that's good to keep in mind
<08:39:34> "Greg Ashton [h]": Hand up on this
<08:39:42> "Simon Stevenson": That's ok, just wondered if you'd gotten to the bottom of it
<08:41:42> "Charlie Hoy": That makes sense
<08:42:48> "Colm Talbot": Hand up at some stage
<08:42:54> "Charlie Hoy": go ahead
<08:43:46> "Charlie Hoy": awesome. That makes sense. Thanks
<08:43:58> "Colm Talbot": https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/html/Comparison_network_matched_filter_snr.html
<08:44:20> "Charlie Hoy": yes, I noticed this last week. I have been meaning to get around to fix this
<08:45:12> "Charlie Hoy": https://ldas-jobs.ligo.caltech.edu/~charlie.hoy/pe/bilby_lalinference_comparison/GW151226/html/Comparison_phi_1.html
<08:45:29> "Charlie Hoy": https://ldas-jobs.ligo.caltech.edu/~cbc/pe/bilby_lalinference_comparison/GW150914/nest_vs_mcmc_vs_bilby/html/Comparison_phi_1.html
<08:46:02> "Simon Stevenson": Yeah Bilby is just uniform, there is structure to the LALInf posteriors
<08:46:05> "Charlie Hoy": https://git.ligo.org/lscsoft/lalsuite/blob/master/lalinference/python/lalinference/bayespputils.py#L3879
<08:46:11> "Charlie Hoy": https://git.ligo.org/lscsoft/lalsuite/blob/master/lalinference/python/lalinference/bayespputils.py#L3499
<08:46:27> "Charlie Hoy": https://git.ligo.org/lscsoft/bilby/blob/master/bilby/gw/conversion.py#L865
<08:47:35> "Charlie Hoy": ohh possibly
<08:47:39> "Charlie Hoy": we should try this for the O3 run
<08:47:48> "Simon Stevenson": That sounds plausible
<08:47:51> "Charlie Hoy": Thanks
<08:48:45> "Charlie Hoy": Other comparisons can be seen here:
<08:48:45> "Charlie Hoy": https://git.ligo.org/lscsoft/bilby_pipe/wikis/O3-review/known-events
<08:49:13> "Greg Ashton [h]": Can we look at that now?
<08:49:14> "Charlie Hoy": Bear in mind that these do not have the distance fix. I can speak to John to see if he fixed it for the individual runs or only for the combined.
<08:49:30> "Colm Talbot": https://ldas-jobs.ligo.caltech.edu/~colm.talbot/bilby_review/0.5.3/190521r/EXP0/comparison/html/Comparison.html
<08:54:37> "Charlie Hoy": Just out of curiosity, what is the difference in run time between bilby and lalinference for GW150914 for example?
<08:55:32> "Simon Stevenson": BTW, I really need to leave at 9
<08:57:31> "Simon Stevenson": See ya!
<08:57:36> "Greg Ashton [h]": Cheers