Use the `PriorSet` from the first step of PE as the `sampling_prior` for hyper-pe

This addresses the recently closed issue #196 (closed). The problem was that the `sampling_prior` argument to the hyper-pe likelihood could only be a regular function or a `hyper.model`, so if you already had the `.prior` file from the first step of PE using bilby, you had to re-code the prior in a different way. I've added functionality so that you can pass the `bilby.core.prior` object loaded from the prior file directly to the hyper-pe likelihood, or, if you already have the `log_prior` stored as a column in your posterior samples, you don't need to pass a `sampling_prior` at all. This also makes the hyper-pe likelihood calculation more efficient, since the `data` attribute now stores the prior for each of the posterior samples, so it doesn't need to be recalculated every time the likelihood is called. I have tested the three cases below, and they all return the same value of the hyper-pe likelihood; a minimal sketch follows the list.

  1. The old style, with a regular function as the `sampling_prior`
  2. Including the `log_prior` column in the posterior samples and not passing a `sampling_prior`
  3. Passing the `bilby.core.prior.PriorDict()` object from the first step of PE as the `sampling_prior`
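A minimal sketch of the three cases, assuming the hyper-pe likelihood is `bilby.hyper.likelihood.HyperparameterLikelihood` with the `posteriors`, `hyper_prior`, and `sampling_prior` keywords described above; the toy posterior samples, the `mass` parameter, and the power-law `hyper_prior` model are illustrative placeholders, not part of this MR:

```python
# Sketch only: placeholder data and models, assumed keyword names.
import numpy as np
import pandas as pd
import bilby
from bilby.hyper.likelihood import HyperparameterLikelihood

# Placeholder single-event posteriors with a uniform prior on mass in [5, 50].
posteriors = [
    pd.DataFrame(dict(mass=np.random.uniform(5, 50, 1000))) for _ in range(10)
]

def hyper_prior(dataset, alpha):
    """Illustrative power-law population model."""
    return dataset["mass"] ** alpha

# 1. Old style: a regular function evaluating the single-event sampling prior.
def sampling_prior(dataset):
    return 1 / 45  # uniform prior on mass in [5, 50]

likelihood_1 = HyperparameterLikelihood(
    posteriors=posteriors, hyper_prior=hyper_prior,
    sampling_prior=sampling_prior)

# 2. Store log_prior as a column in each posterior; no sampling_prior needed.
for post in posteriors:
    post["log_prior"] = -np.log(45)
likelihood_2 = HyperparameterLikelihood(
    posteriors=posteriors, hyper_prior=hyper_prior)

# 3. Pass the PriorDict loaded from the first-stage PE .prior file directly.
priors = bilby.core.prior.PriorDict(
    dict(mass=bilby.core.prior.Uniform(5, 50, name="mass")))
likelihood_3 = HyperparameterLikelihood(
    posteriors=posteriors, hyper_prior=hyper_prior,
    sampling_prior=priors)

# All three should return the same log-likelihood for a given hyper-parameter.
for likelihood in [likelihood_1, likelihood_2, likelihood_3]:
    likelihood.parameters.update(dict(alpha=-2))
    print(likelihood.log_likelihood())
```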
