Adds a generic reweighting method for arbitrary likelihood/priors
This method implements generic reweighting given a new likelihood and new priors. Test examples are described in the MR comments.
As an example, I ran a case where the likelihood didn't vary, but the priors did:

Prior A: Uniform
Prior B: Power Law (alpha=4)

I then reweighted the A posterior to the B prior using the default settings. This is the comparison figure (here C is the reweighted posterior).
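Conceptually, reweighting a posterior from prior A to prior B with an unchanged likelihood amounts to attaching importance weights w_i ∝ π_B(θ_i)/π_A(θ_i) to the posterior samples. A minimal sketch of that idea (illustrative only, not bilby's `reweight` implementation; `prior_reweight` and the toy densities are made up for this example):

```python
import numpy as np


def prior_reweight(samples, log_prior_old, log_prior_new):
    """Importance weights taking samples drawn under an old prior
    to a new prior, normalised to sum to one (hypothetical helper)."""
    log_w = log_prior_new(samples) - log_prior_old(samples)
    log_w -= log_w.max()  # stabilise before exponentiating
    w = np.exp(log_w)
    return w / w.sum()


# Toy check mirroring the example above: uniform prior on [0, 2]
# reweighted to a power law p(x) proportional to x**4 on [0, 2].
rng = np.random.default_rng(0)
samples = rng.uniform(0, 2, size=100_000)
log_uniform = lambda x: np.zeros_like(x)   # constant log-density
log_powerlaw = lambda x: 4 * np.log(x)     # log-density up to a constant

w = prior_reweight(samples, log_uniform, log_powerlaw)
# Mean of the reweighted samples; analytically 5/3 for this target
reweighted_mean = np.sum(w * samples)
```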
It is worth noting that if `fraction=1`, the reweighting completely fails. This is worrisome and led me to add a note to the docstring warning about potential failures. The script, for future reference:
```python
#!/usr/bin/env python
"""
An example of how to use bilby to perform parameter estimation for
non-gravitational wave data consisting of a Gaussian with a mean and variance
"""
from __future__ import division
import bilby
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

# A few simple setup steps
label = 'gaussian-example'
outdir = 'outdir'

# Here is the minimum requirement for a Likelihood class to run with bilby.
# In this case, we set up a GaussianLikelihood, which needs to have a
# log_likelihood method. Note, in this case we will NOT make use of the
# `bilby` waveform_generator to make the signal.

# Making simulated data: in this case, we consider just a Gaussian
sigma = 1
data = np.random.normal(1, sigma, 100)


class SimpleGaussianLikelihood(bilby.Likelihood):
    def __init__(self, data):
        """
        A very simple Gaussian likelihood

        Parameters
        ----------
        data: array_like
            The data to analyse
        """
        super().__init__(parameters={'mu': None, 'sigma': None})
        self.data = data
        self.N = len(data)

    def log_likelihood(self):
        mu = self.parameters['mu']
        sigma = self.parameters['sigma']
        res = self.data - mu
        return -0.5 * (np.sum((res / sigma)**2) +
                       self.N * np.log(2 * np.pi * sigma**2))


likelihood = SimpleGaussianLikelihood(data)

priorsA = dict(
    mu=bilby.core.prior.PowerLaw(alpha=0, minimum=0, maximum=2, name='mu'),
    sigma=sigma)
priorsB = dict(
    mu=bilby.core.prior.PowerLaw(alpha=4, minimum=0, maximum=2, name='mu'),
    sigma=sigma)
priorsB = bilby.core.prior.PriorDict(priorsB)

# And run sampler
resultA = bilby.run_sampler(
    likelihood=likelihood, priors=priorsA, sampler='dynesty', npoints=5000,
    walks=10, outdir=outdir, label=label + "-A")
resultA.plot_corner(priors=True)

resultB = bilby.run_sampler(
    likelihood=likelihood, priors=priorsB, sampler='dynesty', npoints=5000,
    walks=10, outdir=outdir, label=label + "-B")
resultB.plot_corner(priors=True)

resultC = bilby.core.result.reweight(resultA, new_prior=priorsB)

xx = np.linspace(0.8, 1.5, 1000)
# Loop variable renamed from `label` to `name` to avoid shadowing the
# run label defined above
for res, name, ls in zip([resultA, resultB, resultC],
                         ["A", "B", "C"], ["-", "--", ":"]):
    kde = gaussian_kde(res.posterior["mu"])
    plt.plot(xx, kde(xx), label=name, ls=ls)
plt.legend()
plt.savefig("test")
```
added 1 commit
- 725beea3 - Move weights calculation to separate function
mentioned in merge request !655 (closed)
added 1 commit
- 8ed6875b - Move Kish effective sample size calculation to separate function
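The Kish effective sample size referenced in this commit is the standard diagnostic for how degenerate a set of importance weights is. A minimal sketch of the formula (an illustration of the quantity, not the refactored bilby function):

```python
import numpy as np


def kish_effective_sample_size(weights):
    """Kish ESS: (sum w)^2 / sum(w^2). Equals N for equal weights and
    approaches 1 when a single weight dominates."""
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights) ** 2 / np.sum(weights ** 2)


ess_equal = kish_effective_sample_size(np.ones(1000))
ess_skewed = kish_effective_sample_size([1.0] + [1e-6] * 999)
```

With equal weights the ESS recovers the full sample count (1000 here), while the skewed case collapses to roughly one effective sample, which is the failure mode flagged in the `fraction=1` discussion above.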
changed milestone to %0.6.8
- Resolved by Gregory Ashton
mentioned in merge request !778 (merged)
- Resolved by Gregory Ashton
Hey @gregory.ashton, am I missing something? In your script, prior A is also a power law, and exactly the same as prior B? This is a really cool feature btw
```python
priorsA = dict(
    mu=bilby.core.prior.PowerLaw(alpha=0, minimum=0, maximum=2, name='mu'),
    sigma=sigma)
priorsB = dict(
    mu=bilby.core.prior.PowerLaw(alpha=4, minimum=0, maximum=2, name='mu'),
    sigma=sigma)
```
Edited by Nikhil Sarin
added 1 commit
- b44a9918 - Update to using rejection sampling for reweighting
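Rejection sampling, as adopted in this commit, replaces weighted samples with an unweighted subset: each sample is kept with probability w_i / max(w). A hedged sketch of that general technique (`rejection_reweight` is an illustrative name, not the bilby code):

```python
import numpy as np


def rejection_reweight(samples, weights, rng):
    """Keep each sample with probability weights / max(weights),
    returning an unweighted subset drawn from the new target."""
    weights = np.asarray(weights, dtype=float)
    keep = rng.uniform(size=len(samples)) < weights / weights.max()
    return samples[keep]


# Toy check: uniform draws on [0, 2] reweighted to p(x) proportional to x**4
rng = np.random.default_rng(1)
samples = rng.uniform(0, 2, size=200_000)
weights = samples ** 4
kept = rejection_reweight(samples, weights, rng)
```

The cost is sample count: only a fraction of draws survive (roughly the mean weight over the maximum weight), which is presumably what the `fraction` setting discussed above controls.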
- Resolved by Matthew Pitkin
added 1 commit
- efd886c2 - Improve numerical stability of evidence shift
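A log-evidence shift under reweighting involves an average of exponentiated log-weights, which overflows if computed naively; the standard cure is the log-sum-exp trick. A sketch of that trick under the assumption that this is the kind of stabilisation the commit refers to (the function name is illustrative):

```python
import numpy as np


def ln_mean_weight(ln_weights):
    """ln(mean(exp(ln_weights))) computed stably: subtract the maximum
    before exponentiating so no term overflows (log-sum-exp trick)."""
    ln_weights = np.asarray(ln_weights, dtype=float)
    m = ln_weights.max()
    return m + np.log(np.mean(np.exp(ln_weights - m)))


# np.exp(1000.0) alone would overflow to inf; the stable version is exact
shift = ln_mean_weight([1000.0, 1000.0, 1000.0])  # -> 1000.0
```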
added 23 commits
- efd886c2...8f9caa5e - 22 commits from branch master
- a7cf1206 - Merge branch 'master' into new-reweighting-method
mentioned in commit 07892848