
Adds a generic reweighting method for arbitrary likelihood/priors

Merged Gregory Ashton requested to merge new-reweighting-method into master
All threads resolved!

This method seeks to implement generic reweighting given a new likelihood and priors. Test examples are described in the MR comments.

Merge request reports


Activity

  • As a test, I ran an example where the likelihood didn't vary, but the priors did.

    Prior A: Uniform

    [Figure: gaussian-example-A corner plot]

    Prior B: Power Law (alpha=4)

    [Figure: gaussian-example-B corner plot]

    I then reweighted the A posterior to the B prior using the default settings. This is the comparison figure (here C is the reweighted posterior):

    [Figure: comparison of posteriors A, B, and the reweighted posterior C]

    It is worth noting that if fraction=1, the reweighting fails completely. This is worrisome, so I have added a note to the docstring warning about potential failures.
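The prior-to-prior reweighting being tested above can be sketched as importance weighting with the prior ratio. The following is a minimal standalone illustration, not bilby's implementation: the function name and the toy log-prior callables are hypothetical, and the weights are w_i = π_new(x_i) / π_old(x_i), stabilised in log-space before exponentiating.

```python
import numpy as np

def prior_reweight_weights(samples, ln_prob_old, ln_prob_new):
    """Importance weights for moving samples drawn under an old prior
    to a new prior: w_i proportional to pi_new(x_i) / pi_old(x_i).

    samples: 1D array of posterior samples
    ln_prob_old, ln_prob_new: callables returning the log prior density
    """
    ln_w = np.array([ln_prob_new(x) - ln_prob_old(x) for x in samples])
    ln_w -= ln_w.max()  # stabilise before exponentiating
    return np.exp(ln_w)

# Toy check mirroring the test above: uniform prior on [0, 2]
# versus a normalised power law p(x) = 5 x^4 / 2^5 on [0, 2]
rng = np.random.default_rng(42)
samples = rng.uniform(0, 2, 1000)
ln_uniform = lambda x: -np.log(2.0)
ln_powerlaw = lambda x: 4 * np.log(x) + np.log(5.0 / 2.0**5)
w = prior_reweight_weights(samples, ln_uniform, ln_powerlaw)
```

The weights are only defined up to a constant, so dividing by the maximum is harmless and avoids overflow for steep prior ratios.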

  • The script for future reference:

    #!/usr/bin/env python
    """
    An example of how to use bilby to perform parameter estimation for
    non-gravitational wave data consisting of a Gaussian with a mean and variance.
    """
    from __future__ import division
    import bilby
    import numpy as np
    from scipy.stats import gaussian_kde
    import matplotlib.pyplot as plt
    
    # A few simple setup steps
    label = 'gaussian-example'
    outdir = 'outdir'
    
    # Here is the minimum requirement for a Likelihood class to run with bilby. In this
    # case, we setup a GaussianLikelihood, which needs to have a log_likelihood
    # method. Note, in this case we will NOT make use of the `bilby`
    # waveform_generator to make the signal.
    
    # Making simulated data: in this case, we consider just a Gaussian
    
    sigma = 1
    data = np.random.normal(1, sigma, 100)
    
    
    class SimpleGaussianLikelihood(bilby.Likelihood):
        def __init__(self, data):
            """
            A very simple Gaussian likelihood
    
            Parameters
            ----------
            data: array_like
                The data to analyse
            """
            super().__init__(parameters={'mu': None, 'sigma': None})
            self.data = data
            self.N = len(data)
    
        def log_likelihood(self):
            mu = self.parameters['mu']
            sigma = self.parameters['sigma']
            res = self.data - mu
            return -0.5 * (np.sum((res / sigma)**2) +
                           self.N * np.log(2 * np.pi * sigma**2))
    
    
    likelihood = SimpleGaussianLikelihood(data)
    priorsA = dict(
        mu=bilby.core.prior.PowerLaw(alpha=0, minimum=0, maximum=2, name='mu'),
        sigma=sigma)
    
    priorsB = dict(
        mu=bilby.core.prior.PowerLaw(alpha=4, minimum=0, maximum=2, name='mu'),
        sigma=sigma)
    
    priorsB = bilby.core.prior.PriorDict(priorsB)
    
    # And run sampler
    resultA = bilby.run_sampler(
        likelihood=likelihood, priors=priorsA, sampler='dynesty', npoints=5000,
        walks=10, outdir=outdir, label=label + "-A")
    resultA.plot_corner(priors=True)
    resultB = bilby.run_sampler(
        likelihood=likelihood, priors=priorsB, sampler='dynesty', npoints=5000,
        walks=10, outdir=outdir, label=label + "-B")
    resultB.plot_corner(priors=True)
    
    resultC = bilby.core.result.reweight(resultA, new_prior=priorsB)
    
    xx = np.linspace(0.8, 1.5, 1000)
    for res, name, ls in zip([resultA, resultB, resultC], ["A", "B", "C"], ["-", "--", ":"]):
        kde = gaussian_kde(res.posterior["mu"])
        plt.plot(xx, kde(xx), label=name, ls=ls)
    
    plt.legend()
    plt.savefig("test")
  • Gregory Ashton added 1 commit

    • 725beea3 - Move weights calculation to separate function

  • Gregory Ashton mentioned in merge request !655 (closed)

  • Gregory Ashton added 1 commit

    • 8ed6875b - Move Kish effective sample size calculation to separate function

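For reference, the Kish effective sample size named in the commit above has a one-line formula: n_eff = (Σ w_i)² / Σ w_i². A minimal sketch (a hypothetical standalone function, not bilby's internal code):

```python
import numpy as np

def kish_effective_sample_size(weights):
    """Kish effective sample size: (sum w)^2 / sum(w^2).

    Equal weights recover the nominal sample size; a single
    dominant weight collapses it towards 1.
    """
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights) ** 2 / np.sum(weights ** 2)
```

This is the standard diagnostic for how badly importance weights have degenerated, which is presumably why it is factored out here: a small n_eff signals that the reweighting is unreliable.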
  • Gregory Ashton changed milestone to %0.6.8

  • Colm Talbot mentioned in merge request !778 (merged)

    • Resolved by Gregory Ashton

      hey @gregory.ashton am I missing something? in your script prior A is also a power law and exactly the same as prior B? This is a really cool feature btw

      priorsA = dict(
          mu=bilby.core.prior.PowerLaw(alpha=0, minimum=0, maximum=2, name='mu'),
          sigma=sigma)
      
      priorsB = dict(
          mu=bilby.core.prior.PowerLaw(alpha=4, minimum=0, maximum=2, name='mu'),
          sigma=sigma)
      Edited by Nikhil Sarin
  • Gregory Ashton added 1 commit

    • b44a9918 - Update to using rejection sampling for reweighting

  • Gregory Ashton resolved all threads

  • I updated to rejection sampling only. It is much better and doesn't need tuning, so I've removed the option for reweighting in place. I ran my test script again and got this plot:

    [Figure: comparison of the posteriors after the rejection-sampling update]

    Here B is nicely reweighted to the C prior without tuning parameters :D
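The rejection-sampling approach adopted here can be sketched as follows: each sample is kept with probability w_i / max(w), so the survivors are unweighted draws from the target distribution and no tuning fraction is needed. A toy illustration (hypothetical function name, not bilby's implementation):

```python
import numpy as np

def rejection_reweight(samples, ln_weights, rng=None):
    """Keep each sample with probability w_i / max(w); the survivors
    are unweighted draws from the reweighted (target) distribution."""
    rng = np.random.default_rng(rng)
    ln_weights = np.asarray(ln_weights, dtype=float)
    keep_prob = np.exp(ln_weights - ln_weights.max())
    mask = rng.uniform(size=len(samples)) < keep_prob
    return np.asarray(samples)[mask]

# Draws under a uniform prior on [0, 2], reweighted to p(x) ~ x^4:
rng = np.random.default_rng(0)
samples = rng.uniform(0, 2, 50000)
ln_w = 4 * np.log(samples)
reweighted = rejection_reweight(samples, ln_w, rng=1)
```

The cost is throwing samples away (the expected acceptance rate is mean(w)/max(w)), but in exchange there is no free parameter to tune, which matches the behaviour reported above.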

  • Gregory Ashton added 1 commit

    • efd886c2 - Improve numerical stability of evidence shift

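The numerical-stability improvement named in the commit above is presumably the usual log-space trick: evaluate the log evidence shift ln(Z_new/Z_old) = ln((1/n) Σ w_i) by subtracting the maximum log-weight before exponentiating, so huge log-weights cannot overflow. A sketch under that assumption (not bilby's actual code):

```python
import numpy as np

def ln_evidence_shift(ln_weights):
    """Log of the mean importance weight, ln((1/n) * sum(w_i)),
    computed in log-space so extreme ln(w) values stay finite."""
    ln_weights = np.asarray(ln_weights, dtype=float)
    m = ln_weights.max()
    return m + np.log(np.mean(np.exp(ln_weights - m)))

# A naive np.log(np.mean(np.exp(ln_w))) overflows for ln_w ~ 1000;
# the shifted version returns the exact answer, 1000.0 here.
ln_w = np.array([1000.0, 1000.0, 1000.0])
shift = ln_evidence_shift(ln_w)
```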
  • Gregory Ashton added 23 commits

  • Matthew Pitkin resolved all threads

  • Matthew Pitkin approved this merge request

  • Colm Talbot approved this merge request

  • Gregory Ashton mentioned in commit 07892848
