
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (112)
Showing 1710 additions and 1087 deletions
......@@ -51,7 +51,7 @@ python-3.7:
- flake8 .
# Run tests and collect coverage data
- pytest --cov=bilby
- pytest --cov=bilby --durations 10
- coverage html
- coverage-badge -o coverage_badge.svg -f
......@@ -66,6 +66,15 @@ python-3.7:
- coverage_badge.svg
- docs/_build/html/
# test example on python 3.8
python-3.8:
stage: test
image: bilbydev/v2-dockerfile-test-suite-python38
script:
- python -m pip install .
- pytest
# test example on python 3.6
python-3.6:
stage: test
......@@ -75,10 +84,29 @@ python-3.6:
- pytest
# test samplers on python 3.7
python-3.7-samplers:
stage: test
image: bilbydev/v2-dockerfile-test-suite-python37
script:
- python -m pip install .
- pytest test/sampler_test.py --durations 10
- pytest test/sample_from_the_prior_test.py
# test samplers on python 3.6
python-3.6-samplers:
stage: test
image: bilbydev/v2-dockerfile-test-suite-python36
script:
- python -m pip install .
- pytest test/sampler_test.py
# Tests run at a fixed schedule rather than on push
scheduled-python-3.7:
stage: test
image: bilbydev/bilby-test-suite-python37
image: bilbydev/v2-dockerfile-test-suite-python37
only:
- schedules
script:
......@@ -89,6 +117,17 @@ scheduled-python-3.7:
- pytest test/gw_example_test.py
- pytest test/sample_from_the_prior_test.py
plotting:
stage: test
image: bilbydev/bilby-test-suite-python37
only:
- schedules
script:
- python -m pip install .
- python -m pip install ligo.skymap
- pytest test/gw_plot_test.py
pages:
stage: deploy
dependencies:
......@@ -104,3 +143,17 @@ pages:
expire_in: 30 days
only:
- master
deploy_release:
stage: deploy
image: bilbydev/v2-dockerfile-test-suite-python37
variables:
TWINE_USERNAME: $PYPI_USERNAME
TWINE_PASSWORD: $PYPI_PASSWORD
before_script:
- pip install twine
- python setup.py sdist
script:
- twine upload dist/*
only:
- tags
# All notable changes will be documented in this file
## [0.6.7] 2020-04-15
### Changes
- Allow dynesty to run with multiprocessing (!754)
- Rewrite ptemcee implementation (!750)
- Change 'source frame' to 'detector frame' in L34-35 of compare_samplers tutorial (!745)
- Allow lal dictionary to be passed through to '_base_lal_cbc_fd_waveform' (!752)
## [0.6.6] 2020-03-06
### Changes
- Fix bug where injected values are not present for corner plot (!749)
- Significant backwards-incompatible improvements to `dynesty` checkpointing (!746)
- Improve checkpoint interval calculation with `dynesty` (!741)
- Fix reading of `PriorDict` class from result file (!739)
- Fix definition of time for time-domain `lalsimulation` waveforms (!736)
- LaTeX text formatting for plots by default (!702)
### Added
- Normalisation dynamically computed when using prior constraints (!704)
## [0.6.5] 2020-02-14
### Changes
- Fix for time reconstruction bug (!714)
- Resolved 'Waveform longer than the frequency array' errors (!710)
- Prior reading clean-up (!715)
- More efficient dynesty restarting (!713)
- PP tests show 1-2-3 sigma bounds by default (!726)
### Added
- HealPixPrior (!651)
- GW prior documentation (!720)
- Multiple contours to PP tests plots (!721)
- Distance marginalization for non-luminosity-distance parameters (!719)
### Removed
- Pipenv (!724)
## [0.6.4] 2020-01-30
### Changes
- Discontinue python2.7 support (!697)
......
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
future = "*"
corner = "*"
numpy = "==1.15.2"
ligotimegps = "<=1.2.3"
matplotlib = "<3"
scipy = ">=0.16"
pandas = "==0.23.0"
deepdish = "==0.3.6"
mock = "*"
astropy = "<3"
gwpy = "*"
theano = "*"
lalsuite = "*"
dill = "*"
# cpnest = "*"
dynesty = "*"
emcee = "*"
nestle = "*"
ptemcee = "*"
pymc3 = "*"
[requires]
[dev-packages]
[pipenv]
allow_prereleases = false
This diff is collapsed.
......@@ -3,6 +3,7 @@ import copy
import numpy as np
from scipy.special import gammaln
from scipy.stats import multivariate_normal
from .utils import infer_parameters_from_function
......@@ -401,6 +402,69 @@ class StudentTLikelihood(Analytical1DLikelihood):
self._nu = nu
class AnalyticalMultidimensionalCovariantGaussian(Likelihood):
"""
A multivariate Gaussian likelihood
with known analytic solution.
Parameters
----------
mean: array_like
Array with the mean values of the distribution
cov: array_like
The ndim*ndim covariance matrix
"""
def __init__(self, mean, cov):
self.cov = np.atleast_2d(cov)
self.mean = np.atleast_1d(mean)
self.sigma = np.sqrt(np.diag(self.cov))
self.pdf = multivariate_normal(mean=self.mean, cov=self.cov)
parameters = {"x{0}".format(i): 0 for i in range(self.dim)}
super(AnalyticalMultidimensionalCovariantGaussian, self).__init__(parameters=parameters)
@property
def dim(self):
return len(self.cov[0])
def log_likelihood(self):
x = np.array([self.parameters["x{0}".format(i)] for i in range(self.dim)])
return self.pdf.logpdf(x)
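For orientation, a minimal usage sketch of this likelihood (the dimensions and parameter values below are illustrative, not taken from the diff):
import numpy as np
# Two-dimensional unit-covariance Gaussian centred at the origin.
likelihood = AnalyticalMultidimensionalCovariantGaussian(mean=[0.0, 0.0], cov=np.eye(2))
# A sampler would write candidate points into `parameters`; here we set them by hand.
likelihood.parameters = {"x0": 0.5, "x1": -0.3}
print(likelihood.log_likelihood())  # log-density of N(0, I) evaluated at (0.5, -0.3)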
class AnalyticalMultidimensionalBimodalCovariantGaussian(Likelihood):
"""
A bimodal multivariate Gaussian likelihood
with known analytic solution.
Parameters
----------
mean_1: array_like
Array with the mean value of the first mode
mean_2: array_like
Array with the mean value of the second mode
cov: array_like
The ndim*ndim covariance matrix, shared by both modes
"""
def __init__(self, mean_1, mean_2, cov):
self.cov = np.atleast_2d(cov)
self.sigma = np.sqrt(np.diag(self.cov))
self.mean_1 = np.atleast_1d(mean_1)
self.mean_2 = np.atleast_1d(mean_2)
self.pdf_1 = multivariate_normal(mean=self.mean_1, cov=self.cov)
self.pdf_2 = multivariate_normal(mean=self.mean_2, cov=self.cov)
parameters = {"x{0}".format(i): 0 for i in range(self.dim)}
super(AnalyticalMultidimensionalBimodalCovariantGaussian, self).__init__(parameters=parameters)
@property
def dim(self):
return len(self.cov[0])
def log_likelihood(self):
x = np.array([self.parameters["x{0}".format(i)] for i in range(self.dim)])
return -np.log(2) + np.logaddexp(self.pdf_1.logpdf(x), self.pdf_2.logpdf(x))
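The bimodal variant averages the densities of the two modes; a minimal sketch under the same illustrative conventions as above:
import numpy as np
# One-dimensional mixture with modes at -2 and +2 and unit variance.
likelihood = AnalyticalMultidimensionalBimodalCovariantGaussian(
    mean_1=[-2.0], mean_2=[2.0], cov=[[1.0]])
likelihood.parameters = {"x0": 0.0}
# logaddexp of the two mode log-densities, shifted by -log(2) to normalise the mixture.
print(likelihood.log_likelihood())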
class JointLikelihood(Likelihood):
def __init__(self, *likelihoods):
"""
......
from importlib import import_module
from io import open as ioopen
import json
import numpy as np
import os
from future.utils import iteritems
from matplotlib.cbook import flatten
import numpy as np
# keep 'import *' to make eval() statement further down work consistently
from bilby.core.prior.analytical import * # noqa
from bilby.core.prior.analytical import DeltaFunction
from bilby.core.prior.base import Prior, Constraint
from bilby.core.prior.joint import JointPrior
......@@ -41,6 +39,7 @@ class PriorDict(dict):
self.from_file(filename)
elif dictionary is not None:
raise ValueError("PriorDict input dictionary not understood")
self._cached_normalizations = {}
self.convert_floats_to_delta_functions()
......@@ -141,7 +140,6 @@ class PriorDict(dict):
comments = ['#', '\n']
prior = dict()
mvgdict = dict(inf=np.inf) # evaluate inf as np.inf
with ioopen(filename, 'r', encoding='unicode_escape') as f:
for line in f:
if line[0] in comments:
......@@ -150,39 +148,8 @@ class PriorDict(dict):
elements = line.split('=')
key = elements[0].replace(' ', '')
val = '='.join(elements[1:]).strip()
cls = val.split('(')[0]
args = '('.join(val.split('(')[1:])[:-1]
try:
prior[key] = DeltaFunction(peak=float(cls))
logger.debug("{} converted to DeltaFunction prior".format(
key))
continue
except ValueError:
pass
if "." in cls:
module = '.'.join(cls.split('.')[:-1])
cls = cls.split('.')[-1]
else:
module = __name__.replace('.' + os.path.basename(__file__).replace('.py', ''), '')
cls = getattr(import_module(module), cls, cls)
if key.lower() in ["conversion_function", "condition_func"]:
setattr(self, key, cls)
elif (cls.__name__ in ['MultivariateGaussianDist',
'MultivariateNormalDist']):
if key not in mvgdict:
mvgdict[key] = eval(val, None, mvgdict)
elif (cls.__name__ in ['MultivariateGaussian',
'MultivariateNormal']):
prior[key] = eval(val, None, mvgdict)
else:
try:
prior[key] = cls.from_repr(args)
except TypeError as e:
raise TypeError(
"Unable to parse dictionary file {}, bad line: {} "
"= {}. Error message {}".format(
filename, key, val, e))
self.update(prior)
prior[key] = val
self.from_dictionary(prior)
@classmethod
def _get_from_json_dict(cls, prior_dict):
......@@ -217,22 +184,61 @@ class PriorDict(dict):
return obj
def from_dictionary(self, dictionary):
eval_dict = dict(inf=np.inf)
for key, val in iteritems(dictionary):
if isinstance(val, str):
if isinstance(val, Prior):
continue
elif isinstance(val, (int, float)):
dictionary[key] = DeltaFunction(peak=val)
elif isinstance(val, str):
cls = val.split('(')[0]
args = '('.join(val.split('(')[1:])[:-1]
try:
prior = eval(val)
if isinstance(prior, (Prior, float, int, str)):
val = prior
except (NameError, SyntaxError, TypeError):
logger.debug(
"Failed to load dictionary value {} correctly"
.format(key))
dictionary[key] = DeltaFunction(peak=float(cls))
logger.debug("{} converted to DeltaFunction prior".format(key))
continue
except ValueError:
pass
if "." in cls:
module = '.'.join(cls.split('.')[:-1])
cls = cls.split('.')[-1]
else:
module = __name__.replace(
'.' + os.path.basename(__file__).replace('.py', ''), ''
)
cls = getattr(import_module(module), cls, cls)
if key.lower() in ["conversion_function", "condition_func"]:
setattr(self, key, cls)
elif isinstance(cls, str):
if "(" in val:
raise TypeError("Unable to parse prior class {}".format(cls))
else:
continue
elif (cls.__name__ in ['MultivariateGaussianDist',
'MultivariateNormalDist']):
if key not in eval_dict:
eval_dict[key] = eval(val, None, eval_dict)
elif (cls.__name__ in ['MultivariateGaussian',
'MultivariateNormal']):
dictionary[key] = eval(val, None, eval_dict)
else:
try:
dictionary[key] = cls.from_repr(args)
except TypeError as e:
raise TypeError(
"Unable to parse prior, bad entry: {} "
"= {}. Error message {}".format(key, val, e)
)
elif isinstance(val, dict):
logger.warning(
'Cannot convert {} into a prior object. '
'Leaving as dictionary.'.format(key))
self[key] = val
else:
raise TypeError(
"Unable to parse prior, bad entry: {} "
"= {} of type {}".format(key, val, type(val))
)
self.update(dictionary)
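A sketch of the kinds of entries from_dictionary now accepts (the parameter names and prior arguments are hypothetical):
priors = PriorDict()
priors.from_dictionary({
    # A string is resolved to a prior class and parsed via from_repr.
    "mass_1": "Uniform(minimum=5, maximum=50, name='mass_1')",
    # A bare number becomes a DeltaFunction prior.
    "phase": 0.0,
})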
def convert_floats_to_delta_functions(self):
""" Convert all float parameters to delta functions """
......@@ -306,6 +312,26 @@ class PriorDict(dict):
"""
return self.sample_subset_constrained(keys=list(self.keys()), size=size)
def sample_subset_constrained_as_array(self, keys=iter([]), size=None):
""" Return an array of samples
Parameters
----------
keys: list
A list of keys to sample in
size: int
The number of samples to draw
Returns
-------
array: array_like
An array of shape (len(keys), size) of the samples (ordered by keys)
"""
samples_dict = self.sample_subset_constrained(keys=keys, size=size)
samples_dict = {key: np.atleast_1d(val) for key, val in samples_dict.items()}
samples_list = [samples_dict[key] for key in keys]
return np.array(samples_list)
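A usage sketch (the `priors` object and parameter names are assumptions for illustration):
# Draw 100 constraint-satisfying samples for two parameters; the result is
# a (2, 100) array whose rows follow the order of the keys passed in.
samples = priors.sample_subset_constrained_as_array(["mass_1", "mass_2"], size=100)
assert samples.shape == (2, 100)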
def sample_subset(self, keys=iter([]), size=None):
"""Draw samples from the prior set for parameters which are not a DeltaFunction
......@@ -358,6 +384,27 @@ class PriorDict(dict):
if not isinstance(self[key], Constraint)}
return all_samples
def normalize_constraint_factor(self, keys):
if keys in self._cached_normalizations.keys():
return self._cached_normalizations[keys]
else:
min_accept = 1000
sampling_chunk = 5000
samples = self.sample_subset(keys=keys, size=sampling_chunk)
keep = np.atleast_1d(self.evaluate_constraints(samples))
if len(keep) == 1:
return 1
all_samples = {key: np.array([]) for key in keys}
while np.count_nonzero(keep) < min_accept:
samples = self.sample_subset(keys=keys, size=sampling_chunk)
for key in samples:
all_samples[key] = np.hstack(
[all_samples[key], samples[key].flatten()])
keep = np.array(self.evaluate_constraints(all_samples), dtype=bool)
factor = len(keep) / np.count_nonzero(keep)
self._cached_normalizations[keys] = factor
return factor
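The factor returned here is the reciprocal of the constraint acceptance fraction, estimated by rejection sampling; a worked illustration with invented numbers:
import numpy as np
# If 1250 of 5000 prior draws satisfy the constraint ...
keep = np.array([True] * 1250 + [False] * 3750)
factor = len(keep) / np.count_nonzero(keep)  # 4.0
# ... then prob() below multiplies constrained probabilities by 4 (and
# ln_prob adds log(4)) so the constrained prior still integrates to one.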
def prob(self, sample, **kwargs):
"""
......@@ -376,6 +423,7 @@ class PriorDict(dict):
prob = np.product([self[key].prob(sample[key])
for key in sample], **kwargs)
ratio = self.normalize_constraint_factor(tuple(sample.keys()))
if np.all(prob == 0.):
return prob
else:
......@@ -387,7 +435,7 @@ class PriorDict(dict):
else:
constrained_prob = np.zeros_like(prob)
keep = np.array(self.evaluate_constraints(sample), dtype=bool)
constrained_prob[keep] = prob[keep]
constrained_prob[keep] = prob[keep] * ratio
return constrained_prob
def ln_prob(self, sample, axis=None):
......@@ -409,6 +457,7 @@ class PriorDict(dict):
ln_prob = np.sum([self[key].ln_prob(sample[key])
for key in sample], axis=axis)
ratio = self.normalize_constraint_factor(tuple(sample.keys()))
if np.all(np.isinf(ln_prob)):
return ln_prob
else:
......@@ -420,7 +469,7 @@ class PriorDict(dict):
else:
constrained_ln_prob = -np.inf * np.ones_like(ln_prob)
keep = np.array(self.evaluate_constraints(sample), dtype=bool)
constrained_ln_prob[keep] = ln_prob[keep]
constrained_ln_prob[keep] = ln_prob[keep] + np.log(ratio)
return constrained_ln_prob
def rescale(self, keys, theta):
......
......@@ -48,11 +48,6 @@ class BaseJointPriorDist(object):
raise ValueError("Bounds are not properly set")
else:
raise TypeError("Bound must be a list")
logger.warning("If using bounded ranges on the multivariate "
"Gaussian this will lead to biased posteriors "
"for nested sampling routines that require "
"a prior transform.")
else:
bounds = [(-np.inf, np.inf) for _ in self.names]
self.bounds = {name: val for name, val in zip(self.names, bounds)}
......@@ -289,7 +284,7 @@ class BaseJointPriorDist(object):
A vector sample drawn from the multivariate Gaussian
distribution.
"""
samp = np.asarray(value)
samp = np.array(value)
if len(samp.shape) == 1:
samp = samp.reshape(1, self.num_vars)
......@@ -365,6 +360,13 @@ class MultivariateGaussianDist(BaseJointPriorDist):
+/- infinity.
"""
super(MultivariateGaussianDist, self).__init__(names=names, bounds=bounds)
for name in self.names:
bound = self.bounds[name]
if bound[0] != -np.inf or bound[1] != np.inf:
logger.warning("If using bounded ranges on the multivariate "
"Gaussian this will lead to biased posteriors "
"for nested sampling routines that require "
"a prior transform.")
self.distname = 'mvg'
self.mus = []
self.covs = []
......@@ -376,10 +378,6 @@ class MultivariateGaussianDist(BaseJointPriorDist):
self.sqeigvalues = [] # square root of the eigenvalues
self.mvn = [] # list of multivariate normal distributions
self._current_sample = {} # initialise empty sample
self._uncorrelated = None
self._current_lnprob = None
# put values in lists if required
if nmodes == 1:
if mus is not None:
......
......@@ -17,10 +17,13 @@ import scipy.stats
from scipy.special import logsumexp
from . import utils
from .utils import (logger, infer_parameters_from_function,
check_directory_exists_and_if_not_mkdir,)
from .utils import (
logger, infer_parameters_from_function,
check_directory_exists_and_if_not_mkdir,
latex_plot_format, safe_save_figure,
)
from .utils import BilbyJsonEncoder, decode_bilby_json
from .prior import Prior, PriorDict, DeltaFunction, ConditionalPriorDict
from .prior import Prior, PriorDict, DeltaFunction
def result_file_name(outdir, label, extension='json', gzip=False):
......@@ -276,15 +279,15 @@ class Result(object):
if getattr(self, 'posterior', None) is not None:
if getattr(self, 'log_noise_evidence', None) is not None:
return ("nsamples: {:d}\n"
"log_noise_evidence: {:6.3f}\n"
"log_evidence: {:6.3f} +/- {:6.3f}\n"
"log_bayes_factor: {:6.3f} +/- {:6.3f}\n"
"ln_noise_evidence: {:6.3f}\n"
"ln_evidence: {:6.3f} +/- {:6.3f}\n"
"ln_bayes_factor: {:6.3f} +/- {:6.3f}\n"
.format(len(self.posterior), self.log_noise_evidence, self.log_evidence,
self.log_evidence_err, self.log_bayes_factor,
self.log_evidence_err))
else:
return ("nsamples: {:d}\n"
"log_evidence: {:6.3f} +/- {:6.3f}\n"
"ln_evidence: {:6.3f} +/- {:6.3f}\n"
.format(len(self.posterior), self.log_evidence, self.log_evidence_err))
else:
return ''
......@@ -299,7 +302,7 @@ class Result(object):
@priors.setter
def priors(self, priors):
if isinstance(priors, dict):
if isinstance(priors, ConditionalPriorDict):
if isinstance(priors, PriorDict):
self._priors = priors
else:
self._priors = PriorDict(priors)
......@@ -389,6 +392,22 @@ class Result(object):
def posterior(self, posterior):
self._posterior = posterior
@property
def log_10_bayes_factor(self):
return self.log_bayes_factor / np.log(10)
@property
def log_10_evidence(self):
return self.log_evidence / np.log(10)
@property
def log_10_evidence_err(self):
return self.log_evidence_err / np.log(10)
@property
def log_10_noise_evidence(self):
return self.log_noise_evidence / np.log(10)
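These properties simply rescale the natural-log quantities by ln(10), since log10(x) = ln(x) / ln(10); for example (value invented):
import numpy as np
ln_evidence = 23.0               # illustrative natural-log evidence
print(ln_evidence / np.log(10))  # about 9.99 in base 10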
@property
def version(self):
return self._version
......@@ -639,6 +658,7 @@ class Result(object):
fmt(summary.median), fmt(summary.minus), fmt(summary.plus))
return summary
@latex_plot_format
def plot_single_density(self, key, prior=None, cumulative=False,
title=None, truth=None, save=True,
file_base_name=None, bins=50, label_fontsize=16,
......@@ -718,7 +738,7 @@ class Result(object):
file_name = file_base_name + key + '_cdf'
else:
file_name = file_base_name + key + '_pdf'
fig.savefig(file_name, dpi=dpi)
safe_save_figure(fig=fig, filename=file_name, dpi=dpi)
plt.close(fig)
else:
return fig
......@@ -803,6 +823,7 @@ class Result(object):
bins=bins, label_fontsize=label_fontsize, dpi=dpi,
title_fontsize=title_fontsize, quantiles=quantiles)
@latex_plot_format
def plot_corner(self, parameters=None, priors=None, titles=True, save=True,
filename=None, dpi=300, **kwargs):
""" Plot a corner-plot
......@@ -902,8 +923,10 @@ class Result(object):
cond2 = parameters is None
cond3 = bool(kwargs.get("truths", True))
if cond1 and cond2 and cond3:
parameters = {key: self.injection_parameters[key] for key in
self.search_parameter_keys}
parameters = {
key: self.injection_parameters.get(key, np.nan)
for key in self.search_parameter_keys
}
# If parameters is a dictionary, use the keys to determine which
# parameters to plot and the values as truths.
......@@ -960,11 +983,12 @@ class Result(object):
outdir = self._safe_outdir_creation(kwargs.get('outdir'), self.plot_corner)
filename = '{}/{}_corner.png'.format(outdir, self.label)
logger.debug('Saving corner plot to {}'.format(filename))
fig.savefig(filename, dpi=dpi)
safe_save_figure(fig=fig, filename=filename, dpi=dpi)
plt.close(fig)
return fig
@latex_plot_format
def plot_walkers(self, **kwargs):
""" Method to plot the trace of the walkers in an ensemble MCMC plot """
if hasattr(self, 'walkers') is False:
......@@ -992,9 +1016,10 @@ class Result(object):
outdir = self._safe_outdir_creation(kwargs.get('outdir'), self.plot_walkers)
filename = '{}/{}_walkers.png'.format(outdir, self.label)
logger.debug('Saving walkers plot to {}'.format(filename))
fig.savefig(filename)
safe_save_figure(fig=fig, filename=filename)
plt.close(fig)
@latex_plot_format
def plot_with_data(self, model, x, y, ndraws=1000, npoints=1000,
xlabel=None, ylabel=None, data_label='data',
data_fmt='o', draws_label=None, filename=None,
......@@ -1066,7 +1091,7 @@ class Result(object):
if filename is None:
outdir = self._safe_outdir_creation(outdir, self.plot_with_data)
filename = '{}/{}_plot_with_data'.format(outdir, self.label)
fig.savefig(filename, dpi=dpi)
safe_save_figure(fig=fig, filename=filename, dpi=dpi)
plt.close(fig)
@staticmethod
......@@ -1443,6 +1468,7 @@ class ResultList(list):
raise ResultListError("Inconsistent samplers between results")
@latex_plot_format
def plot_multiple(results, filename=None, labels=None, colours=None,
save=True, evidences=False, **kwargs):
""" Generate a corner plot overlaying two sets of results
......@@ -1522,12 +1548,14 @@ def plot_multiple(results, filename=None, labels=None, colours=None,
filename = default_filename
if save:
fig.savefig(filename)
safe_save_figure(fig=fig, filename=filename)
return fig
def make_pp_plot(results, filename=None, save=True, confidence_interval=0.9,
@latex_plot_format
def make_pp_plot(results, filename=None, save=True, confidence_interval=[0.68, 0.95, 0.997],
lines=None, legend_fontsize='x-small', keys=None, title=True,
confidence_interval_alpha=0.1,
**kwargs):
"""
Make a P-P plot for a set of runs with injected signals.
......@@ -1540,8 +1568,8 @@ def make_pp_plot(results, filename=None, save=True, confidence_interval=0.9,
The name of the file to save, the default is "outdir/pp.png"
save: bool, optional
Whether to save the file, default=True
confidence_interval: float, optional
The confidence interval to be plotted, defaulting to 0.9 (90%)
confidence_interval: (float, list), optional
The confidence interval to be plotted, defaulting to 1-2-3 sigma
lines: list
If given, a list of matplotlib line formats to use, must be greater
than the number of parameters.
......@@ -1549,6 +1577,8 @@ def make_pp_plot(results, filename=None, save=True, confidence_interval=0.9,
The font size for the legend
keys: list
A list of keys to use, if None defaults to search_parameter_keys
confidence_interval_alpha: float, list, optional
The transparency for the background confidence interval
kwargs:
Additional kwargs to pass to matplotlib.pyplot.plot
......@@ -1576,18 +1606,26 @@ def make_pp_plot(results, filename=None, save=True, confidence_interval=0.9,
x_values = np.linspace(0, 1, 1001)
# Putting in the confidence bands
N = len(credible_levels)
edge_of_bound = (1. - confidence_interval) / 2.
lower = scipy.stats.binom.ppf(1 - edge_of_bound, N, x_values) / N
upper = scipy.stats.binom.ppf(edge_of_bound, N, x_values) / N
# The binomial point percent function doesn't always return 0 @ 0,
# so set those bounds explicitly to be sure
lower[0] = 0
upper[0] = 0
fig, ax = plt.subplots()
ax.fill_between(x_values, lower, upper, alpha=0.2, color='k')
if isinstance(confidence_interval, float):
confidence_interval = [confidence_interval]
if isinstance(confidence_interval_alpha, float):
confidence_interval_alpha = [confidence_interval_alpha] * len(confidence_interval)
elif len(confidence_interval_alpha) != len(confidence_interval):
raise ValueError(
"confidence_interval_alpha must have the same length as confidence_interval")
for ci, alpha in zip(confidence_interval, confidence_interval_alpha):
edge_of_bound = (1. - ci) / 2.
lower = scipy.stats.binom.ppf(1 - edge_of_bound, N, x_values) / N
upper = scipy.stats.binom.ppf(edge_of_bound, N, x_values) / N
# The binomial point percent function doesn't always return 0 @ 0,
# so set those bounds explicitly to be sure
lower[0] = 0
upper[0] = 0
ax.fill_between(x_values, lower, upper, alpha=alpha, color='k')
pvalues = []
logger.info("Key: KS-test p-value")
......@@ -1617,14 +1655,14 @@ def make_pp_plot(results, filename=None, save=True, confidence_interval=0.9,
len(results), pvals.combined_pvalue))
ax.set_xlabel("C.I.")
ax.set_ylabel("Fraction of events in C.I.")
ax.legend(linewidth=1, labelspacing=0.25, fontsize=legend_fontsize)
ax.legend(linewidth=1, handlelength=2, labelspacing=0.25, fontsize=legend_fontsize)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
fig.tight_layout()
if save:
if filename is None:
filename = 'outdir/pp.png'
fig.savefig(filename, dpi=500)
safe_save_figure(fig=fig, filename=filename, dpi=500)
return fig, pvals
......
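A sketch of calling the extended P-P plot interface (`results` is assumed to be a list of Result objects from injection runs):
# Overlay 1-2-3 sigma bands, each with its own transparency.
fig, pvals = make_pp_plot(
    results,
    confidence_interval=[0.68, 0.95, 0.997],
    confidence_interval_alpha=[0.1, 0.15, 0.2],
)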
......@@ -5,7 +5,7 @@ import numpy as np
from pandas import DataFrame
from ..utils import logger, command_line_args, Counter
from ..prior import Prior, PriorDict, ConditionalPriorDict, DeltaFunction, Constraint
from ..prior import Prior, PriorDict, DeltaFunction, Constraint
from ..result import Result, read_in_result
......@@ -251,19 +251,13 @@ class Sampler(object):
AttributeError
prior can't be sampled.
"""
if isinstance(self.priors, ConditionalPriorDict):
for key in self.priors:
if isinstance(self.priors[key], Constraint):
continue
try:
self.likelihood.parameters = self.priors.sample()
self.priors[key].sample()
except AttributeError as e:
logger.warning('Cannot sample from prior, {}'.format(e))
else:
for key in self.priors:
if isinstance(self.priors[key], Constraint):
continue
try:
self.likelihood.parameters[key] = self.priors[key].sample()
except AttributeError as e:
logger.warning('Cannot sample from {}, {}'.format(key, e))
logger.warning('Cannot sample from {}, {}'.format(key, e))
def _verify_parameters(self):
""" Evaluate a set of parameters drawn from the prior
......@@ -281,13 +275,8 @@ class Sampler(object):
raise IllegalSamplingSetError(
"Your sampling set contains redundant parameters.")
self._check_if_priors_can_be_sampled()
if isinstance(self.priors, ConditionalPriorDict):
theta = self.priors.sample()
theta = [theta[key] for key in self._search_parameter_keys]
else:
theta = [self.priors[key].sample()
for key in self._search_parameter_keys]
theta = self.priors.sample_subset_constrained_as_array(
self.search_parameter_keys, size=1)[:, 0]
try:
self.log_likelihood(theta)
except TypeError as e:
......@@ -308,12 +297,8 @@ class Sampler(object):
t1 = datetime.datetime.now()
for _ in range(n_evaluations):
if isinstance(self.priors, ConditionalPriorDict):
theta = self.priors.sample()
theta = [theta[key] for key in self._search_parameter_keys]
else:
theta = [self.priors[key].sample()
for key in self._search_parameter_keys]
theta = self.priors.sample_subset_constrained_as_array(
self._search_parameter_keys, size=1)[:, 0]
self.log_likelihood(theta)
total_time = (datetime.datetime.now() - t1).total_seconds()
self._log_likelihood_eval_time = total_time / n_evaluations
......@@ -454,7 +439,10 @@ class Sampler(object):
return np.array(unit_cube), np.array(parameters), np.array(likelihood)
def check_draw(self, theta, warning=True):
""" Checks if the draw will generate an infinite prior or likelihood
"""
Checks if the draw will generate an infinite prior or likelihood
Also catches the output of `numpy.nan_to_num`.
Parameters
----------
......@@ -467,11 +455,12 @@ class Sampler(object):
True if the likelihood and prior are finite, false otherwise
"""
if np.isinf(self.log_prior(theta)):
bad_values = [np.inf, np.nan_to_num(np.inf), np.nan]
if abs(self.log_prior(theta)) in bad_values:
if warning:
logger.warning('Prior draw {} has inf prior'.format(theta))
return False
if np.isinf(self.log_likelihood(theta)):
if abs(self.log_likelihood(theta)) in bad_values:
if warning:
logger.warning('Prior draw {} has inf likelihood'.format(theta))
return False
......
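The reason np.nan_to_num(np.inf) appears in bad_values: some samplers pass log-values through numpy.nan_to_num, which maps infinity to the largest finite float, so that value must be rejected just like a true infinity. Illustration:
import numpy as np
print(np.nan_to_num(np.inf))  # ~1.7976931348623157e+308, the largest finite float64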
......@@ -38,7 +38,7 @@ class Cpnest(NestedSampler):
{self.outdir}/cpnest_{self.label}/
"""
default_kwargs = dict(verbose=1, nthreads=1, nlive=500, maxmcmc=1000,
default_kwargs = dict(verbose=3, nthreads=1, nlive=500, maxmcmc=1000,
seed=None, poolsize=100, nhamiltonian=0, resume=True,
output=None, proposals=None, n_periodic_checkpoint=8000)
......
......@@ -44,9 +44,6 @@ class DynamicDynesty(Dynesty):
Other Parameters
----------------
npoints: int, (250)
The number of live points, note this can also equivalently be given as
one of [nlive, nlives, n_live_points]
bound: {'none', 'single', 'multi', 'balls', 'cubes'}, ('multi')
Method used to select new points
sample: {'unif', 'rwalk', 'slice', 'rslice', 'hslice'}, ('rwalk')
......
This diff is collapsed.
......@@ -160,8 +160,8 @@ class Kombine(Emcee):
self.result.nburn = self.nburn
if self.result.nburn > self.nsteps:
raise SamplerError(
"The run has finished, but the chain is not burned in: "
"`nburn < nsteps`. Try increasing the number of steps.")
"The run has finished, but the chain is not burned in: `nburn < nsteps` ({} < {}). Try increasing the "
"number of steps.".format(self.result.nburn, self.nsteps))
tmp_chain = self.sampler.chain[self.nburn:, :, :].copy()
self.result.samples = tmp_chain.reshape((-1, self.ndim))
blobs = np.array(self.sampler.blobs)
......
This diff is collapsed.
from __future__ import absolute_import
import os
import tempfile
import shutil
import signal
import numpy as np
......@@ -34,79 +37,185 @@ class Pymultinest(NestedSampler):
If true, resume run from checkpoint (if available)
"""
default_kwargs = dict(importance_nested_sampling=False, resume=True,
verbose=True, sampling_efficiency='parameter',
n_live_points=500, n_params=None,
n_clustering_params=None, wrapped_params=None,
multimodal=True, const_efficiency_mode=False,
evidence_tolerance=0.5,
n_iter_before_update=100, null_log_evidence=-1e90,
max_modes=100, mode_tolerance=-1e90,
outputfiles_basename=None, seed=-1,
context=0, write_output=True, log_zero=-1e100,
max_iter=0, init_MPI=False, dump_callback=None)
def __init__(self, likelihood, priors, outdir='outdir', label='label', use_ratio=False, plot=False,
skip_import_verification=False, **kwargs):
super(Pymultinest, self).__init__(likelihood=likelihood, priors=priors, outdir=outdir, label=label,
use_ratio=use_ratio, plot=plot,
skip_import_verification=skip_import_verification,
**kwargs)
default_kwargs = dict(
importance_nested_sampling=False,
resume=True,
verbose=True,
sampling_efficiency="parameter",
n_live_points=500,
n_params=None,
n_clustering_params=None,
wrapped_params=None,
multimodal=True,
const_efficiency_mode=False,
evidence_tolerance=0.5,
n_iter_before_update=100,
null_log_evidence=-1e90,
max_modes=100,
mode_tolerance=-1e90,
outputfiles_basename=None,
seed=-1,
context=0,
write_output=True,
log_zero=-1e100,
max_iter=0,
init_MPI=False,
dump_callback=None,
)
def __init__(
self,
likelihood,
priors,
outdir="outdir",
label="label",
use_ratio=False,
plot=False,
exit_code=77,
skip_import_verification=False,
**kwargs
):
super(Pymultinest, self).__init__(
likelihood=likelihood,
priors=priors,
outdir=outdir,
label=label,
use_ratio=use_ratio,
plot=plot,
skip_import_verification=skip_import_verification,
**kwargs
)
self._apply_multinest_boundaries()
self.exit_code = exit_code
signal.signal(signal.SIGTERM, self.write_current_state_and_exit)
signal.signal(signal.SIGINT, self.write_current_state_and_exit)
signal.signal(signal.SIGALRM, self.write_current_state_and_exit)
def _translate_kwargs(self, kwargs):
if 'n_live_points' not in kwargs:
if "n_live_points" not in kwargs:
for equiv in self.npoints_equiv_kwargs:
if equiv in kwargs:
kwargs['n_live_points'] = kwargs.pop(equiv)
kwargs["n_live_points"] = kwargs.pop(equiv)
def _verify_kwargs_against_default_kwargs(self):
"""
Test the length of the directory where multinest will write the output.
This is an issue with MultiNest that we can't solve here.
https://github.com/JohannesBuchner/PyMultiNest/issues/115
"""
if not self.kwargs['outputfiles_basename']:
self.kwargs['outputfiles_basename'] = \
'{}/pm_{}/'.format(self.outdir, self.label)
if self.kwargs['outputfiles_basename'].endswith('/') is False:
self.kwargs['outputfiles_basename'] = '{}/'.format(
self.kwargs['outputfiles_basename'])
if len(self.kwargs['outputfiles_basename']) > (100 - 22):
logger.warning(
'The length of {} exceeds 78 characters. '
' Post-processing will fail because the file names will be cut'
' off. Please choose a shorter "outdir" or "label".'
.format(self.kwargs['outputfiles_basename']))
check_directory_exists_and_if_not_mkdir(
self.kwargs['outputfiles_basename'])
""" Check the kwargs """
self.outputfiles_basename = self.kwargs.pop("outputfiles_basename", None)
# for PyMultiNest >=2.9 the n_params kwarg cannot be None
if self.kwargs["n_params"] is None:
self.kwargs["n_params"] = self.ndim
NestedSampler._verify_kwargs_against_default_kwargs(self)
def _apply_multinest_boundaries(self):
if self.kwargs['wrapped_params'] is None:
self.kwargs['wrapped_params'] = []
if self.kwargs["wrapped_params"] is None:
self.kwargs["wrapped_params"] = []
for param, value in self.priors.items():
if value.boundary == 'periodic':
self.kwargs['wrapped_params'].append(1)
if value.boundary == "periodic":
self.kwargs["wrapped_params"].append(1)
else:
self.kwargs['wrapped_params'].append(0)
self.kwargs["wrapped_params"].append(0)
@property
def outputfiles_basename(self):
return self._outputfiles_basename
@outputfiles_basename.setter
def outputfiles_basename(self, outputfiles_basename):
if outputfiles_basename is None:
outputfiles_basename = "{}/pm_{}".format(self.outdir, self.label)
if outputfiles_basename.endswith("/") is True:
outputfiles_basename = outputfiles_basename.rstrip("/")
check_directory_exists_and_if_not_mkdir(self.outdir)
self._outputfiles_basename = outputfiles_basename
@property
def temporary_outputfiles_basename(self):
return self._temporary_outputfiles_basename
@temporary_outputfiles_basename.setter
def temporary_outputfiles_basename(self, temporary_outputfiles_basename):
if temporary_outputfiles_basename.endswith("/") is False:
temporary_outputfiles_basename = "{}/".format(
temporary_outputfiles_basename
)
self._temporary_outputfiles_basename = temporary_outputfiles_basename
if os.path.exists(self.outputfiles_basename):
shutil.copytree(
self.outputfiles_basename, self.temporary_outputfiles_basename
)
if os.path.islink(self.outputfiles_basename):
os.unlink(self.outputfiles_basename)
else:
shutil.rmtree(self.outputfiles_basename)
def write_current_state_and_exit(self, signum=None, frame=None):
""" Write current state and exit on exit_code """
logger.info(
"Run interrupted by signal {}: checkpoint and exit on {}".format(
signum, self.exit_code
)
)
# self.copy_temporary_directory_to_proper_path()
os._exit(self.exit_code)
def copy_temporary_directory_to_proper_path(self):
logger.info(
"Overwriting {} with {}".format(
self.outputfiles_basename, self.temporary_outputfiles_basename
)
)
# First remove anything in the outputfiles_basename for overwriting
if os.path.exists(self.outputfiles_basename):
if os.path.islink(self.outputfiles_basename):
os.unlink(self.outputfiles_basename)
else:
shutil.rmtree(self.outputfiles_basename, ignore_errors=True)
shutil.copytree(self.temporary_outputfiles_basename, self.outputfiles_basename)
def run_sampler(self):
import pymultinest
self._verify_kwargs_against_default_kwargs()
temporary_outputfiles_basename = tempfile.TemporaryDirectory().name
self.temporary_outputfiles_basename = temporary_outputfiles_basename
logger.info("Using temporary file {}".format(temporary_outputfiles_basename))
check_directory_exists_and_if_not_mkdir(temporary_outputfiles_basename)
self.kwargs["outputfiles_basename"] = self.temporary_outputfiles_basename
# Symlink the temporary directory with the target directory: ensures data is stored on exit
os.symlink(
os.path.abspath(self.temporary_outputfiles_basename),
os.path.abspath(self.outputfiles_basename),
target_is_directory=True,
)
out = pymultinest.solve(
LogLikelihood=self.log_likelihood, Prior=self.prior_transform,
n_dims=self.ndim, **self.kwargs)
LogLikelihood=self.log_likelihood,
Prior=self.prior_transform,
n_dims=self.ndim,
**self.kwargs
)
self.copy_temporary_directory_to_proper_path()
# Clean up
shutil.rmtree(temporary_outputfiles_basename)
post_equal_weights = os.path.join(
self.kwargs['outputfiles_basename'], 'post_equal_weights.dat')
self.outputfiles_basename, "post_equal_weights.dat"
)
post_equal_weights_data = np.loadtxt(post_equal_weights)
self.result.log_likelihood_evaluations = post_equal_weights_data[:, -1]
self.result.sampler_output = out
self.result.samples = post_equal_weights_data[:, :-1]
self.result.log_evidence = out['logZ']
self.result.log_evidence_err = out['logZerr']
self.result.log_evidence = out["logZ"]
self.result.log_evidence_err = out["logZerr"]
self.calc_likelihood_count()
self.result.outputfiles_basename = self.kwargs['outputfiles_basename']
self.result.outputfiles_basename = self.outputfiles_basename
return self.result
from __future__ import division
from distutils.spawn import find_executable
import logging
import os
from math import fmod
import argparse
import traceback
import inspect
import functools
import types
import subprocess
import multiprocessing
......@@ -13,6 +15,7 @@ from importlib import import_module
import json
import warnings
from distutils.version import StrictVersion
import numpy as np
from scipy.interpolate import interp2d
from scipy.special import logsumexp
......@@ -963,7 +966,10 @@ else:
for backend in non_gui_backends:
try:
logger.debug("Trying backend {}".format(backend))
matplotlib.use(backend, warn=False)
if StrictVersion(matplotlib.__version__) >= StrictVersion("3.1"):
matplotlib.use(backend)
else:
matplotlib.use(backend, warn=False)
plt.switch_backend(backend)
break
except Exception:
......@@ -974,9 +980,10 @@ class BilbyJsonEncoder(json.JSONEncoder):
def default(self, obj):
from .prior import MultivariateGaussianDist, Prior, PriorDict
from ..gw.prior import HealPixMapPriorDist
if isinstance(obj, PriorDict):
return {'__prior_dict__': True, 'content': obj._get_json_dict()}
if isinstance(obj, (MultivariateGaussianDist, Prior)):
if isinstance(obj, (MultivariateGaussianDist, HealPixMapPriorDist, Prior)):
return {'__prior__': True, '__module__': obj.__module__,
'__name__': obj.__class__.__name__,
'kwargs': dict(obj.get_instantiation_dict())}
......@@ -989,7 +996,7 @@ class BilbyJsonEncoder(json.JSONEncoder):
if isinstance(obj, units.PrefixUnit):
return str(obj)
except ImportError:
logger.info("Cannot import astropy, cannot write cosmological priors")
logger.debug("Cannot import astropy, cannot write cosmological priors")
if isinstance(obj, np.ndarray):
return {'__array__': True, 'content': obj.tolist()}
if isinstance(obj, complex):
......@@ -1051,8 +1058,8 @@ def decode_astropy_cosmology(dct):
del dct['__cosmology__'], dct['__name__']
return cosmo_cls(**dct)
except ImportError:
logger.info("Cannot import astropy, cosmological priors may not be "
"properly loaded.")
logger.debug("Cannot import astropy, cosmological priors may not be "
"properly loaded.")
return dct
......@@ -1065,8 +1072,8 @@ def decode_astropy_quantity(dct):
del dct['__astropy_quantity__']
return units.Quantity(**dct)
except ImportError:
logger.info("Cannot import astropy, cosmological priors may not be "
"properly loaded.")
logger.debug("Cannot import astropy, cosmological priors may not be "
"properly loaded.")
return dct
......@@ -1098,6 +1105,62 @@ def reflect(u):
return u
def safe_file_dump(data, filename, module):
""" Safely dump data to a .pickle file
Parameters
----------
data:
data to dump
filename: str
The file to dump to
module: pickle, dill
The python module to use
"""
temp_filename = filename + ".temp"
with open(temp_filename, "wb") as file:
module.dump(data, file)
os.rename(temp_filename, filename)
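Writing to a temporary file and then renaming means a crash mid-dump leaves any previous checkpoint intact (os.rename is atomic within a filesystem on POSIX). A usage sketch with an invented payload:
import pickle
checkpoint = {"iteration": 100, "samples": []}  # illustrative data
safe_file_dump(checkpoint, "run_checkpoint.pickle", pickle)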
def latex_plot_format(func):
"""
Wrap a plotting function to set rcParams so that text renders nicely with
latex and Computer Modern Roman font.
"""
@functools.wraps(func)
def wrapper_decorator(*args, **kwargs):
from matplotlib import rcParams
_old_tex = rcParams["text.usetex"]
_old_serif = rcParams["font.serif"]
_old_family = rcParams["font.family"]
if find_executable("latex"):
rcParams["text.usetex"] = True
else:
rcParams["text.usetex"] = False
rcParams["font.serif"] = "Computer Modern Roman"
rcParams["font.family"] = "serif"
value = func(*args, **kwargs)
rcParams["text.usetex"] = _old_tex
rcParams["font.serif"] = _old_serif
rcParams["font.family"] = _old_family
return value
return wrapper_decorator
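Any plotting function can opt in by decoration; a hypothetical example (the function body is illustrative):
@latex_plot_format
def plot_histogram(samples, filename):
    import matplotlib.pyplot as plt
    fig, ax = plt.subplots()
    ax.hist(samples, bins=50)  # rendered with latex/Computer Modern if available
    safe_save_figure(fig=fig, filename=filename)
    plt.close(fig)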
def safe_save_figure(fig, filename, **kwargs):
from matplotlib import rcParams
try:
fig.savefig(fname=filename, **kwargs)
except RuntimeError:
logger.debug(
"Failed to save plot with tex labels turning off tex."
)
rcParams["text.usetex"] = False
fig.savefig(fname=filename, **kwargs)
class IllegalDurationAndSamplingFrequencyException(Exception):
pass
......
......@@ -56,7 +56,7 @@ def luminosity_distance_to_comoving_distance(distance, cosmology=None):
def bilby_to_lalsimulation_spins(
theta_jn, phi_jl, tilt_1, tilt_2, phi_12, a_1, a_2, mass_1, mass_2,
reference_frequency, phase):
if tilt_1 in [0, np.pi] and tilt_2 in [0, np.pi]:
if (a_1 == 0 or tilt_1 in [0, np.pi]) and (a_2 == 0 or tilt_2 in [0, np.pi]):
spin_1x = 0
spin_1y = 0
spin_1z = a_1 * np.cos(tilt_1)
......
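The widened condition recognises that a zero-magnitude spin has no in-plane components whatever its tilt; an illustrative case (values invented):
import numpy as np
a_1, tilt_1 = 0.0, 0.3   # zero spin magnitude, non-zero tilt
# The aligned-spin branch now applies: all Cartesian components vanish.
spin_1x = spin_1y = 0
spin_1z = a_1 * np.cos(tilt_1)  # = 0.0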
......@@ -9,8 +9,8 @@ try:
import lal
import lalsimulation as lalsim
except ImportError:
logger.warning("You do not have lalsuite installed currently. You will"
" not be able to use some of the prebuilt functions.")
logger.debug("You do not have lalsuite installed currently. You will"
" not be able to use some of the prebuilt functions.")
def get_safe_signal_duration(mass_1, mass_2, a_1, a_2, tilt_1, tilt_2, flow=10):
......
......@@ -16,8 +16,8 @@ try:
import gwpy
import gwpy.signal
except ImportError:
logger.warning("You do not have gwpy installed currently. You will "
" not be able to use some of the prebuilt functions.")
logger.debug("You do not have gwpy installed currently. You will "
" not be able to use some of the prebuilt functions.")
class Interferometer(object):
......
......@@ -11,14 +11,14 @@ try:
import gwpy
import gwpy.signal
except ImportError:
logger.warning("You do not have gwpy installed currently. You will "
" not be able to use some of the prebuilt functions.")
logger.debug("You do not have gwpy installed currently. You will "
" not be able to use some of the prebuilt functions.")
try:
import lal
except ImportError:
logger.warning("You do not have lalsuite installed currently. You will"
" not be able to use some of the prebuilt functions.")
logger.debug("You do not have lalsuite installed currently. You will"
" not be able to use some of the prebuilt functions.")
class InterferometerStrainData(object):
......