
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (47)
Showing with 303 additions and 161 deletions
......@@ -33,6 +33,7 @@ stages:
- python -c "import bilby.gw.sampler"
- python -c "import bilby.hyper"
- python -c "import cli_bilby"
- python test/import_test.py
- for script in $(pip show -f bilby | grep "bin\/" | xargs -I {} basename {}); do
${script} --help;
done
......@@ -114,7 +115,6 @@ python-3.7-samplers:
- python -m pip install .
- pytest test/integration/sampler_run_test.py --durations 10
- pytest test/integration/sample_from_the_prior_test.py
# test samplers on python 3.6
python-3.6-samplers:
......@@ -148,7 +148,6 @@ scheduled-python-3.7:
# Run tests which are only done on schedule
- pytest test/integration/example_test.py
- pytest test/integration/sample_from_the_prior_test.py
plotting:
stage: test
......
# All notable changes will be documented in this file
## [1.1.2] 2021-05-05
Version 1.1.2 release of bilby
### Added
- Added MCMC combine method and improved shuffle behaviour when combining results (!945)
- Added an extras requires to enable downstream packages to depend on `bilby.gw` (!939)
- Added a dynesty unit plot (!954)
### Changes
- Removed a number of deprecated functions and classes (!936)
- Removed the pin on the numpy version (!934)
- Added requirements to MANIFEST (!929)
- Sped up the ROQ weight calculation with IFFT (!903)
- Streamlined hdf5 improvements (!925)
- Sped up `import bilby` by reducing internal imports (!933)
- Reduced time required for the sampler tests (!949)
- Resolved an unclear error message (!935)
- Encapsulated GMST method in `gw.utils` (!947)
- Improvements to `gw.utils` (!948)
- Improvements to `core.prior` (!944)
- Suppressed error messages when creating injections (!938)
- Fixed loading meta data, booleans, string lists with hdf5 (!941)
- Made tables an optional requirement (!930)
- Added `exists_ok` to `mkdir` calls (!946)
- Increased the default dynesty checkpoint time to 30 minutes (!940)
- Resolved issue with sampling from prior test (!950)
- Added errstate ignore to the gw.conversion module (!952)
- Fixed issues with pickle saving and loading (!932)
- Fixed an issue with the `_base_roq_waveform` (!959)
## [1.1.1] 2021-03-16
Version 1.1.1 release of bilby
......
......@@ -29,7 +29,7 @@ __version__ = utils.get_version_information()
if sys.version_info < (3,):
raise ImportError(
"""You are running bilby 0.6.4 on Python 2
"""You are running bilby >= 0.6.4 on Python 2
Bilby 0.6.4 and above are no longer compatible with Python 2, and you still
ended up with this version installed. That's unfortunate; sorry about that.
......
import numpy as np
import os
import json
import os
from collections import OrderedDict
import numpy as np
from .prior import Prior, PriorDict
from .utils import (logtrapzexp, check_directory_exists_and_if_not_mkdir,
logger)
from .utils import BilbyJsonEncoder, load_json, move_old_file
from .utils import (
logtrapzexp, check_directory_exists_and_if_not_mkdir, logger,
BilbyJsonEncoder, load_json, move_old_file
)
from .result import FileMovedError
......
......@@ -4,7 +4,7 @@ from scipy.special._ufuncs import xlogy, erf, log1p, stdtrit, gammaln, stdtr, \
btdtri, betaln, btdtr, gammaincinv, gammainc
from .base import Prior
from bilby.core.utils import logger
from ..utils import logger
class DeltaFunction(Prior):
......@@ -148,8 +148,11 @@ class PowerLaw(Prior):
normalising = (1 + self.alpha) / (self.maximum ** (1 + self.alpha) -
self.minimum ** (1 + self.alpha))
return (self.alpha * np.nan_to_num(np.log(val)) + np.log(normalising)) + np.log(
1. * self.is_in_prior_range(val))
with np.errstate(divide='ignore', invalid='ignore'):
ln_in_range = np.log(1. * self.is_in_prior_range(val))
ln_p = self.alpha * np.nan_to_num(np.log(val)) + np.log(normalising)
return ln_p + ln_in_range
def cdf(self, val):
if self.alpha == -1:
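Note on the new pattern above (an illustrative sketch, not bilby code): `np.errstate(divide='ignore')` suppresses the RuntimeWarning NumPy would otherwise emit for `log(0)` when `val` lies outside the prior range, while still returning `-inf` there.

```python
import numpy as np

val = np.array([-1.0, 0.5, 2.0])
in_range = (val >= 0.0) & (val <= 1.0)  # stand-in for is_in_prior_range

with np.errstate(divide="ignore"):
    # log(0) evaluates to -inf without a "divide by zero" warning
    ln_indicator = np.log(1.0 * in_range)

print(ln_indicator)  # [-inf   0. -inf]
```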
......@@ -713,7 +716,7 @@ class LogNormal(Prior):
_prob = np.exp(-(np.log(val) - self.mu) ** 2 / self.sigma ** 2 / 2)\
/ np.sqrt(2 * np.pi) / val / self.sigma
else:
_prob = np.zeros(len(val))
_prob = np.zeros(val.size)
idx = (val > self.minimum)
_prob[idx] = np.exp(-(np.log(val[idx]) - self.mu) ** 2 / self.sigma ** 2 / 2)\
/ np.sqrt(2 * np.pi) / val[idx] / self.sigma
......@@ -737,7 +740,7 @@ class LogNormal(Prior):
_ln_prob = -(np.log(val) - self.mu) ** 2 / self.sigma ** 2 / 2\
- np.log(np.sqrt(2 * np.pi) * val * self.sigma)
else:
_ln_prob = -np.inf * np.ones(len(val))
_ln_prob = -np.inf * np.ones(val.size)
idx = (val > self.minimum)
_ln_prob[idx] = -(np.log(val[idx]) - self.mu) ** 2\
/ self.sigma ** 2 / 2 - np.log(np.sqrt(2 * np.pi) * val[idx] * self.sigma)
......@@ -750,7 +753,7 @@ class LogNormal(Prior):
else:
_cdf = 0.5 + erf((np.log(val) - self.mu) / self.sigma / np.sqrt(2)) / 2
else:
_cdf = np.zeros(len(val))
_cdf = np.zeros(val.size)
_cdf[val > self.minimum] = 0.5 + erf((
np.log(val[val > self.minimum]) - self.mu) / self.sigma / np.sqrt(2)) / 2
return _cdf
......@@ -807,7 +810,7 @@ class Exponential(Prior):
else:
_prob = np.exp(-val / self.mu) / self.mu
else:
_prob = np.zeros(len(val))
_prob = np.zeros(val.size)
_prob[val >= self.minimum] = np.exp(-val[val >= self.minimum] / self.mu) / self.mu
return _prob
......@@ -828,7 +831,7 @@ class Exponential(Prior):
else:
_ln_prob = -val / self.mu - np.log(self.mu)
else:
_ln_prob = -np.inf * np.ones(len(val))
_ln_prob = -np.inf * np.ones(val.size)
_ln_prob[val >= self.minimum] = -val[val >= self.minimum] / self.mu - np.log(self.mu)
return _ln_prob
......@@ -839,7 +842,7 @@ class Exponential(Prior):
else:
_cdf = 1. - np.exp(-val / self.mu)
else:
_cdf = np.zeros(len(val))
_cdf = np.zeros(val.size)
_cdf[val >= self.minimum] = 1. - np.exp(-val[val >= self.minimum] / self.mu)
return _cdf
......@@ -1010,7 +1013,7 @@ class Beta(Prior):
return _ln_prob
return -np.inf
else:
_ln_prob_sub = -np.inf * np.ones(len(val))
_ln_prob_sub = -np.inf * np.ones(val.size)
idx = np.isfinite(_ln_prob) & (val >= self.minimum) & (val <= self.maximum)
_ln_prob_sub[idx] = _ln_prob[idx]
return _ln_prob_sub
......@@ -1076,7 +1079,7 @@ class Logistic(Prior):
else:
rescaled = self.mu + self.scale * np.log(val / (1. - val))
else:
rescaled = np.inf * np.ones(len(val))
rescaled = np.inf * np.ones(val.size)
rescaled[val == 0] = -np.inf
rescaled[(val > 0) & (val < 1)] = self.mu + self.scale\
* np.log(val[(val > 0) & (val < 1)] / (1. - val[(val > 0) & (val < 1)]))
......@@ -1263,7 +1266,7 @@ class Gamma(Prior):
else:
_ln_prob = xlogy(self.k - 1, val) - val / self.theta - xlogy(self.k, self.theta) - gammaln(self.k)
else:
_ln_prob = -np.inf * np.ones(len(val))
_ln_prob = -np.inf * np.ones(val.size)
idx = (val >= self.minimum)
_ln_prob[idx] = xlogy(self.k - 1, val[idx]) - val[idx] / self.theta\
- xlogy(self.k, self.theta) - gammaln(self.k)
......@@ -1276,7 +1279,7 @@ class Gamma(Prior):
else:
_cdf = gammainc(self.k, val / self.theta)
else:
_cdf = np.zeros(len(val))
_cdf = np.zeros(val.size)
_cdf[val >= self.minimum] = gammainc(self.k, val[val >= self.minimum] / self.theta)
return _cdf
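The recurring `len(val)` to `val.size` substitutions in this file all target the same edge case: `len()` raises a `TypeError` on a zero-dimensional array, while `.size` is defined for every ndarray. A standalone illustration:

```python
import numpy as np

scalar_like = np.asarray(3.0)  # zero-dimensional array
print(scalar_like.size)        # 1

try:
    len(scalar_like)
except TypeError as exc:
    print(exc)                 # "len() of unsized object"
```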
......
......@@ -5,7 +5,6 @@ import re
import numpy as np
import scipy.stats
from scipy.integrate import cumtrapz
from scipy.interpolate import interp1d
from bilby.core.utils import infer_args_from_method, BilbyJsonEncoder, decode_bilby_json, logger, \
......@@ -162,6 +161,7 @@ class Prior(object):
def cdf(self, val):
""" Generic method to calculate CDF, can be overwritten in subclass """
from scipy.integrate import cumtrapz
if np.any(np.isinf([self.minimum, self.maximum])):
raise ValueError(
"Unable to use the generic CDF calculation for priors with"
......@@ -185,7 +185,8 @@ class Prior(object):
np.nan
"""
return np.log(self.prob(val))
with np.errstate(divide='ignore'):
return np.log(self.prob(val))
def is_in_prior_range(self, val):
"""Returns True if val is in the prior boundaries, zero otherwise
......@@ -313,6 +314,10 @@ class Prior(object):
def maximum(self, maximum):
self._maximum = maximum
@property
def width(self):
return self.maximum - self.minimum
def get_instantiation_dict(self):
subclass_args = infer_args_from_method(self.__init__)
dict_with_properties = get_dict_with_properties(self)
......
import numpy as np
from .base import Prior, PriorException
from bilby.core.prior.interpolated import Interped
from bilby.core.prior.analytical import DeltaFunction, PowerLaw, Uniform, LogUniform, \
from .interpolated import Interped
from .analytical import DeltaFunction, PowerLaw, Uniform, LogUniform, \
SymmetricLogUniform, Cosine, Sine, Gaussian, TruncatedGaussian, HalfGaussian, \
LogNormal, Exponential, StudentT, Beta, Logistic, Cauchy, Gamma, ChiSquared, FermiDirac
from bilby.core.utils import infer_args_from_method, infer_parameters_from_function
from ..utils import infer_args_from_method, infer_parameters_from_function
def conditional_prior_factory(prior_class):
......
from importlib import import_module
from io import open as ioopen
import json
import os
from importlib import import_module
from io import open as ioopen
from matplotlib.cbook import flatten
import numpy as np
from bilby.core.prior.analytical import DeltaFunction
from bilby.core.prior.base import Prior, Constraint
from bilby.core.prior.joint import JointPrior
from bilby.core.utils import logger, check_directory_exists_and_if_not_mkdir, BilbyJsonEncoder, decode_bilby_json
from .analytical import DeltaFunction
from .base import Prior, Constraint
from .joint import JointPrior
from ..utils import logger, check_directory_exists_and_if_not_mkdir, BilbyJsonEncoder, decode_bilby_json
class PriorDict(dict):
......@@ -155,19 +154,21 @@ class PriorDict(dict):
@classmethod
def _get_from_json_dict(cls, prior_dict):
try:
cls = getattr(
class_ = getattr(
import_module(prior_dict["__module__"]),
prior_dict["__name__"])
except ImportError:
logger.debug("Cannot import prior module {}.{}".format(
prior_dict["__module__"], prior_dict["__name__"]
))
class_ = cls
except KeyError:
logger.debug("Cannot find module name to load")
class_ = cls
for key in ["__module__", "__name__", "__prior_dict__"]:
if key in prior_dict:
del prior_dict[key]
obj = cls(dict())
obj = class_(dict())
obj.from_dictionary(prior_dict)
return obj
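The `cls` to `class_` rename avoids rebinding the classmethod's own `cls` argument; the behaviour is unchanged, but the fallback (construct with the calling class when the stored module/name cannot be resolved) is now explicit rather than relying on `cls` silently keeping its old binding. The helper below is a hypothetical distillation of that lookup-with-fallback pattern, not bilby API:

```python
from importlib import import_module

def resolve_class(default_cls, module_name, class_name):
    """Prefer the class stored by module/name; fall back to default_cls."""
    try:
        class_ = getattr(import_module(module_name), class_name)
    except ImportError:
        class_ = default_cls  # keep the caller's class on failure
    return class_

print(resolve_class(dict, "collections", "OrderedDict"))  # OrderedDict
print(resolve_class(dict, "no_such_module", "X"))         # falls back to dict
```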
......@@ -207,7 +208,15 @@ class PriorDict(dict):
module = __name__.replace(
'.' + os.path.basename(__file__).replace('.py', ''), ''
)
cls = getattr(import_module(module), cls, cls)
try:
cls = getattr(import_module(module), cls, cls)
except ModuleNotFoundError:
logger.error(
"Cannot import prior class {} for entry: {}={}".format(
cls, key, val
)
)
raise
if key.lower() in ["conversion_function", "condition_func"]:
setattr(self, key, cls)
elif isinstance(cls, str):
......@@ -368,6 +377,28 @@ class PriorDict(dict):
logger.debug('{} not a known prior.'.format(key))
return samples
@property
def non_fixed_keys(self):
keys = self.keys()
keys = [k for k in keys if isinstance(self[k], Prior)]
keys = [k for k in keys if self[k].is_fixed is False]
keys = [k for k in keys if k not in self.constraint_keys]
return keys
@property
def fixed_keys(self):
return [
k for k, p in self.items()
if (p.is_fixed and k not in self.constraint_keys)
]
@property
def constraint_keys(self):
return [
k for k, p in self.items()
if isinstance(p, Constraint)
]
def sample_subset_constrained(self, keys=iter([]), size=None):
if size is None or size == 1:
while True:
......@@ -433,6 +464,9 @@ class PriorDict(dict):
prob = np.product([self[key].prob(sample[key])
for key in sample], **kwargs)
return self.check_prob(sample, prob)
def check_prob(self, sample, prob):
ratio = self.normalize_constraint_factor(tuple(sample.keys()))
if np.all(prob == 0.):
return prob
......@@ -466,7 +500,9 @@ class PriorDict(dict):
"""
ln_prob = np.sum([self[key].ln_prob(sample[key])
for key in sample], axis=axis)
return self.check_ln_prob(sample, ln_prob)
def check_ln_prob(self, sample, ln_prob):
ratio = self.normalize_constraint_factor(tuple(sample.keys()))
if np.all(np.isinf(ln_prob)):
return ln_prob
......@@ -496,6 +532,7 @@ class PriorDict(dict):
=======
list: List of floats containing the rescaled sample
"""
from matplotlib.cbook import flatten
return list(flatten([self[key].rescale(sample) for key, sample in zip(keys, theta)]))
def test_redundancy(self, key, disable_logging=False):
......@@ -530,14 +567,6 @@ class PriorDict(dict):
return self.__class__(dictionary=dict(self))
class PriorSet(PriorDict):
def __init__(self, dictionary=None, filename=None):
""" DEPRECATED: USE PriorDict INSTEAD"""
logger.warning("The name 'PriorSet' is deprecated use 'PriorDict' instead")
super(PriorSet, self).__init__(dictionary, filename)
class PriorDictException(Exception):
""" General base class for all prior dict exceptions """
......@@ -656,7 +685,8 @@ class ConditionalPriorDict(PriorDict):
for key, value in sample.items():
self[key].least_recently_sampled = value
res = [self[key].prob(sample[key], **self.get_required_variables(key)) for key in sample]
return np.product(res, **kwargs)
prob = np.product(res, **kwargs)
return self.check_prob(sample, prob)
def ln_prob(self, sample, axis=None):
"""
......@@ -677,7 +707,8 @@ class ConditionalPriorDict(PriorDict):
for key, value in sample.items():
self[key].least_recently_sampled = value
res = [self[key].ln_prob(sample[key], **self.get_required_variables(key)) for key in sample]
return np.sum(res, axis=axis)
ln_prob = np.sum(res, axis=axis)
return self.check_ln_prob(sample, ln_prob)
def rescale(self, keys, theta):
"""Rescale samples from unit cube to prior
......
import numpy as np
from scipy.integrate import cumtrapz
from scipy.interpolate import interp1d
from .base import Prior
from bilby.core.utils import logger
from ..utils import logger
class Interped(Prior):
......@@ -164,6 +163,7 @@ class Interped(Prior):
self._initialize_attributes()
def _initialize_attributes(self):
from scipy.integrate import cumtrapz
if np.trapz(self._yy, self.xx) != 1:
logger.debug('Supplied PDF for {} is not normalised, normalising.'.format(self.name))
self._yy /= np.trapz(self._yy, self.xx)
......
......@@ -3,7 +3,7 @@ import scipy.stats
from scipy.special import erfinv
from .base import Prior, PriorException
from bilby.core.utils import logger, infer_args_from_method, get_dict_with_properties
from ..utils import logger, infer_args_from_method, get_dict_with_properties
class BaseJointPriorDist(object):
......
import numpy as np
from bilby.core.prior.base import Prior
from bilby.core.utils import logger
from .base import Prior
from ..utils import logger
class SlabSpikePrior(Prior):
......
import inspect
import json
import os
from collections import OrderedDict, namedtuple
from copy import copy
from distutils.version import LooseVersion
from importlib import import_module
from itertools import product
from tqdm import tqdm
import corner
import h5py
import json
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import lines as mpllines
import numpy as np
import pandas as pd
import scipy.stats
from scipy.special import logsumexp
from . import utils
from .utils import (
......@@ -28,6 +20,7 @@ from .utils import (
decode_bilby_json, docstring,
recursively_save_dict_contents_to_group,
recursively_load_dict_contents_from_group,
recursively_decode_bilby_json,
)
from .prior import Prior, PriorDict, DeltaFunction
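Many hunks in this file and in the sampler modules below apply one recipe: heavy third-party imports (`corner`, `h5py`, `matplotlib`, `scipy.special`, `tqdm`) move from module scope into the functions that use them, which is the mechanism behind the changelog entry "Sped up `import bilby` by reducing internal imports (!933)". A generic sketch of the pattern, with illustrative names:

```python
def plot_marginal(samples):
    # Deferred import: matplotlib's startup cost is paid only when a
    # plot is actually requested, not at `import bilby` time.
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.hist(samples, bins=50, density=True)
    return fig
```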
......@@ -42,7 +35,7 @@ def result_file_name(outdir, label, extension='json', gzip=False):
label: str
Naming scheme of the output file
extension: str, optional
Whether to save as `hdf5` or `json`
Whether to save as `hdf5`, `json`, or `pickle`
gzip: bool, optional
Set to True to append `.gz` to the extension for saving in gzipped format
......@@ -50,7 +43,9 @@ def result_file_name(outdir, label, extension='json', gzip=False):
=======
str: File name of the output file
"""
if extension in ['json', 'hdf5']:
if extension == 'pickle':
extension = 'pkl'
if extension in ['json', 'hdf5', 'pkl']:
if extension == 'json' and gzip:
return os.path.join(outdir, '{}_result.{}.gz'.format(label, extension))
else:
......@@ -126,6 +121,7 @@ def get_weights_for_reweighting(
n_checkpoint: int
Number of samples to reweight before writing a resume file
"""
from tqdm.auto import tqdm
nposterior = len(result.posterior)
......@@ -257,6 +253,7 @@ def reweight(result, label=None, new_likelihood=None, new_prior=None,
An array of the natural-log priors from the old likelihood
"""
from scipy.special import logsumexp
result = copy(result)
......@@ -329,7 +326,7 @@ class Result(object):
num_likelihood_evaluations=None, walkers=None,
max_autocorrelation_time=None, use_ratio=None,
parameter_labels=None, parameter_labels_with_unit=None,
gzip=False, version=None):
version=None):
""" A class to store the results of the sampling run
Parameters
......@@ -375,8 +372,6 @@ class Result(object):
likelihood was used during sampling
parameter_labels, parameter_labels_with_unit: list
Lists of the latex-formatted parameter labels
gzip: bool
Set to True to gzip the results file (if using json format)
version: str,
Version information for software used to generate the result. Note,
this information is generated when the result object is initialized
......@@ -511,6 +506,7 @@ class Result(object):
@classmethod
@docstring(_load_doctstring.format(format="hdf5"))
def from_hdf5(cls, filename=None, outdir=None, label=None):
import h5py
filename = _determine_file_name(filename, outdir, label, 'hdf5', False)
with h5py.File(filename, "r") as ff:
data = recursively_load_dict_contents_from_group(ff, '/')
......@@ -565,6 +561,17 @@ class Result(object):
else:
return ''
@property
def meta_data(self):
return self._meta_data
@meta_data.setter
def meta_data(self, meta_data):
if meta_data is None:
meta_data = dict()
meta_data = recursively_decode_bilby_json(meta_data)
self._meta_data = meta_data
@property
def priors(self):
if self._priors is not None:
......@@ -730,11 +737,11 @@ class Result(object):
default=False
outdir: str, optional
Path to the outdir. Default is the one stored in the result object.
extension: str, optional {json, hdf5, True}
extension: str, optional {json, hdf5, pkl, pickle, True}
Determines the method to use to store the data (if True defaults
to json)
gzip: bool, optional
If true, and outputing to a json file, this will gzip the resulting
If true, and outputting to a json file, this will gzip the resulting
file and add '.gz' to the file extension.
"""
......@@ -770,6 +777,7 @@ class Result(object):
with open(filename, 'w') as file:
json.dump(dictionary, file, indent=2, cls=BilbyJsonEncoder)
elif extension == 'hdf5':
import h5py
dictionary["__module__"] = self.__module__
dictionary["__name__"] = self.__class__.__name__
with h5py.File(filename, 'w') as h5file:
......@@ -984,6 +992,7 @@ class Result(object):
figure: matplotlib.pyplot.figure
A matplotlib figure object
"""
import matplotlib.pyplot as plt
logger.info('Plotting {} marginal distribution'.format(key))
label = self.get_latex_labels_from_parameter_keys([key])[0]
fig, ax = plt.subplots()
......@@ -1153,6 +1162,8 @@ class Result(object):
A matplotlib figure instance
"""
import corner
import matplotlib.pyplot as plt
# If in testing mode, no corner plots are generated
if utils.command_line_args.bilby_test_mode:
......@@ -1165,12 +1176,7 @@ class Result(object):
truth_color='tab:orange', quantiles=[0.16, 0.84],
levels=(1 - np.exp(-0.5), 1 - np.exp(-2), 1 - np.exp(-9 / 2.)),
plot_density=False, plot_datapoints=True, fill_contours=True,
max_n_ticks=3)
if LooseVersion(matplotlib.__version__) < "2.1":
defaults_kwargs['hist_kwargs'] = dict(normed=True)
else:
defaults_kwargs['hist_kwargs'] = dict(density=True)
max_n_ticks=3, hist_kwargs=dict(density=True))
if 'lionize' in kwargs and kwargs['lionize'] is True:
defaults_kwargs['truth_color'] = 'tab:blue'
......@@ -1282,6 +1288,7 @@ class Result(object):
@latex_plot_format
def plot_walkers(self, **kwargs):
""" Method to plot the trace of the walkers in an ensemble MCMC plot """
import matplotlib.pyplot as plt
if hasattr(self, 'walkers') is False:
logger.warning("Cannot plot_walkers as no walkers are saved")
return
......@@ -1345,6 +1352,7 @@ class Result(object):
Path to the outdir. Default is the one store in the result object.
"""
import matplotlib.pyplot as plt
# Determine model_posterior, the subset of the full posterior which
# should be passed into the model
......@@ -1744,7 +1752,7 @@ class ResultList(list):
else:
raise TypeError("Could not append a non-Result type")
def combine(self):
def combine(self, shuffle=False):
"""
Return the combined results in a :class:bilby.core.result.Result`
object.
......@@ -1767,11 +1775,20 @@ class ResultList(list):
# check which kind of sampler was used: MCMC or Nested Sampling
if result._nested_samples is not None:
posteriors, result = self._combine_nested_sampled_runs(result)
elif result.sampler in ["bilbymcmc"]:
posteriors, result = self._combine_mcmc_sampled_runs(result)
else:
posteriors = [res.posterior for res in self]
combined_posteriors = pd.concat(posteriors, ignore_index=True)
result.posterior = combined_posteriors.sample(len(combined_posteriors)) # shuffle
if shuffle:
result.posterior = combined_posteriors.sample(len(combined_posteriors))
else:
result.posterior = combined_posteriors
logger.info(f"Combined results have {len(result.posterior)} samples")
return result
def _combine_nested_sampled_runs(self, result):
......@@ -1794,6 +1811,7 @@ class ResultList(list):
result: bilby.core.result.Result
The result object with the combined evidences.
"""
from scipy.special import logsumexp
self.check_nested_samples()
# Combine evidences
......@@ -1820,6 +1838,47 @@ class ResultList(list):
result.sampler_kwargs = None
return posteriors, result
def _combine_mcmc_sampled_runs(self, result):
"""
Combine multiple MCMC sampling runs.
Currently this keeps all posterior samples from each run.
Parameters
----------
result: bilby.core.result.Result
The result object to put the new samples in.
Returns
-------
posteriors: list
A list of pandas DataFrames containing the reduced sample set from
each run.
result: bilby.core.result.Result
The result object with the combined evidences.
"""
from scipy.special import logsumexp
# Combine evidences
log_evidences = np.array([res.log_evidence for res in self])
result.log_evidence = logsumexp(log_evidences, b=1. / len(self))
result.log_bayes_factor = result.log_evidence - result.log_noise_evidence
# Propagate uncertainty in combined evidence
log_errs = [res.log_evidence_err for res in self if np.isfinite(res.log_evidence_err)]
if len(log_errs) > 0:
result.log_evidence_err = 0.5 * logsumexp(2 * np.array(log_errs), b=1. / len(self))
else:
result.log_evidence_err = np.nan
# Combined posteriors with a weighting
posteriors = list()
for res in self:
posteriors.append(res.posterior)
return posteriors, result
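Reading the two `logsumexp` calls above: with weights `b=1./len(self)` they compute the log of an arithmetic mean, so for $N$ runs the combined evidence is

$$\log Z_{\mathrm{comb}} = \log\Big(\frac{1}{N}\sum_{i=1}^{N} Z_i\Big) = \operatorname{logsumexp}\big(\{\log Z_i\}\big) - \log N,$$

and the error line yields $\tfrac{1}{2}\log\big(\tfrac{1}{N}\sum_i \sigma_i^2\big)$ for the per-run uncertainties $\sigma_i$, i.e. the log of the root-mean-square error.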
def check_nested_samples(self):
for res in self:
try:
......@@ -1886,6 +1945,8 @@ def plot_multiple(results, filename=None, labels=None, colours=None,
A matplotlib figure instance
"""
import matplotlib.pyplot as plt
import matplotlib.lines as mpllines
kwargs['show_titles'] = False
kwargs['truths'] = None
......@@ -1977,6 +2038,7 @@ def make_pp_plot(results, filename=None, save=True, confidence_interval=[0.68, 0
matplotlib figure and a NamedTuple with attributes `combined_pvalue`,
`pvalues`, and `names`.
"""
import matplotlib.pyplot as plt
if keys is None:
keys = results[0].search_parameter_keys
......
......@@ -6,7 +6,6 @@ from collections import OrderedDict
import bilby
from ..utils import command_line_args, logger, loaded_modules_dict
from ..prior import PriorDict, DeltaFunction
from .base_sampler import Sampler, SamplingMarginalisedParameterError
from .cpnest import Cpnest
from .dynamic_dynesty import DynamicDynesty
......@@ -93,9 +92,10 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
saving. For example, if `meta_data={dtype: 'signal'}`. Warning: in case
of conflict with keys saved by bilby, the meta_data keys will be
overwritten.
save: bool
save: bool, str
If true, save the priors and results to disk.
If hdf5, save as an hdf5 file instead of json.
If pickle or pkl, save as a pickle file instead of json.
gzip: bool
If true, and save is true, gzip the saved results file.
result_class: bilby.core.result.Result, or child of
......
import os
import dill as pickle
import signal
import numpy as np
......@@ -168,18 +167,20 @@ class DynamicDynesty(Dynesty):
def write_current_state(self):
"""
"""
import dill
check_directory_exists_and_if_not_mkdir(self.outdir)
with open(self.resume_file, 'wb') as file:
pickle.dump(self, file)
dill.dump(self, file)
def read_saved_state(self, continuing=False):
"""
"""
import dill
logger.debug("Reading resume file {}".format(self.resume_file))
if os.path.isfile(self.resume_file):
with open(self.resume_file, 'rb') as file:
self = pickle.load(file)
self = dill.load(file)
else:
logger.debug(
"Failed to read resume file {}".format(self.resume_file))
......
import datetime
import dill
import os
import sys
import pickle
import signal
import time
import warnings
from tqdm.auto import tqdm
import matplotlib.pyplot as plt
import numpy as np
from pandas import DataFrame
......@@ -20,11 +17,6 @@ from ..utils import (
from .base_sampler import Sampler, NestedSampler
from ..result import rejection_sample
from numpy import linalg
from dynesty.utils import unitcheck
import warnings
_likelihood = None
_priors = None
_search_parameter_keys = None
......@@ -130,7 +122,7 @@ class Dynesty(NestedSampler):
"""
default_kwargs = dict(bound='multi', sample='rwalk',
verbose=True, periodic=None, reflective=None,
check_point_delta_t=600, nlive=1000,
check_point_delta_t=1800, nlive=1000,
first_update=None, walks=100,
npdim=None, rstate=None, queue_size=1, pool=None,
use_pool=None, live_points=None,
......@@ -218,6 +210,7 @@ class Dynesty(NestedSampler):
kwargs['queue_size'] = kwargs.pop(equiv)
def _verify_kwargs_against_default_kwargs(self):
from tqdm.auto import tqdm
if not self.kwargs['walks']:
self.kwargs['walks'] = self.ndim * 10
if not self.kwargs['update_interval']:
......@@ -323,6 +316,7 @@ class Dynesty(NestedSampler):
def run_sampler(self):
import dynesty
import dill
logger.info("Using dynesty version {}".format(dynesty.__version__))
if self.kwargs.get("sample", "rwalk") == "rwalk":
......@@ -376,7 +370,7 @@ class Dynesty(NestedSampler):
check_directory_exists_and_if_not_mkdir(self.outdir)
dynesty_result = "{}/{}_dynesty.pickle".format(self.outdir, self.label)
with open(dynesty_result, 'wb') as file:
pickle.dump(out, file)
dill.dump(out, file)
self._generate_result(out)
self.calc_likelihood_count()
......@@ -482,6 +476,7 @@ class Dynesty(NestedSampler):
"""
from ... import __version__ as bilby_version
from dynesty import __version__ as dynesty_version
import dill
versions = dict(bilby=bilby_version, dynesty=dynesty_version)
if os.path.isfile(self.resume_file):
logger.info("Reading resume file {}".format(self.resume_file))
......@@ -560,6 +555,7 @@ class Dynesty(NestedSampler):
from ... import __version__ as bilby_version
from dynesty import __version__ as dynesty_version
import dill
check_directory_exists_and_if_not_mkdir(self.outdir)
end_time = datetime.datetime.now()
if hasattr(self, 'start_time'):
......@@ -605,6 +601,7 @@ class Dynesty(NestedSampler):
df.to_csv(filename, index=False, header=True, sep=' ')
def plot_current_state(self):
import matplotlib.pyplot as plt
if self.check_point_plot:
import dynesty.plotting as dyplot
labels = [label.replace('_', ' ') for label in self.search_parameter_keys]
......@@ -618,6 +615,19 @@ class Dynesty(NestedSampler):
logger.warning('Failed to create dynesty state plot at checkpoint')
finally:
plt.close("all")
try:
filename = "{}/{}_checkpoint_trace_unit.png".format(self.outdir, self.label)
from copy import deepcopy
temp = deepcopy(self.sampler.results)
temp["samples"] = temp["samples_u"]
fig = dyplot.traceplot(temp, labels=labels)[0]
fig.tight_layout()
fig.savefig(filename)
except (RuntimeError, np.linalg.linalg.LinAlgError, ValueError, OverflowError, Exception) as e:
logger.warning(e)
logger.warning('Failed to create dynesty unit state plot at checkpoint')
finally:
plt.close("all")
try:
filename = "{}/{}_checkpoint_run.png".format(self.outdir, self.label)
fig, axs = dyplot.runplot(
......@@ -704,6 +714,7 @@ class Dynesty(NestedSampler):
def sample_rwalk_bilby(args):
""" Modified bilby-implemented version of dynesty.sampling.sample_rwalk """
from dynesty.utils import unitcheck
# Unzipping.
(u, loglstar, axes, scale,
......@@ -737,7 +748,7 @@ def sample_rwalk_bilby(args):
# Propose a direction on the unit n-sphere.
drhat = rstate.randn(n)
drhat /= linalg.norm(drhat)
drhat /= np.linalg.norm(drhat)
# Scale based on dimensionality.
dr = drhat * rstate.rand() ** (1.0 / n)
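Aside on the two proposal steps above: normalising a standard-normal vector gives a uniform direction on the unit n-sphere, and scaling by `rand() ** (1/n)` applies the inverse CDF of the radius (P(R <= r) = r**n), so the proposal point is uniform in the unit n-ball. A self-contained sketch of the same recipe using NumPy's generator API:

```python
import numpy as np

def sample_unit_ball(n, rng):
    """Draw one point uniformly from the n-dimensional unit ball."""
    direction = rng.standard_normal(n)
    direction /= np.linalg.norm(direction)  # uniform on the unit sphere
    radius = rng.random() ** (1.0 / n)      # inverse CDF of P(R <= r) = r**n
    return radius * direction

rng = np.random.default_rng(42)
print(np.linalg.norm(sample_unit_ball(5, rng)) <= 1.0)  # True
```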
......
from collections import namedtuple
import os
import signal
import shutil
from shutil import copyfile
import sys
from collections import namedtuple
from distutils.version import LooseVersion
from shutil import copyfile
import numpy as np
from pandas import DataFrame
from distutils.version import LooseVersion
import dill as pickle
from ..utils import logger, check_directory_exists_and_if_not_mkdir
from .base_sampler import MCMCSampler, SamplerError
......@@ -254,13 +253,14 @@ class Emcee(MCMCSampler):
def checkpoint(self):
""" Writes a pickle file of the sampler to disk using dill """
import dill
logger.info("Checkpointing sampler to file {}"
.format(self.checkpoint_info.sampler_file))
with open(self.checkpoint_info.sampler_file, 'wb') as f:
# Overwrites the stored sampler chain with one that is truncated
# to only the completed steps
self.sampler._chain = self.sampler_chain
pickle.dump(self._sampler, f)
dill.dump(self._sampler, f)
def checkpoint_and_exit(self, signum, frame):
logger.info("Received signal {}".format(signum))
......@@ -283,10 +283,11 @@ class Emcee(MCMCSampler):
if hasattr(self, '_sampler'):
pass
elif self.resume and os.path.isfile(self.checkpoint_info.sampler_file):
import dill
logger.info("Resuming run from checkpoint file {}"
.format(self.checkpoint_info.sampler_file))
with open(self.checkpoint_info.sampler_file, 'rb') as f:
self._sampler = pickle.load(f)
self._sampler = dill.load(f)
self._set_pos0_for_resume()
else:
self._initialise_sampler()
......
import numpy as np
from .base_sampler import Sampler
from ..result import read_in_result
......
from ..utils import logger
import numpy as np
import os
import numpy as np
from .emcee import Emcee
from ..utils import logger
class Kombine(Emcee):
......
import os
import datetime
import copy
import datetime
import logging
import os
import signal
import sys
import time
import dill
from collections import namedtuple
import logging
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.signal
from ..utils import logger, check_directory_exists_and_if_not_mkdir
from .base_sampler import SamplerError, MCMCSampler
......@@ -372,6 +368,7 @@ class Ptemcee(MCMCSampler):
import ptemcee
if os.path.isfile(self.resume_file) and self.resume is True:
import dill
logger.info("Resume data {} found".format(self.resume_file))
with open(self.resume_file, "rb") as file:
data = dill.load(file)
......@@ -833,10 +830,12 @@ def check_iteration(
def get_max_gradient(x, axis=0, window_length=11, polyorder=2, smooth=False):
from scipy.signal import savgol_filter
if smooth:
x = scipy.signal.savgol_filter(
x, axis=axis, window_length=window_length, polyorder=3)
return np.max(scipy.signal.savgol_filter(
x = savgol_filter(
x, axis=axis, window_length=window_length, polyorder=3
)
return np.max(savgol_filter(
x, axis=axis, window_length=window_length, polyorder=polyorder,
deriv=1))
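For context on `get_max_gradient` (an interpretation of the code, not text from the diff): `savgol_filter(..., deriv=1)` returns the first derivative of a local polynomial fit, so the function reports the steepest slope of the optionally pre-smoothed series, used as a convergence diagnostic. Minimal usage:

```python
import numpy as np
from scipy.signal import savgol_filter

series = np.cumsum(np.random.default_rng(0).normal(size=200))
# First derivative of a quadratic fit over an 11-point sliding window
gradient = savgol_filter(series, window_length=11, polyorder=2, deriv=1)
print(gradient.max())
```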
......@@ -963,6 +962,7 @@ def checkpoint(
Q_list,
time_per_check,
):
import dill
logger.info("Writing checkpoint and diagnostics")
ndim = sampler.dim
......@@ -1002,6 +1002,7 @@ def checkpoint(
def plot_walkers(walkers, nburn, thin, parameter_labels, outdir, label,
discard=0):
""" Method to plot the trace of the walkers in an ensemble MCMC plot """
import matplotlib.pyplot as plt
nwalkers, nsteps, ndim = walkers.shape
if np.isnan(nburn):
nburn = nsteps
......@@ -1053,6 +1054,7 @@ def plot_walkers(walkers, nburn, thin, parameter_labels, outdir, label,
def plot_tau(
tau_list_n, tau_list, search_parameter_keys, outdir, label, tau, autocorr_tau,
):
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
for i, key in enumerate(search_parameter_keys):
ax.plot(tau_list_n, np.array(tau_list)[:, i], label=key)
......@@ -1065,6 +1067,7 @@ def plot_tau(
def plot_mean_log_posterior(mean_log_posterior, outdir, label):
import matplotlib.pyplot as plt
ntemps, nsteps = mean_log_posterior.shape
ymax = np.max(mean_log_posterior)
......@@ -1085,6 +1088,7 @@ def plot_mean_log_posterior(mean_log_posterior, outdir, label):
def compute_evidence(sampler, log_likelihood_array, outdir, label, discard, nburn, thin,
iteration, make_plots=True):
""" Computes the evidence using thermodynamic integration """
import matplotlib.pyplot as plt
betas = sampler.betas
# We compute the evidence without the burnin samples, but we do not thin
lnlike = log_likelihood_array[:, :, discard + nburn : iteration]
......
from distutils.spawn import find_executable
import logging
import os
import shutil
import sys
from math import fmod
import argparse
import inspect
import functools
import inspect
import json
import logging
import multiprocessing
import os
import types
import shutil
import subprocess
import multiprocessing
import sys
from distutils.spawn import find_executable
from importlib import import_module
from numbers import Number
import json
import warnings
import numpy as np
import pandas as pd
from scipy.interpolate import interp2d
from scipy.special import logsumexp
import pandas as pd
import matplotlib.pyplot as plt
logger = logging.getLogger('bilby')
......@@ -323,37 +319,6 @@ def theta_phi_to_ra_dec(theta, phi, gmst):
return ra, dec
def gps_time_to_gmst(gps_time):
"""
Convert gps time to Greenwich mean sidereal time in radians
This method assumes a constant rotation rate of earth since 00:00:00, 1 Jan. 2000
A correction has been applied to give the exact correct value for 00:00:00, 1 Jan. 2018
Error accumulates at a rate of ~0.0001 radians/decade.
Parameters
==========
gps_time: float
gps time
Returns
=======
float: Greenwich mean sidereal time in radians
"""
warnings.warn(
"Function gps_time_to_gmst deprecated, use "
"lal.GreenwichMeanSiderealTime(time) instead",
DeprecationWarning)
omega_earth = 2 * np.pi * (1 / 365.2425 + 1) / 86400.
gps_2000 = 630720013.
gmst_2000 = (6 + 39. / 60 + 51.251406103947375 / 3600) * np.pi / 12
correction_2018 = -0.00017782487379358614
sidereal_time = omega_earth * (gps_time - gps_2000) + gmst_2000 + correction_2018
gmst = fmod(sidereal_time, 2 * np.pi)
return gmst
def create_white_noise(sampling_frequency, duration):
""" Create white_noise which is then coloured by a given PSD
......@@ -543,11 +508,7 @@ def check_directory_exists_and_if_not_mkdir(directory):
"""
if directory == "":
return
elif not os.path.exists(directory):
os.makedirs(directory)
logger.debug('Making directory {}'.format(directory))
else:
logger.debug('Directory {} exists'.format(directory))
os.makedirs(directory, exist_ok=True)
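The one-liner above relies on standard-library behaviour available since Python 3.2: `os.makedirs(path, exist_ok=True)` creates all intermediate directories and is a no-op when the path already exists, replacing the manual exists/mkdir branch (changelog entry !946). For example:

```python
import os

os.makedirs("outdir/nested", exist_ok=True)  # creates both levels
os.makedirs("outdir/nested", exist_ok=True)  # second call does nothing
```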
def set_up_command_line_arguments():
......@@ -1143,6 +1104,28 @@ def decode_astropy_quantity(dct):
return dct
def recursively_decode_bilby_json(dct):
"""
Recursively call `decode_bilby_json`
Parameters
----------
dct: dict
The dictionary to decode
Returns
-------
dct: dict
The original dictionary with all elements decoded where possible
"""
dct = decode_bilby_json(dct)
if isinstance(dct, dict):
for key in dct:
if isinstance(dct[key], dict):
dct[key] = recursively_decode_bilby_json(dct[key])
return dct
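Usage of the new helper, as suggested by the `meta_data` setter earlier in this diff (a hedged sketch assuming a bilby release that ships this function): plain dictionaries pass through unchanged, while bilby-JSON-encoded entries at any depth are reconstructed.

```python
from bilby.core.utils import recursively_decode_bilby_json

nested = {"run": {"settings": {"nlive": 1000}}}
# No encoded entries here, so the structure is returned unchanged;
# encoded objects nested at any level would be decoded in place.
print(recursively_decode_bilby_json(nested))
```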
def reflect(u):
"""
Iteratively reflect a number until it is contained in [0, 1].
......@@ -1208,6 +1191,7 @@ def latex_plot_format(func):
"""
@functools.wraps(func)
def wrapper_decorator(*args, **kwargs):
import matplotlib.pyplot as plt
from matplotlib import rcParams
if "BILBY_STYLE" in kwargs:
......@@ -1341,10 +1325,14 @@ def decode_from_hdf5(item):
elif isinstance(item, (bytes, bytearray)):
output = item.decode()
elif isinstance(item, np.ndarray):
if "|S" in str(item.dtype) or isinstance(item[0], bytes):
if item.size == 0:
output = item
elif "|S" in str(item.dtype) or isinstance(item[0], bytes):
output = [it.decode() for it in item]
else:
output = item
elif isinstance(item, np.bool_):
output = bool(item)
else:
output = item
return output
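The two new guards above address concrete NumPy edge cases (an interpretation, not stated in the diff): `item[0]` on an empty array raises an IndexError, which the `item.size == 0` check now short-circuits, and h5py returns `np.bool_` scalars that are coerced to the builtin `bool` so identity checks behave as expected. The underlying behaviour:

```python
import numpy as np

empty = np.array([], dtype="|S8")
print(empty.size)           # 0 -- indexing empty[0] would raise IndexError

flag = np.bool_(True)
print(flag is True)         # False: np.bool_ is not the builtin bool
print(bool(flag) is True)   # True after coercion
```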
......@@ -1398,9 +1386,11 @@ def encode_for_hdf5(item):
elif isinstance(item, pd.Series):
output = item.to_dict()
elif inspect.isfunction(item) or inspect.isclass(item):
output = dict(__module__=item.__module__, __name__=item.__name__)
output = dict(__module__=item.__module__, __name__=item.__name__, __class__=True)
elif isinstance(item, dict):
output = item.copy()
elif isinstance(item, tuple):
output = {str(ii): elem for ii, elem in enumerate(item)}
else:
raise ValueError(f'Cannot save {type(item)} type')
return output
......