Commit 4715d680 authored by MoritzThomasHuebner

Merge remote-tracking branch 'origin/master' into 207-pep8-imports

# Conflicts:
#	bilby/core/prior.py
#	bilby/core/result.py
#	bilby/core/sampler/cpnest.py
#	bilby/core/sampler/emcee.py
#	bilby/core/sampler/pymc3.py
#	bilby/gw/prior.py
parents ad176934 d2797b95
Pipeline #35721 failed in 41 seconds
# This script is an edited version of the example found at
# https://git.ligo.org/lscsoft/example-ci-project/blob/python/.gitlab-ci.yml
# Each 0th-indendation level is a job that will be run within GitLab CI
# Each 0th-indentation level is a job that will be run within GitLab CI
# The only exceptions are a short list of reserved keywords
#
# https://docs.gitlab.com/ee/ci/yaml/#gitlab-ci-yml
@@ -18,18 +18,22 @@ python-2:
stage: test
image: bilbydev/test-suite-py2
before_script:
# Source the .bashrc for MultiNest
- source /root/.bashrc
# Install the dependencies specified in the Pipfile
- pipenv install --two --python=/opt/conda/bin/python2 --system --deploy
script:
- python setup.py install
# Run tests without finding coverage
- pytest
- pytest --ignore=test/utils_py3_test.py
# test example on python 3
python-3:
stage: test
image: bilbydev/test-suite-py3
before_script:
# Source the .bashrc for MultiNest
- source /root/.bashrc
# Install the dependencies specified in the Pipfile
- pipenv install --three --python=/opt/conda/bin/python --system --deploy
script:
@@ -63,7 +67,7 @@ pages:
script:
- mkdir public/
- mv htmlcov/ public/
- mv /builds/Monash/bilby/coverage_badge.svg public/
- mv coverage_badge.svg public/
- mv docs/_build/html/* public/
artifacts:
paths:
@@ -3,6 +3,12 @@
## Unreleased
Changes currently on master, but not under a tag.
- Renamed `PriorSet` to `PriorDict`
- Renamed `BBHPriorSet` to `BBHPriorDict`
- Renamed `BNSPriorSet` to `BNSPriorDict`
- Renamed `CalibrationPriorSet` to `CalibrationPriorDict`
- Fixed a bug which caused `Interferometer.detector_tensor` not to update when `latitude`, `longitude`, `xarm_azimuth`, `yarm_azimuth`, `xarm_tilt`, `yarm_tilt` were updated.
## [0.3.1] 2018-11-06
@@ -88,7 +94,7 @@ re-instantiate the Prior in most cases
### Added
- InterferometerStrainData now handles both time-domain and frequency-domain data
- Adds documentation on setting data (https://monash.docs.ligo.org/bilby/transient-gw-data.html)
- Adds documentation on setting data (https://lscsoft.docs.ligo.org/bilby/transient-gw-data.html)
- Checkpointing for `dynesty`: the sampling will be checkpointed every 10 minutes (approximately) and can be resumed.
- Add functionality to plot multiple results in a corner plot, see `bilby.core.result.plot_multiple()`.
- Likelihood evaluations are now saved along with the posteriors.
@@ -115,9 +121,8 @@ First `pip` installable version https://pypi.org/project/BILBY/ .
- Major effort to update all docstrings and add some documentation.
- Marginalized likelihoods.
- Examples of searches for gravitational waves from a Supernova and using a sine-Gaussian.
- A `PriorSet` to handle sets of priors and allows reading in from a standardised prior file (see https://monash.docs.ligo.org/bilby/prior.html).
- A `PriorSet` to handle sets of priors and allows reading in from a standardised prior file (see https://lscsoft.docs.ligo.org/bilby/prior.html).
- A standardised file for storing detector data.
### Removed
- All chainconsumer dependency as this was causing issues.
@@ -9,7 +9,7 @@ Getting started
All the code lives in a git repository (for a short introduction to git, see
[this tutorial](https://docs.gitlab.com/ee/gitlab-basics/start-using-git.html))
which is hosted here: https://git.ligo.org/Monash/bilby. If you haven't
which is hosted here: https://git.ligo.org/lscsoft/bilby. If you haven't
already, you should
[fork](https://docs.gitlab.com/ee/gitlab-basics/fork-project.html) the repository
and clone your fork, i.e., on your local machine run
@@ -28,7 +28,7 @@ $ python setup.py develop
which will install `bilby`; because we used `develop` instead of `install`,
when you change the code your installed version will automatically be updated.
---
---
#### Removing previously installed versions
@@ -62,7 +62,7 @@ you've found a bug or would like a feature it doesn't have, we want to
hear from you!
Our main forum for discussion is the project's [GitLab issue
tracker](https://git.ligo.org/Monash/bilby/issues). This is the right
tracker](https://git.ligo.org/lscsoft/bilby/issues). This is the right
place to start a discussion of any of the above or most any other
topic concerning the project.
@@ -157,84 +157,10 @@ you should:
## Code overview
In this section, we'll give an overview of how the code is structured. This is intended to help orient users and make it easier to contribute. The layout is intended to define the logic of the code and new merge requests should aim to fit within this logic (unless there is a good argument to change it). For example, code which adds a new sampler should not affect the gravitational-wave specific parts of the code. Note that this document is not programmatically generated and so may get out of date with time. If you notice something wrong, please open an issue.
### Top level
### Bilby Code Layout
At the top level, the code is split into three modules containing the core, gravitational-wave specific, and hyper-parameter specific functionality. New changes should always respect this structure.
```mermaid
graph TD
bilby[bilby] --> core[.core]
bilby --> gw[.gw]
bilby --> hyper[.hyper]
```
### Core
The core module contains the core inference logic - methods to define a prior, likelihood, and run Bayesian inference.
```mermaid
graph TD
core[bilby.core] --> prior[.prior]
core --> likelihood[.likelihood]
core --> result[.result]
core --> sampler[.sampler]
core --> utils[.utils]
prior --> Prior[".Prior()"]
style Prior fill:#D7BDE2
prior --> Normal[".Normal(Prior)"]
style Normal fill:#D7BDE2
prior --> Uniform[".Uniform(Prior)"]
style Uniform fill:#D7BDE2
likelihood --> Likelihood(".Likelihood()")
style Likelihood fill:#D7BDE2
likelihood --> GaussianLikelihood(".GaussianLikelihood()")
style GaussianLikelihood fill:#D7BDE2
result --> Result(".Result()")
style Result fill:#D7BDE2
sampler --> run_sampler("run_sampler()")
style run_sampler fill:#5499C7
```
![bilby overview](docs/images/bilby_layout.png)
Note this layout is not comprehensive; for example, only a few example "Priors" are shown.
### Gravitational-wave specific
```mermaid
graph TD
gw[bilby.gw] --> gw_likelihood[.likelihood]
gw --> gw_prior[.prior]
gw --> gw_source[.source]
gw --> gw_waveform_generator[.waveform_generator]
gw --> gw_utils[.utils]
gw --> gw_conversion[.conversion]
gw_likelihood --> GravitationalWaveTransient(".GravitationalWaveTransient()")
style GravitationalWaveTransient fill:#D7BDE2
gw_source --> lal_bbh(".lal_binary_black_hole()")
style lal_bbh fill:#5499C7
gw_waveform_generator--> WaveformGenerator(".WaveformGenerator()")
style WaveformGenerator fill:#D7BDE2
```
### Legend of flow diagrams
To show a module we use a grey box. A nested module is indicated as such
```mermaid
graph LR
  module[module] --> A[.other_module <br/>]
```
i.e., in python, `module.other_module`. Meanwhile, a class, `MyClass` living in module `my_module` is indicated by
```mermaid
graph LR
my_module[my_module] --> Class[".MyClass(parent)"]
style Class fill:#D7BDE2
```
where the `(parent)` indicates the parent class from which this class inherits. Finally, we define a function living in a module as
```mermaid
graph LR
my_module[my_module] --> function[".function()"]
style function fill:#5499C7
```
\ No newline at end of file
@@ -6,20 +6,20 @@ Bilby
Fulfilling all your Bayesian inference dreams.
- `Installation
instructions <https://monash.docs.ligo.org/bilby/installation.html>`__
- `Contributing <https://git.ligo.org/Monash/bilby/blob/master/CONTRIBUTING.md>`__
- `Documentation <https://monash.docs.ligo.org/bilby/index.html>`__
- `Issue tracker <https://git.ligo.org/Monash/bilby/issues>`__
instructions <https://lscsoft.docs.ligo.org/bilby/installation.html>`__
- `Contributing <https://git.ligo.org/lscsoft/bilby/blob/master/CONTRIBUTING.md>`__
- `Documentation <https://lscsoft.docs.ligo.org/bilby/index.html>`__
- `Issue tracker <https://git.ligo.org/lscsoft/bilby/issues>`__
We encourage you to contribute to the development via a merge request. For
help in creating a merge request, see `this page
<https://docs.gitlab.com/ee/gitlab-basics/add-merge-request.html>`__ or contact
us directly.
.. |pipeline status| image:: https://git.ligo.org/Monash/bilby/badges/master/pipeline.svg
:target: https://git.ligo.org/Monash/bilby/commits/master
.. |coverage report| image:: https://monash.docs.ligo.org/bilby/coverage_badge.svg
:target: https://monash.docs.ligo.org/bilby/htmlcov/
.. |pipeline status| image:: https://git.ligo.org/lscsoft/bilby/badges/master/pipeline.svg
:target: https://git.ligo.org/lscsoft/bilby/commits/master
.. |coverage report| image:: https://lscsoft.docs.ligo.org/bilby/coverage_badge.svg
:target: https://lscsoft.docs.ligo.org/bilby/htmlcov/
.. |pypi| image:: https://badge.fury.io/py/bilby.svg
:target: https://pypi.org/project/bilby/
.. |version| image:: https://img.shields.io/pypi/pyversions/bilby.svg
@@ -9,9 +9,9 @@ estimation. It is primarily designed and built for inference of compact
binary coalescence events in interferometric data, but it can also be used for
more general problems.
The code, and many examples are hosted at https://git.ligo.org/Monash/bilby.
The code, and many examples are hosted at https://git.ligo.org/lscsoft/bilby.
For installation instructions see
https://monash.docs.ligo.org/bilby/installation.html.
https://lscsoft.docs.ligo.org/bilby/installation.html.
"""
@@ -23,3 +23,5 @@ from . import core, gw, hyper
from .core import utils, likelihood, prior, result, sampler
from .core.sampler import run_sampler
from .core.likelihood import Likelihood
__version__ = utils.get_version_information()
@@ -13,11 +13,11 @@ from scipy.special import erf, erfinv
# Keep import bilby statement, it is necessary for some eval() statements
import bilby # noqa
from . import utils
from .utils import logger
from .utils import logger, infer_args_from_method
class PriorSet(OrderedDict):
class PriorDict(OrderedDict):
def __init__(self, dictionary=None, filename=None):
""" A set of priors
@@ -38,7 +38,7 @@ class PriorSet(OrderedDict):
elif type(filename) is str:
self.from_file(filename)
elif dictionary is not None:
raise ValueError("PriorSet input dictionary not understood")
raise ValueError("PriorDict input dictionary not understood")
def to_file(self, outdir, label):
""" Write the prior distribution to file.
@@ -189,20 +189,22 @@ class PriorSet(OrderedDict):
logger.debug('{} not a known prior.'.format(key))
return samples
def prob(self, sample):
def prob(self, sample, **kwargs):
"""
Parameters
----------
sample: dict
Dictionary of the samples of which we want to have the probability of
kwargs:
The keyword arguments are passed directly to `np.product`
Returns
-------
float: Joint probability of all individual sample probabilities
"""
return np.product([self[key].prob(sample[key]) for key in sample])
return np.product([self[key].prob(sample[key]) for key in sample], **kwargs)
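The change above lets callers forward keyword arguments (notably `axis`) to the product over per-parameter probabilities, so `prob` can evaluate many samples at once. A minimal sketch of the idea, using a hypothetical hard-edged uniform prior and `np.prod` (the `np.product` alias used in the diff was later removed from NumPy):

```python
import numpy as np

def uniform_prob(value):
    """Hypothetical stand-in prior: pdf = 1 on [0, 1], 0 outside."""
    value = np.asarray(value)
    return ((value >= 0) & (value <= 1)).astype(float)

priors = {"a": uniform_prob, "b": uniform_prob}

def prob(sample, **kwargs):
    # kwargs (e.g. axis=0) are forwarded to the product, mirroring the change above
    return np.prod([priors[key](sample[key]) for key in sample], **kwargs)

# Scalar sample: joint probability collapses to a single float
print(prob({"a": 0.5, "b": 0.2}))  # 1.0
# Array-valued samples: axis=0 multiplies across parameters, one value per sample
print(prob({"a": np.array([0.5, 2.0]), "b": np.array([0.2, 0.2])}, axis=0))  # [1. 0.]
```

Without `axis=0`, the array case would multiply every element together into one number, which is why exposing the kwarg matters.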
def ln_prob(self, sample):
"""
@@ -240,6 +242,14 @@ class PriorSet(OrderedDict):
return False
class PriorSet(PriorDict):
def __init__(self, dictionary=None, filename=None):
""" DEPRECATED: USE PriorDict INSTEAD"""
logger.warning("The name 'PriorSet' is deprecated use 'PriorDict' instead")
super(PriorSet, self).__init__(dictionary, filename)
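The `PriorSet` shim above keeps old user code working while steering people toward the new name. The pattern — subclass the renamed class, log a warning, delegate — can be sketched generically (the classes here are illustrative stand-ins, not bilby's real implementations):

```python
import logging

logger = logging.getLogger("demo")

class PriorDict(dict):
    """Stand-in for the renamed class: a dict mapping parameter names to priors."""
    def __init__(self, dictionary=None):
        super().__init__(dictionary or {})

class PriorSet(PriorDict):
    """Deprecated alias: warns, then delegates everything to PriorDict."""
    def __init__(self, dictionary=None):
        logger.warning("The name 'PriorSet' is deprecated, use 'PriorDict' instead")
        super().__init__(dictionary)

# Old code keeps working, and isinstance checks against the new name succeed
old_style = PriorSet({"mass_1": "Uniform(5, 50)"})
assert isinstance(old_style, PriorDict)
```

Because the alias inherits rather than wraps, any `isinstance(x, PriorDict)` check elsewhere in the codebase (such as in `run_sampler`) accepts old-style objects for free.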
def create_default_prior(name, default_priors_file=None):
"""Make a default prior for a parameter with a known name.
@@ -263,7 +273,7 @@ def create_default_prior(name, default_priors_file=None):
"No prior file given.")
prior = None
else:
default_priors = PriorSet(filename=default_priors_file)
default_priors = PriorDict(filename=default_priors_file)
if name in default_priors.keys():
prior = default_priors[name]
else:
@@ -426,8 +436,7 @@ class Prior(object):
str: A string representation of this instance
"""
subclass_args = inspect.getargspec(self.__init__).args
subclass_args.pop(0)
subclass_args = infer_args_from_method(self.__init__)
prior_name = self.__class__.__name__
property_names = [p for p in dir(self.__class__) if isinstance(getattr(self.__class__, p), property)]
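The diff replaces `inspect.getargspec` (long deprecated, and removed in Python 3.11) plus a manual `pop(0)` of `self` with a helper named `infer_args_from_method`. A plausible sketch of such a helper using `inspect.signature` — bilby's actual implementation may differ:

```python
import inspect

def infer_args_from_method(method):
    """Return the argument names of a method, excluding 'self' (illustrative sketch)."""
    params = inspect.signature(method).parameters
    return [name for name, p in params.items()
            if p.kind in (p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY) and name != "self"]

class Uniform:
    def __init__(self, minimum, maximum, name=None):
        self.minimum, self.maximum, self.name = minimum, maximum, name

print(infer_args_from_method(Uniform.__init__))  # ['minimum', 'maximum', 'name']
```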
import os
from distutils.version import LooseVersion
from collections import OrderedDict
from collections import OrderedDict, namedtuple
import numpy as np
import deepdish
@@ -11,7 +11,7 @@ import matplotlib.pyplot as plt
from . import utils
from .utils import logger, infer_parameters_from_function
from .prior import PriorSet, DeltaFunction
from .prior import PriorDict, DeltaFunction
def result_file_name(outdir, label):
@@ -82,7 +82,7 @@ class Result(dict):
setattr(self, key, val)
if getattr(self, 'priors', None) is not None:
self.priors = PriorSet(self.priors)
self.priors = PriorDict(self.priors)
def __add__(self, other):
matches = ['sampler', 'search_parameter_keys']
@@ -286,22 +286,112 @@ class Result(dict):
Returns
-------
string: str
A string of latex-formatted text of the mean and 1-sigma quantiles
summary: namedtuple
An object with attributes median, minus, plus and string
"""
summary = namedtuple('summary', ['median', 'lower', 'upper', 'string'])
if len(quantiles) != 2:
raise ValueError("quantiles must be of length 2")
quants_to_compute = np.array([quantiles[0], 0.5, quantiles[1]])
quants = np.percentile(self.posterior[key], quants_to_compute * 100)
median = quants[1]
upper = quants[2] - median
lower = median - quants[0]
summary.median = quants[1]
summary.plus = quants[2] - summary.median
summary.minus = summary.median - quants[0]
fmt = "{{0:{0}}}".format(fmt).format
string = r"${{{0}}}_{{-{1}}}^{{+{2}}}$"
return string.format(fmt(median), fmt(lower), fmt(upper))
string_template = r"${{{0}}}_{{-{1}}}^{{+{2}}}$"
summary.string = string_template.format(
fmt(summary.median), fmt(summary.minus), fmt(summary.plus))
return summary
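Note that the method above assigns `median`, `plus` and `minus` onto the namedtuple *class* rather than constructing an instance, which works but leaves shared class-level state. A self-contained sketch of the same computation that returns a proper namedtuple instance, with field names matching the attributes the method sets:

```python
import numpy as np
from collections import namedtuple

Summary = namedtuple("Summary", ["median", "minus", "plus", "string"])

def median_and_error_bar(samples, quantiles=(0.16, 0.84), fmt=".2f"):
    """Median with asymmetric error bars and a LaTeX-formatted string."""
    lo, mid, hi = np.percentile(samples, [100 * quantiles[0], 50, 100 * quantiles[1]])
    fmt_fn = "{{0:{0}}}".format(fmt).format
    string = r"${{{0}}}_{{-{1}}}^{{+{2}}}$".format(
        fmt_fn(mid), fmt_fn(mid - lo), fmt_fn(hi - mid))
    return Summary(median=mid, minus=mid - lo, plus=hi - mid, string=string)

s = median_and_error_bar(np.linspace(0, 1, 101))
print(s.string)  # ${0.50}_{-0.34}^{+0.34}$
```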
def plot_marginals(self, parameters=None, priors=None, titles=True,
file_base_name=None, bins=50, label_fontsize=16,
title_fontsize=16, quantiles=[0.16, 0.84], dpi=300):
""" Plot 1D marginal distributions
Parameters
----------
parameters: (list, dict), optional
If given, either a list of the parameter names to include, or a
dictionary of parameter names and their "true" values to plot.
priors: {bool (False), bilby.core.prior.PriorSet}
If true, add the stored prior probability density functions to the
one-dimensional marginal distributions. If instead a PriorSet
is provided, this will be plotted.
titles: bool
If true, add 1D titles of the median and (by default 1-sigma)
error bars. To change the error bars, pass in the quantiles kwarg.
See method `get_one_dimensional_median_and_error_bar` for further
details). If `quantiles=None` is passed in, no title is added.
file_base_name: str, optional
If given, the base file name to use (by default `outdir/label_` is
used)
bins: int
The number of histogram bins
label_fontsize, title_fontsize: int
The fontsizes for the labels and titles
quantiles: list
A length-2 list of the lower and upper-quantiles to calculate
the errors bars for.
dpi: int
Dots per inch resolution of the plot
Returns
-------
figures: dictionary
A dictionary of the matplotlib figures
"""
if isinstance(parameters, dict):
plot_parameter_keys = list(parameters.keys())
truths = list(parameters.values())
elif parameters is None:
plot_parameter_keys = self.search_parameter_keys
truths = None
else:
plot_parameter_keys = list(parameters)
truths = None
labels = self.get_latex_labels_from_parameter_keys(plot_parameter_keys)
if file_base_name is None:
file_base_name = '{}/{}_'.format(self.outdir, self.label)
if priors is True:
priors = getattr(self, 'priors', False)
elif isinstance(priors, (dict)) or priors in [False, None]:
pass
else:
raise ValueError('Input priors={} not understood'.format(priors))
figures = dict()
for i, key in enumerate(plot_parameter_keys):
fig, ax = plt.subplots()
ax.hist(self.posterior[key].values, bins=bins, density=True,
histtype='step')
ax.set_xlabel(labels[i], fontsize=label_fontsize)
if truths is not None:
ax.axvline(truths[i], ls='--', color='orange')
summary = self.get_one_dimensional_median_and_error_bar(
key, quantiles=quantiles)
ax.axvline(summary.median - summary.minus, ls='--', color='C0')
ax.axvline(summary.median + summary.plus, ls='--', color='C0')
if titles:
ax.set_title(summary.string, fontsize=title_fontsize)
if isinstance(priors, dict):
theta = np.linspace(ax.get_xlim()[0], ax.get_xlim()[1], 300)
ax.plot(theta, priors[key].prob(theta), color='C2')
fig.tight_layout()
fig.savefig(file_base_name + key)
figures[key] = fig
return figures
def plot_corner(self, parameters=None, priors=None, titles=True, save=True,
filename=None, dpi=300, **kwargs):
@@ -312,9 +402,9 @@ class Result(dict):
parameters: (list, dict), optional
If given, either a list of the parameter names to include, or a
dictionary of parameter names and their "true" values to plot.
priors: {bool (False), bilby.core.prior.PriorSet}
priors: {bool (False), bilby.core.prior.PriorDict}
If true, add the stored prior probability density functions to the
one-dimensional marginal distributions. If instead a PriorSet
one-dimensional marginal distributions. If instead a PriorDict
is provided, this will be plotted.
titles: bool
If true, add 1D titles of the median and (by default 1-sigma)
@@ -419,7 +509,7 @@ class Result(dict):
ax = axes[i + i * len(plot_parameter_keys)]
if ax.title.get_text() == '':
ax.set_title(self.get_one_dimensional_median_and_error_bar(
par, quantiles=kwargs['quantiles']),
par, quantiles=kwargs['quantiles']).string,
**kwargs['title_kwargs'])
# Add priors to the 1D plots
@@ -578,7 +668,7 @@ class Result(dict):
Parameters
----------
priors: dict, PriorSet
priors: dict, PriorDict
Prior distributions
"""
self.prior_values = pd.DataFrame()
@@ -4,7 +4,7 @@ import datetime
from collections import OrderedDict
from ..utils import command_line_args, logger
from ..prior import PriorSet
from ..prior import PriorDict
from .base_sampler import Sampler
from .cpnest import Cpnest
@@ -47,8 +47,8 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
----------
likelihood: `bilby.Likelihood`
A `Likelihood` instance
priors: `bilby.PriorSet`
A PriorSet/dictionary of the priors for each parameter - missing
priors: `bilby.PriorDict`
A PriorDict/dictionary of the priors for each parameter - missing
parameters will use default priors, if None, all priors will be default
label: str
Name for the run, used in output files
@@ -92,6 +92,8 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
if clean:
command_line_args.clean = clean
if command_line_args.clean:
kwargs['resume'] = False
from . import implemented_samplers
@@ -99,8 +101,8 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
priors = dict()
if type(priors) in [dict, OrderedDict]:
priors = PriorSet(priors)
elif isinstance(priors, PriorSet):
priors = PriorDict(priors)
elif isinstance(priors, PriorDict):
pass
else:
raise ValueError("Input priors not understood")
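The input handling above normalises whatever the caller passes as `priors`: `None` becomes an empty set of priors, a plain `dict`/`OrderedDict` is promoted to `PriorDict`, an existing `PriorDict` passes through, and anything else is rejected. A runnable sketch of that branching with a minimal stand-in class:

```python
from collections import OrderedDict

class PriorDict(dict):
    """Minimal stand-in for bilby.core.prior.PriorDict."""

def normalise_priors(priors=None):
    # Mirrors run_sampler's branching: type() is used for the promotion step so
    # that PriorDict subclasses fall through to the isinstance branch instead.
    if priors is None:
        priors = dict()
    if type(priors) in [dict, OrderedDict]:
        priors = PriorDict(priors)
    elif isinstance(priors, PriorDict):
        pass
    else:
        raise ValueError("Input priors not understood")
    return priors

assert isinstance(normalise_priors({"x": 1}), PriorDict)
assert isinstance(normalise_priors(None), PriorDict)  # empty priors
```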
from __future__ import absolute_import
import datetime
import numpy as np
from pandas import DataFrame
from ..utils import logger, command_line_args
from ..prior import Prior, PriorSet
from ..prior import Prior, PriorDict
from ..result import Result, read_in_result
@@ -15,7 +16,7 @@ class Sampler(object):
----------
likelihood: likelihood.Likelihood
A object with a log_l method
priors: bilby.core.prior.PriorSet, dict
priors: bilby.core.prior.PriorDict, dict
Priors to be used in the search.
This has attributes for each parameter to be sampled.
external_sampler: str, Sampler, optional
@@ -36,7 +37,7 @@ class Sampler(object):
-------
likelihood: likelihood.Likelihood
A object with a log_l method
priors: bilby.core.prior.PriorSet
priors: bilby.core.prior.PriorDict
Priors to be used in the search.
This has attributes for each parameter to be sampled.
external_sampler: Module
@@ -75,10 +76,10 @@ class Sampler(object):
self, likelihood, priors, outdir='outdir', label='label',
use_ratio=False, plot=False, skip_import_verification=False, **kwargs):
self.likelihood = likelihood
if isinstance(priors, PriorSet):
if isinstance(priors, PriorDict):
self.priors = priors
else:
self.priors = PriorSet(priors)
self.priors = PriorDict(priors)
self.label = label
self.outdir = outdir
self.use_ratio = use_ratio
@@ -441,7 +442,7 @@ class MCMCSampler(Sampler):
def print_nburn_logging_info(self):
""" Prints logging info as to how nburn was calculated """
if type(self.kwargs['nburn']) in [float, int]:
if type(self.nburn) in [float, int]:
logger.info("Discarding {} steps for burn-in".format(self.nburn))
elif self.result.max_autocorrelation_time is None:
logger.info("Autocorrelation time not calculated, discarding {} "
@@ -4,7 +4,7 @@ import numpy as np
from pandas import DataFrame
from .base_sampler import NestedSampler
from ..utils import logger
from ..utils import logger, check_directory_exists_and_if_not_mkdir
try:
from cpnest import model as cpmodel, CPNest
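The newly imported `check_directory_exists_and_if_not_mkdir` presumably creates the sampler's output directory on demand (e.g. the `{outdir}/cpnest_{label}/` path mentioned in the class docstring). A sketch of what such a helper typically does — bilby's real implementation may differ:

```python
import os
import tempfile

def check_directory_exists_and_if_not_mkdir(directory):
    """Create `directory` (and any parents) if it does not already exist.

    Illustrative sketch of the imported helper; bilby's version may differ.
    """
    if not os.path.isdir(directory):
        os.makedirs(directory)

# Safe to call whether or not the directory exists
base = tempfile.mkdtemp()
target = os.path.join(base, "cpnest_label")
check_directory_exists_and_if_not_mkdir(target)
check_directory_exists_and_if_not_mkdir(target)  # second call is a no-op
assert os.path.isdir(target)
```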
@@ -23,27 +23,33 @@ class Cpnest(NestedSampler):
Keyword Arguments
-----------------
npoints: int
nlive: int
The number of live points, note this can also equivalently be given as
one of [nlive, nlives, n_live_points]
one of [npoints, nlives, n_live_points]
seed: int (1234)
Initialised random seed
Nthreads: int, (1)
nthreads: int, (1)
Number of threads to use
maxmcmc: int (1000)
The maximum number of MCMC steps to take
verbose: Bool
verbose: Bool (True)
If true, print information about the convergence during sampling
resume: Bool (False)
Whether or not to resume from a previous run
output: str
Where to write the CPNest output, by default this is
{self.outdir}/cpnest_{self.label}/
"""
default_kwargs = dict(verbose=1, Nthreads=1, Nlive=500, maxmcmc=1000,
Poolsize=100, seed=None, balance_samplers=True)
default_kwargs = dict(verbose=1, nthreads=1, nlive=500, maxmcmc=1000,
seed=None, poolsize=100, nhamiltonian=0, resume=False,