Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (724)
Showing with 4820 additions and 1055 deletions
......@@ -13,29 +13,44 @@ stages:
- test
- deploy
# test example on python 2
python-2:
.test-python: &test-python
stage: test
image: bilbydev/test-suite-py2
image: python
before_script:
# Source the .bashrc for MultiNest
- source /root/.bashrc
# Install the dependencies specified in the Pipfile
- pipenv install --two --python=/opt/conda/bin/python2 --system --deploy
# this is required because pytables doesn't use a wheel on py37
- apt-get -yqq update
- apt-get -yqq install libhdf5-dev
script:
- python -m pip install .
- python -c "import bilby"
- python -c "import bilby.core"
- python -c "import bilby.gw"
- python -c "import bilby.hyper"
- python -c "import cli_bilby"
# test basic setup on python2
basic-2.7:
<<: *test-python
image: python:2.7
# test basic setup on python3
basic-3.7:
<<: *test-python
image: python:3.7
# test example on python 2
python-2.7:
stage: test
image: bilbydev/bilby-test-suite-python27
script:
- python setup.py install
# Run tests without finding coverage
- pytest --ignore=test/utils_py3_test.py
# test example on python 3
python-3:
python-3.7:
stage: test
image: bilbydev/test-suite-py3
before_script:
# Source the .bashrc for MultiNest
- source /root/.bashrc
# Install the dependencies specified in the Pipfile
- pipenv install --three --python=/opt/conda/bin/python --system --deploy
image: bilbydev/bilby-test-suite-python37
script:
- python setup.py install
......@@ -49,7 +64,6 @@ python-3:
# Make the documentation
- cd docs
- conda install -y make
- make clean
- make html
......@@ -59,11 +73,25 @@ python-3:
- coverage_badge.svg
- docs/_build/html/
# Tests run at a fixed schedule rather than on push
scheduled-python-3.7:
stage: test
image: bilbydev/bilby-test-suite-python37
only:
- schedules
script:
- python setup.py install
# Run tests which are only done on schedule
- pytest test/example_test.py
- pytest test/gw_example_test.py
- pytest test/sample_from_the_prior_test.py
pages:
stage: deploy
dependencies:
- python-3
- python-2
- python-3.7
- python-2.7
script:
- mkdir public/
- mv htmlcov/ public/
......
......@@ -2,12 +2,260 @@
## Unreleased
## [0.5.7] 2019-09-19
### Added
- bilby_convert_resume file CL tool for converting dynesty resume files into preresults !599
### Changed
- Change the constants (Msun, REarth, etc.) to match the values in LAL !597
- Change the Greenwich Mean Sidereal Time conversion to match the method in LAL !597
- Update dynesty requirement to 1.0.0
- Improve integration of bounds with dynesty !589
- Fixed issue with mutable default argument !596
- Allow the use of n_effective in dynesty !592
- Allow the use of n_periodic in cpnest !591
- Fix bug in the dt calculation
## [0.5.6] 2019-09-04
### Changed
- Deprecation of the old helper functions (e.g., fetch open data)
- Improvements to the documentation
- Fix a bug in the dt calculations of the GW likelihood
- Various small bug fixes
### Added
- LAL version information in the meta data
## [0.5.5] 2019-08-22
### Added
- Reading/writing of the prior in a JSON format
- Checks for marginalization flags
### Changed
- Improvements to the examples: reorganisation and fixing bugs
- Fixed bug with scipy>=1.3.0 and spline
- Removed the sqrt(2) normalisation from the scalar longitudinal mode
- Improved PSD filename reading (no longer requires "/" to read local files)
- Fix bug in emcee chains
- Added a try/except clause for building the lookup table
## [0.5.4] 2019-07-30
### Added
- Analytic CDFs
- Reading/writing of grid results objects
### Changed
- Dynesty default settings changed: by default, now uses 30xndim walks. This was
shown (!564) to provide better convergence for the long-duration high-spin tests.
- Fix bug in combined runs log evidence calculations
- Fixed bugs in the nightly tests
## [0.5.3] 2019-07-23
### Added
- Jitter time marginalization. For the time-marginalized likelihood, a jitter
is used to ensure proper sampling without artifacts (!534)
- Zero likelihood mode for testing and a zero-likelihood test in the nightly C.I. (!542)
- 15D analytic Gaussian test example (!547)
### Changed
- Dynesty version minimum set to 0.9.7. Changes to this sampler vastly improve
performance (!537)
- Improvements to waveform plotting (!534)
- Fixed bugs in the prior loading and added tests (!531 !539 !553 !515)
- Fixed issue in 1D CDF prior plots (!538)
- ROQ weights stored as npz rather than json (memory-performance improvement) (!536)
- Distance marginalisation now uses cubic rather than linear interpolation. Improves
distance/inclination posteriors for high SNR systems. (!552)
- Inputs to hyperpe modified to allow for more flexible sampling prior specification
and improve efficiency. (!545)
- Fix definition of some spin phase parameters (!556).
## [0.5.2] 2019-06-18
### Added
- Method to read data in using gwpy get (and associated example)
- Adds a catch for broken resume files with improved reporting
### Changed
- Updated and fixed bugs in examples
- Resolve sampling time persistence for runs which are interrupted
- Improvements to the PP plot
- Speed up of the distance calculation
- Fixed a bug in the interference of bilby command line arguments with user-specified command lines
- Generalised the consistency checks for ResultLists
- Fixes to some tests
- Makes the parameter conversion a static method rather than a lambda expression
## [0.5.1] 2019-06-05
### Added
- Option for the GraceDB service URL
- Precessing BNS
- Functionality to make a waveform plot
### Changed
- Changes to ROQ weight generation: finer time-steps and fixed a bug in the time definition
- Fixed typo "CompactBinaryCoalesnce" -> "CompactBinaryCoalescence" (old class now has deprecation warning)
- Fixed a minor bug in the frequency mask caching
- Minor refactoring of the GWT likelihood and detector tests
- Initial samples in dynesty now generated from the constrained prior
## [0.5.0] 2019-05-08
### Added
- A plot_skymap method to the CBCResult object based on ligo.skymap
- A plot_calibration_posterior method to the CBCResult object
- Method to merge results
### Changed
- Significant refactoring of the detector module: this should be backward compatible. This work was done to break the large detector.py file into smaller, more manageable chunks.
- The `periodic_boundary` option to the prior classes has been changed to `boundary`.
*This breaks backward compatibility*.
The options to `boundary` are `{'periodic', 'reflective', None}`.
Periodic boundaries are supported as before.
Reflective boundaries are supported in `dynesty` and `cpnest`; a short usage sketch follows this list.
- Minor speed improvements by caching intermediate steps
- Added state plotting for dynesty. Use `check_point_plot=True` in the `run_sampler`
function to create trace plots during the dynesty checkpoints
- Dynesty now prints the progress to STDOUT rather than STDERR
- `detector` module refactored into subpackage. Maintains backward compatibility.
- Specifying alternative frequency bounds for the ROQ now possible if the appropriate
`params.dat` file is passed.
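A minimal sketch of the new `boundary` keyword described above (the parameter names are illustrative):

    import numpy as np
    from bilby.core.prior import Uniform

    # 'boundary' replaces the old 'periodic_boundary' flag
    phase = Uniform(minimum=0, maximum=2 * np.pi, name='phase',
                    boundary='periodic')     # wraps around, as before
    tilt_1 = Uniform(minimum=0, maximum=np.pi, name='tilt_1',
                     boundary='reflective')  # honoured by dynesty and cpnest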
### Removed
- Obsolete (and potentially incorrect) plot_skymap methods from gw.utils
## [0.4.5] 2019-04-03
### Added
- Calibration method and plotting
- Multivariate Gaussian prior
- Bayesian model dimensionality calculator
- Dynamic dynesty (note: this is in an alpha stage)
- Waveform caching
### Changed
- Fixed bugs in the ROQ time resolution
- Fixed bugs in the gracedb wrapper-method
- Improvements to the pp-plot method
- Improved checkpointing for emcee/ptemcee
- Various performance-related improvements
## [0.4.4] 2019-04-03
### Added
- Infrastructure for custom jump proposals (cpnest-only)
- Evidence uncertainty estimate to cpnest
### Changed
- Bug fix to close figures after creation
- Improved the frequency-mask to entirely remove values outside the mask rather
than simply setting them to zero
- Fix problem with Prior prob and ln_prob if passing multiple samples
- Improved cpnest prior sampling
### Removed
-
## [0.4.3] 2019-03-21
### Added
- Constraint prior: in prior files you can now add option of a constraint based
on other parameters. Currently implements mass-constraints only.
- Grid likelihood: module to evaluate the likelihood on a grid
### Changed
- The GWTransientLikelihood no longer returns -inf for m2 > m1. It will evaluate
the likelihood as-is. To implement the constraint, use the Constraint priors.
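A minimal sketch of the Constraint-prior pattern described above (the conversion function and parameter names are illustrative, and the interface follows later bilby releases):

    from bilby.core.prior import Constraint, PriorDict, Uniform

    def convert_masses(parameters):
        # derive the constrained quantity from the sampled parameters
        parameters['mass_ratio'] = parameters['mass_2'] / parameters['mass_1']
        return parameters

    priors = PriorDict(conversion_function=convert_masses)
    priors['mass_1'] = Uniform(5, 100, name='mass_1')
    priors['mass_2'] = Uniform(5, 100, name='mass_2')
    # samples violating 0 < mass_ratio <= 1 are rejected, enforcing m2 <= m1
    priors['mass_ratio'] = Constraint(minimum=0, maximum=1, name='mass_ratio')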
## [0.4.2] 2019-03-21
### Added
- Fermi-Dirac and SymmetricLogUniform prior distributions
- Multivariate Gaussian example and BNS example
- Added standard GWOSC channel names
- Initial work on a fake sampler for testing
- Option for aligned spins
- Results file command line interface
- Full reconstruction of marginalized parameters
### Changed
- Fixed scheduled tests and simplify testing environment
- JSON result files can now be gzipped
- Reduced ROQ memory usage
- Default checkpointing in cpnest
## [0.4.1] 2019-03-04
### Added
- Support for JSON result files
- Before sampling a test is performed for redundant priors
### Changed
- Fixed the definition of iota to theta_jn. WARNING: this breaks backward compatibility. Previously, the CBC parameter iota was used in prior files, but was ill-defined. This fixes that, requiring all scripts to use `theta_jn` in place of `iota`
- Changed the default result file store to JSON rather than hdf5. Reading/writing of hdf5 files is still intact. The read_in_result function will still read in hdf5 files for backward compatibility
- Minor fixes to the way PSDs are calculated
- Fixed a bug in the CBC result where the frequency_domain model was pickled
- Use pickling to store the dynesty resume file and add a write-to-resume on SIGINT/SIGKILL
- Bug fix in ROQ likelihood
- Distance and phase marginalisation work with ROQ likelihood
- Cpnest now creates checkpoints (resume files) by default
### Removed
-
## [0.4.0] 2019-02-15
### Changed
- Changed the logic around redundancy tests in the `PriorDict` classes
- Fixed an accidental addition of astropy as a first-class dependency and added a check for missing dependencies to the C.I.
- Fixed a bug in the "create-your-own-time-domain-model" example
- Added citation guide to the readme
## [0.3.6] 2019-02-10
### Added
- Added the PolyChord sampler, which can be accessed by using `sampler='pypolychord'` in `run_sampler`
- `emcee` now writes all progress to disk and can resume from a previous run.
### Changed
- Cosmology generalised, users can now specify the cosmology used, default is astropy Planck15
- UniformComovingVolume prior *requires* the name to be one of "luminosity_distance", "comoving_distance", "redshift"
- Time/frequency array generation/conversion improved. We now impose that `duration` is an integer multiple of
`sampling_frequency`. Converting back and forth between time/frequency arrays now works for all valid arrays.
- Updates the bilby.core.utils constants to match those of Astropy v3.0.4
- Improve the load_data_from_cache_file method
### Removed
- Removed deprecated `PriorSet` classes. Use `PriorDict` instead.
## [0.3.5] 2019-01-25
### Added
- Reduced Order Quadrature likelihood
- PTMCMCSampler
- CBC result class
- Additional tutorials on using GraceDB and an expert's guide on running on events in open data
### Changed
- Updated repository information in Dockerfile for PyMultinest
## [0.3.4] 2019-01-10
### Changed
- Renamed the "basic_tutorial.py" example to "fast_tutorial.py" and created a
"standard_15d_cbc_tutorial.py"
- Renamed "prior" to "priors" in bilby.gw.likelihood.GravtitationalWaveTransient
for consistency with bilby.core. **WARNING**: This will break scripts which
use marginalization.
- Added `outdir` kwarg for plotting methods in `bilby.core.result.Result`. This makes plotting
into custom destinations easier.
- Fixed definition of matched_filter_snr, the interferometer method has become `ifo.inner_product`.
### Added
- log-likelihood evaluations for pymultinest
## [0.3.3] 2018-11-08
Changes currently on master, but not under a tag.
......@@ -20,6 +268,7 @@ Changes currently on master, but not under a tag.
- Added method to result to get injection recovery credible levels
- Added function to generate a pp-plot from many results to core/result.py
- Fixed a bug which caused `Interferometer.detector_tensor` not to update when `latitude`, `longitude`, `xarm_azimuth`, `yarm_azimuth`, `xarm_tilt`, `yarm_tilt` were updated.
- Added implementation of the ROQ likelihood. The basis needs to be specified by the user.
- Extracted time and frequency series behaviour from `WaveformGenerator` and `InterferometerStrainData` and moved it to `series.gw.CoupledTimeAndFrequencySeries`
### Changes
......
......@@ -33,7 +33,7 @@ when you change the code your installed version will automatically be updated.
#### Removing previously installed versions
If you have previously installed `bilby` using `pip` (or generally find buggy
behaviour). It may be worthwhile purging your system and reinstalling. To do
behaviour), it may be worthwhile purging your system and reinstalling. To do
this, first find out where the module is being imported from: from any
directory that is *not* the source directory, do the following
......@@ -170,7 +170,7 @@ interested party) please key these three things in mind
* If you open a discussion, be timely in responding to the submitter. Note, the
reverse does not need to apply.
* Keep your questions/comments focussed on the scope of the merge request. If
* Keep your questions/comments focused on the scope of the merge request. If
while reviewing the code you notice other things which could be improved, open
a new issue.
* Be supportive - merge requests represent a lot of hard work and effort and
......
......@@ -6,16 +6,18 @@ name = "pypi"
[packages]
future = "*"
corner = "*"
numpy = ">=1.9"
numpy = "==1.15.2"
ligotimegps = "<=1.2.3"
matplotlib = "<3"
scipy = ">=0.16"
pandas = "*"
deepdish = "*"
pandas = "==0.23.0"
deepdish = "==0.3.6"
mock = "*"
astropy = "<3"
gwpy = "*"
theano = "*"
lalsuite = "*"
dill = "*"
# cpnest = "*"
dynesty = "*"
......@@ -23,7 +25,6 @@ emcee = "*"
nestle = "*"
ptemcee = "*"
pymc3 = "*"
pymultinest = "*"
[requires]
......
This diff is collapsed.
|pipeline status| |coverage report| |pypi| |version|
|pipeline status| |coverage report| |pypi| |conda| |version|
=====
Bilby
=====
Fulfilling all your Bayesian inference dreams.
A user-friendly Bayesian inference library.
Fulfilling all your Bayesian dreams.
Online material to help you get started:
- `Installation
instructions <https://lscsoft.docs.ligo.org/bilby/installation.html>`__
- `Contributing <https://git.ligo.org/lscsoft/bilby/blob/master/CONTRIBUTING.md>`__
- `Installation instructions <https://lscsoft.docs.ligo.org/bilby/installation.html>`__
- `Documentation <https://lscsoft.docs.ligo.org/bilby/index.html>`__
- `Issue tracker <https://git.ligo.org/lscsoft/bilby/issues>`__
We encourage you to contribute to the development via a merge request. For
If you need help, find an issue, or just have a question/suggestion, you can
- Email our support desk: contact+lscsoft-bilby-1846-issue-@support.ligo.org
- Join our `Slack workspace <https://bilby-code.slack.com/>`__
- Ask questions (or search through other users' questions and answers) on `StackOverflow <https://stackoverflow.com/questions/tagged/bilby>`__ using the bilby tag
- For www.git.ligo.org users, submit issues directly through `the issue tracker <https://git.ligo.org/lscsoft/bilby/issues>`__
We encourage you to contribute to the development of bilby. This is done via a merge request. For
help in creating a merge request, see `this page
<https://docs.gitlab.com/ee/gitlab-basics/add-merge-request.html>`__ or contact
us directly.
us directly. For advice on contributing, see `this help page <https://git.ligo.org/lscsoft/bilby/blob/master/CONTRIBUTING.md>`__.
--------------
Citation guide
--------------
If you use :code:`bilby` in a scientific publication, please cite
* `Bilby: A user-friendly Bayesian inference library for gravitational-wave
astronomy
<https://ui.adsabs.harvard.edu/#abs/2018arXiv181102042A/abstract>`__
Additionally, :code:`bilby` builds on a number of open-source packages. If you
make use of this functionality in your publications, we recommend you cite them
as requested in their associated documentation.
**Samplers**
* `dynesty <https://github.com/joshspeagle/dynesty>`__
* `nestle <https://github.com/kbarbary/nestle>`__
* `pymultinest <https://github.com/JohannesBuchner/PyMultiNest>`__
* `cpnest <https://github.com/johnveitch/cpnest>`__
* `emcee <https://github.com/dfm/emcee>`__
* `ptemcee <https://github.com/willvousden/ptemcee>`__
* `ptmcmcsampler <https://github.com/jellis18/PTMCMCSampler>`__
* `pypolychord <https://github.com/PolyChord/PolyChordLite>`__
* `PyMC3 <https://github.com/pymc-devs/pymc3>`__
**Gravitational-wave tools**
* `gwpy <https://github.com/gwpy/gwpy>`__
* `lalsuite <https://git.ligo.org/lscsoft/lalsuite>`__
* `astropy <https://github.com/astropy/astropy>`__
**Plotting**
* `corner <https://github.com/dfm/corner.py>`__ for generating corner plots
* `matplotlib <https://github.com/matplotlib/matplotlib>`__ for general plotting routines
.. |pipeline status| image:: https://git.ligo.org/lscsoft/bilby/badges/master/pipeline.svg
:target: https://git.ligo.org/lscsoft/bilby/commits/master
......@@ -22,5 +70,7 @@ us directly.
:target: https://lscsoft.docs.ligo.org/bilby/htmlcov/
.. |pypi| image:: https://badge.fury.io/py/bilby.svg
:target: https://pypi.org/project/bilby/
.. |conda| image:: https://img.shields.io/conda/vn/conda-forge/bilby.svg
:target: https://anaconda.org/conda-forge/bilby
.. |version| image:: https://img.shields.io/pypi/pyversions/bilby.svg
:target: https://pypi.org/project/bilby/
from __future__ import absolute_import
from . import likelihood, prior, result, sampler, utils
from . import grid, likelihood, prior, result, sampler, series, utils
from __future__ import division
import numpy as np
import os
import json
from collections import OrderedDict
from .prior import Prior, PriorDict
from .utils import (logtrapzexp, check_directory_exists_and_if_not_mkdir,
logger)
from .utils import BilbyJsonEncoder, decode_bilby_json
from .result import FileMovedError
def grid_file_name(outdir, label, gzip=False):
""" Returns the standard filename used for a grid file
Parameters
----------
outdir: str
Name of the output directory
label: str
Naming scheme of the output file
gzip: bool, optional
Set to True to append `.gz` to the extension for saving in gzipped format
Returns
-------
str: File name of the output file
"""
if gzip:
return os.path.join(outdir, '{}_grid.json.gz'.format(label))
else:
return os.path.join(outdir, '{}_grid.json'.format(label))
class Grid(object):
def __init__(self, likelihood=None, priors=None, grid_size=101,
save=False, label='no_label', outdir='.', gzip=False):
"""
Parameters
----------
likelihood: bilby.likelihood.Likelihood
priors: bilby.prior.PriorDict
grid_size: int, list, dict
Size of the grid, can be any of
- int: all dimensions will have equal numbers of points
- list: dimensions will use these points/this number of points in
order of priors
- dict: as for list
save: bool
Set whether to save the results of the grid
label: str
The label for the filename to which the grid is saved
outdir: str
The output directory to which the grid will be saved
gzip: bool
Set whether to gzip the output grid file
"""
if priors is None:
priors = dict()
self.likelihood = likelihood
self.priors = PriorDict(priors)
self.n_dims = len(priors)
self.parameter_names = list(self.priors.keys())
self.sample_points = dict()
self._get_sample_points(grid_size)
# evaluate the prior on the grid points
if self.n_dims > 0:
self._ln_prior = self.priors.ln_prob(
{key: self.mesh_grid[i].flatten() for i, key in
enumerate(self.parameter_names)}, axis=0).reshape(
self.mesh_grid[0].shape)
self._ln_likelihood = None
# evaluate the likelihood on the grid points
if likelihood is not None and self.n_dims > 0:
self._evaluate()
self.save = save
self.label = None
self.outdir = None
if self.save:
if isinstance(label, str):
self.label = label
if isinstance(outdir, str):
self.outdir = os.path.abspath(outdir)
self.save_to_file(gzip=gzip)
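# Illustrative usage (a sketch, not part of the module): with a likelihood
# and priors in hand,
#     grid = Grid(likelihood=likelihood, priors=priors, grid_size=50)
#     ln_post = grid.ln_posterior
# evaluates the prior, likelihood and posterior on a 50-point grid in each
# dimension.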
@property
def ln_prior(self):
return self._ln_prior
@property
def prior(self):
return np.exp(self.ln_prior)
@property
def ln_likelihood(self):
if self._ln_likelihood is None:
self._evaluate()
return self._ln_likelihood
@property
def ln_posterior(self):
return self.ln_likelihood + self.ln_prior
def marginalize(self, log_array, parameters=None, not_parameters=None):
"""
Marginalize over a list of parameters.
Parameters
----------
log_array: array_like
A :class:`numpy.ndarray` of log likelihood/posterior values.
parameters: list, str
A list, or single string, of parameters to marginalize over. If None
then all parameters will be marginalized over.
not_parameters: list, str
Instead of a list of parameters to marginalize over you can list
the set of parameters to *not* marginalize over.
Returns
-------
out_array: array_like
An array containing the marginalized log likelihood/posterior.
"""
if parameters is None:
params = list(self.parameter_names)
if not_parameters is not None:
if isinstance(not_parameters, str):
not_params = [not_parameters]
elif isinstance(not_parameters, list):
not_params = not_parameters
else:
raise TypeError("Parameters names must be a list or string")
for name in list(params):
if name in not_params:
params.remove(name)
elif isinstance(parameters, str):
params = [parameters]
elif isinstance(parameters, list):
params = parameters
else:
raise TypeError("Parameters names must be a list or string")
out_array = log_array.copy()
names = list(self.parameter_names)
for name in params:
out_array = self._marginalize_single(out_array, name, names)
return out_array
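# Illustrative (a sketch): for a grid over parameters 'm' and 'c',
#     grid.marginalize(grid.ln_posterior, not_parameters='m')
# integrates out 'c' and returns the one-dimensional marginal ln posterior
# over 'm'.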
def _marginalize_single(self, log_array, name, non_marg_names=None):
"""
Marginalize the log likelihood/posterior over a single given parameter.
Parameters
----------
log_array: array_like
A :class:`numpy.ndarray` of log likelihood/posterior values.
name: str
The name of the parameter to marginalize over.
non_marg_names: list
A list of parameter names that have not been marginalized over.
Returns
-------
out: array_like
An array containing the marginalized log likelihood/posterior.
"""
if name not in self.parameter_names:
raise ValueError("'{}' is not a recognised "
"parameter".format(name))
if non_marg_names is None:
non_marg_names = list(self.parameter_names)
axis = non_marg_names.index(name)
non_marg_names.remove(name)
places = self.sample_points[name]
if len(places) > 1:
out = np.apply_along_axis(
logtrapzexp, axis, log_array, places[1] - places[0])
else:
# no marginalisation required, just remove the singleton dimension
z = log_array.shape
q = np.arange(0, len(z)).astype(int) != axis
out = np.reshape(log_array, tuple((np.array(list(z)))[q]))
return out
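# Note (added for clarity): logtrapzexp(log_array, dx) computes
# ln(trapz(exp(log_array), dx)) in a numerically stable way, so this is a
# trapezium-rule integral performed in log space; the step
# places[1] - places[0] assumes uniformly spaced sample points.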
@property
def ln_evidence(self):
return self.marginalize(self.ln_posterior)
@property
def log_evidence(self):
return self.ln_evidence
@property
def log_noise_evidence(self):
return self.ln_noise_evidence
def marginalize_ln_likelihood(self, parameters=None, not_parameters=None):
"""
Marginalize the ln likelihood over either the specified parameter or
all but the specified "not_parameter". If neither is specified the
ln likelihood will be fully marginalized over.
Parameters
----------
parameters: str, list, optional
Name of, or list of names of, the parameter(s) to marginalize over.
not_parameters: str, optional
Name of, or list of names of, the parameter(s) to not marginalize over.
Returns
-------
array-like:
The marginalized ln likelihood.
"""
return self.marginalize(self.ln_likelihood, parameters=parameters,
not_parameters=not_parameters)
def marginalize_ln_posterior(self, parameters=None, not_parameters=None):
"""
Marginalize the ln posterior over either the specified parameter or all
but the specified "not_parameter". If neither is specified the
ln posterior will be fully marginalized over.
Parameters
----------
parameters: str, list, optional
Name of, or list of names of, the parameter(s) to marginalize over.
not_parameters: str, optional
Name of, or list of names of, the parameter(s) to not marginalize over.
Returns
-------
array-like:
The marginalized ln posterior.
"""
return self.marginalize(self.ln_posterior, parameters=parameters,
not_parameters=not_parameters)
def marginalize_likelihood(self, parameters=None, not_parameters=None):
"""
Marginalize the likelihood over either the specified parameter or all
but the specified "not_parameter". If neither is specified the
likelihood will be fully marginalized over.
Parameters
----------
parameters: str, list, optional
Name of, or list of names of, the parameter(s) to marginalize over.
not_parameters: str, optional
Name of, or list of names of, the parameter(s) to not marginalize over.
Returns
-------
array-like:
The marginalized likelihood.
"""
ln_like = self.marginalize(self.ln_likelihood, parameters=parameters,
not_parameters=not_parameters)
# NOTE: the output will not be properly normalised
return np.exp(ln_like - np.max(ln_like))
def marginalize_posterior(self, parameters=None, not_parameters=None):
"""
Marginalize the posterior over either the specified parameter or all
but the specified "not_parameters". If neither is specified the
posterior will be fully marginalized over.
Parameters
----------
parameters: str, list, optional
Name of, or list of names of, the parameter(s) to marginalize over.
not_parameters: str, optional
Name of, or list of names of, the parameter(s) to not marginalize over.
Returns
-------
array-like:
The marginalized posterior.
"""
ln_post = self.marginalize(self.ln_posterior, parameters=parameters,
not_parameters=not_parameters)
# NOTE: the output will not be properly normalised
return np.exp(ln_post - np.max(ln_post))
def _evaluate(self):
self._ln_likelihood = np.empty(self.mesh_grid[0].shape)
self._evaluate_recursion(0)
self.ln_noise_evidence = self.likelihood.noise_log_likelihood()
def _evaluate_recursion(self, dimension):
if dimension == self.n_dims:
current_point = tuple([[int(np.where(
self.likelihood.parameters[name] ==
self.sample_points[name])[0])] for name in self.parameter_names])
self._ln_likelihood[current_point] = self.likelihood.log_likelihood()
else:
name = self.parameter_names[dimension]
for ii in range(self._ln_likelihood.shape[dimension]):
self.likelihood.parameters[name] = self.sample_points[name][ii]
self._evaluate_recursion(dimension + 1)
def _get_sample_points(self, grid_size):
for ii, key in enumerate(self.parameter_names):
if isinstance(self.priors[key], Prior):
if isinstance(grid_size, int):
self.sample_points[key] = self.priors[key].rescale(
np.linspace(0, 1, grid_size))
elif isinstance(grid_size, list):
if isinstance(grid_size[ii], int):
self.sample_points[key] = self.priors[key].rescale(
np.linspace(0, 1, grid_size[ii]))
else:
self.sample_points[key] = grid_size[ii]
elif isinstance(grid_size, dict):
if isinstance(grid_size[key], int):
self.sample_points[key] = self.priors[key].rescale(
np.linspace(0, 1, grid_size[key]))
else:
self.sample_points[key] = grid_size[key]
else:
raise TypeError("Unrecognized 'grid_size' type")
# set the mesh of points
self.mesh_grid = np.meshgrid(
*(self.sample_points[key] for key in self.parameter_names),
indexing='ij')
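# Illustrative grid_size values (a sketch): grid_size=101 uses 101 points in
# every dimension; grid_size=[101, 51] gives per-parameter sizes in prior
# order; grid_size={'m': 101, 'c': np.linspace(0, 1, 51)} mixes an integer
# size with explicit sample points.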
def _get_save_data_dictionary(self):
# This list defines all the parameters saved in the grid object
save_attrs = [
'label', 'outdir', 'parameter_names', 'n_dims', 'priors',
'sample_points', 'ln_likelihood', 'ln_evidence',
'ln_noise_evidence']
dictionary = OrderedDict()
for attr in save_attrs:
try:
dictionary[attr] = getattr(self, attr)
except ValueError as e:
logger.debug("Unable to save {}, message: {}".format(attr, e))
pass
return dictionary
def _safe_outdir_creation(self, outdir=None, caller_func=None):
if outdir is None:
outdir = self.outdir
try:
check_directory_exists_and_if_not_mkdir(outdir)
except PermissionError:
raise FileMovedError("Cannot write in the output directory.\n"
"Did you move the file here from another system?\n"
"Try calling " + caller_func.__name__ + " with the 'outdir' "
"keyword argument, e.g. " + caller_func.__name__ + "(outdir='.')")
return outdir
def save_to_file(self, filename=None, overwrite=False, outdir=None,
gzip=False):
"""
Writes the Grid to a file.
Parameters
----------
filename: str, optional
Filename to write to (overwrites the default)
overwrite: bool, optional
Whether or not to overwrite an existing result file.
default=False
outdir: str, optional
Path to the outdir. Default is the one stored in the Grid object.
gzip: bool, optional
If true this will gzip the resulting file and add '.gz' to the file
extension.
"""
outdir = self._safe_outdir_creation(outdir, self.save_to_file)
if filename is None:
if self.label is None:
raise ValueError("'label' for the output file name is not given")
filename = grid_file_name(outdir, self.label, gzip)
if os.path.isfile(filename):
if overwrite:
logger.debug('Removing existing file {}'.format(filename))
os.remove(filename)
else:
logger.debug(
'Renaming existing file {} to {}.old'.format(filename,
filename))
os.rename(filename, filename + '.old')
logger.debug("Saving result to {}".format(filename))
dictionary = self._get_save_data_dictionary()
try:
dictionary["priors"] = dictionary["priors"]._get_json_dict()
if gzip or (os.path.splitext(filename)[-1] == '.gz'):
import gzip
# encode to a string
json_str = json.dumps(dictionary, cls=BilbyJsonEncoder).encode('utf-8')
with gzip.GzipFile(filename, 'w') as file:
file.write(json_str)
else:
with open(filename, 'w') as file:
json.dump(dictionary, file, indent=2, cls=BilbyJsonEncoder)
except Exception as e:
logger.error("\n\n Saving the data has failed with the "
"following message:\n {} \n\n".format(e))
@classmethod
def read(cls, filename=None, outdir=None, label=None, gzip=False):
""" Read in a saved .json grid file
Parameters
----------
filename: str
If given, try to load from this filename
outdir, label: str
If given, use the default naming convention for saved results file
gzip: bool
If given, whether the file is gzipped or not (only required if the
file is gzipped, but does not have the standard '.gz' file
extension)
Returns
-------
grid: bilby.core.grid.Grid
Raises
-------
ValueError: If no filename is given and either outdir or label is None
IOError: If no bilby.core.grid.Grid is found in the path
"""
if filename is not None:
fname = filename
else:
if (outdir is None) or (label is None):
raise ValueError("No information given to load file")
else:
fname = grid_file_name(outdir, label, gzip)
if os.path.isfile(fname):
if gzip or os.path.splitext(fname)[1].lstrip('.') == 'gz':
import gzip
with gzip.GzipFile(fname, 'r') as file:
json_str = file.read().decode('utf-8')
dictionary = json.loads(json_str, object_hook=decode_bilby_json)
else:
with open(fname, 'r') as file:
dictionary = json.load(file, object_hook=decode_bilby_json)
try:
grid = cls(likelihood=None, priors=dictionary['priors'],
grid_size=dictionary['sample_points'],
label=dictionary['label'], outdir=dictionary['outdir'])
# set the likelihood
grid._ln_likelihood = dictionary['ln_likelihood']
grid.ln_noise_evidence = dictionary['ln_noise_evidence']
return grid
except TypeError as e:
raise IOError("Unable to load dictionary, error={}".format(e))
else:
raise IOError("No result '{}' found".format(filename))
......@@ -14,9 +14,11 @@ class Likelihood(object):
Parameters
----------
parameters:
parameters: dict
A dictionary of the parameter names and associated values
"""
self.parameters = parameters
self._meta_data = None
def __repr__(self):
return self.__class__.__name__ + '(parameters={})'.format(self.parameters)
......@@ -50,10 +52,7 @@ class Likelihood(object):
@property
def meta_data(self):
try:
return self._meta_data
except AttributeError:
return None
return getattr(self, '_meta_data', None)
@meta_data.setter
def meta_data(self, meta_data):
......@@ -63,6 +62,31 @@ class Likelihood(object):
raise ValueError("The meta_data must be an instance of dict")
class ZeroLikelihood(Likelihood):
""" A special test-only class which already returns zero likelihood
Parameters
----------
likelihood: bilby.core.likelihood.Likelihood
A likelihood object to mimic
"""
def __init__(self, likelihood):
super(ZeroLikelihood, self).__init__(dict.fromkeys(likelihood.parameters))
self.parameters = likelihood.parameters
self._parent = likelihood
def log_likelihood(self):
return 0
def noise_log_likelihood(self):
return 0
def __getattr__(self, name):
return getattr(self._parent, name)
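# Illustrative (a sketch): wrap an existing likelihood to time the sampler
# machinery with the likelihood cost removed,
#     zero = ZeroLikelihood(my_likelihood)
#     zero.log_likelihood()  # always 0; other attributes defer to the wrapped object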
class Analytical1DLikelihood(Likelihood):
"""
A general class for 1D analytical functions. The model
......@@ -81,11 +105,11 @@ class Analytical1DLikelihood(Likelihood):
def __init__(self, x, y, func):
parameters = infer_parameters_from_function(func)
Likelihood.__init__(self, dict.fromkeys(parameters))
super(Analytical1DLikelihood, self).__init__(dict.fromkeys(parameters))
self.x = x
self.y = y
self.__func = func
self.__function_keys = list(self.parameters.keys())
self._func = func
self._function_keys = list(self.parameters.keys())
def __repr__(self):
return self.__class__.__name__ + '(x={}, y={}, func={})'.format(self.x, self.y, self.func.__name__)
......@@ -93,7 +117,7 @@ class Analytical1DLikelihood(Likelihood):
@property
def func(self):
""" Make func read-only """
return self.__func
return self._func
@property
def model_parameters(self):
......@@ -103,7 +127,7 @@ class Analytical1DLikelihood(Likelihood):
@property
def function_keys(self):
""" Makes function_keys read_only """
return self.__function_keys
return self._function_keys
@property
def n(self):
......@@ -113,24 +137,24 @@ class Analytical1DLikelihood(Likelihood):
@property
def x(self):
""" The independent variable. Setter assures that single numbers will be converted to arrays internally """
return self.__x
return self._x
@x.setter
def x(self, x):
if isinstance(x, int) or isinstance(x, float):
x = np.array([x])
self.__x = x
self._x = x
@property
def y(self):
""" The dependent variable. Setter assures that single numbers will be converted to arrays internally """
return self.__y
return self._y
@y.setter
def y(self, y):
if isinstance(y, int) or isinstance(y, float):
y = np.array([y])
self.__y = y
self._y = y
@property
def residual(self):
......@@ -161,7 +185,7 @@ class GaussianLikelihood(Analytical1DLikelihood):
to that for `x` and `y`.
"""
Analytical1DLikelihood.__init__(self, x=x, y=y, func=func)
super(GaussianLikelihood, self).__init__(x=x, y=y, func=func)
self.sigma = sigma
# Check if sigma was provided, if not it is a parameter
......@@ -222,7 +246,7 @@ class PoissonLikelihood(Analytical1DLikelihood):
fixed value is given).
"""
Analytical1DLikelihood.__init__(self, x=x, y=y, func=func)
super(PoissonLikelihood, self).__init__(x=x, y=y, func=func)
def log_likelihood(self):
rate = self.func(self.x, **self.model_parameters)
......@@ -273,7 +297,7 @@ class ExponentialLikelihood(Analytical1DLikelihood):
value is given). The model should return the expected mean of
the exponential distribution for each data point.
"""
Analytical1DLikelihood.__init__(self, x=x, y=y, func=func)
super(ExponentialLikelihood, self).__init__(x=x, y=y, func=func)
def log_likelihood(self):
mu = self.func(self.x, **self.model_parameters)
......@@ -287,7 +311,7 @@ class ExponentialLikelihood(Analytical1DLikelihood):
@property
def y(self):
""" Property assures that y-value is positive. """
return self.__y
return self._y
@y.setter
def y(self, y):
......@@ -295,7 +319,7 @@ class ExponentialLikelihood(Analytical1DLikelihood):
y = np.array([y])
if np.any(y < 0):
raise ValueError("Data must be non-negative")
self.__y = y
self._y = y
class StudentTLikelihood(Analytical1DLikelihood):
......@@ -328,7 +352,7 @@ class StudentTLikelihood(Analytical1DLikelihood):
Set the scale of the distribution. If not given then this defaults
to 1, which specifies a standard (central) Student's t-distribution
"""
Analytical1DLikelihood.__init__(self, x=x, y=y, func=func)
super(StudentTLikelihood, self).__init__(x=x, y=y, func=func)
self.nu = nu
self.sigma = sigma
......@@ -389,7 +413,7 @@ class JointLikelihood(Likelihood):
likelihoods to be combined parsed as arguments
"""
self.likelihoods = likelihoods
Likelihood.__init__(self, parameters={})
super(JointLikelihood, self).__init__(parameters={})
self.__sync_parameters()
def __sync_parameters(self):
......@@ -403,19 +427,19 @@ class JointLikelihood(Likelihood):
@property
def likelihoods(self):
""" The list of likelihoods """
return self.__likelihoods
return self._likelihoods
@likelihoods.setter
def likelihoods(self, likelihoods):
likelihoods = copy.deepcopy(likelihoods)
if isinstance(likelihoods, tuple) or isinstance(likelihoods, list):
if all(isinstance(likelihood, Likelihood) for likelihood in likelihoods):
self.__likelihoods = list(likelihoods)
self._likelihoods = list(likelihoods)
else:
raise ValueError('Try setting the JointLikelihood like this\n'
'JointLikelihood(first_likelihood, second_likelihood, ...)')
elif isinstance(likelihoods, Likelihood):
self.__likelihoods = [likelihoods]
self._likelihoods = [likelihoods]
else:
raise ValueError('Input likelihood is not a list or tuple. You need to set multiple likelihoods.')
......
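The attribute renames in this file (e.g. `self.__x` to `self._x`) avoid Python's name mangling: a double-underscore attribute is bound to the class that defines it, which breaks access from subclasses. A minimal illustration with hypothetical classes:

    class Base(object):
        def __init__(self):
            self.__x = 1  # name-mangled to _Base__x

    class Child(Base):
        def get_x(self):
            return self.__x  # looked up as _Child__x

    Child().get_x()  # raises AttributeError; with self._x it would return 1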
This diff is collapsed.
......@@ -9,20 +9,27 @@ from ..prior import PriorDict
from .base_sampler import Sampler
from .cpnest import Cpnest
from .dynesty import Dynesty
from .dynamic_dynesty import DynamicDynesty
from .emcee import Emcee
from .nestle import Nestle
from .polychord import PyPolyChord
from .ptemcee import Ptemcee
from .ptmcmc import PTMCMCSampler
from .pymc3 import Pymc3
from .pymultinest import Pymultinest
from .fake_sampler import FakeSampler
from . import proposal
implemented_samplers = {
'cpnest': Cpnest, 'dynesty': Dynesty, 'emcee': Emcee, 'nestle': Nestle,
'ptemcee': Ptemcee, 'pymc3': Pymc3, 'pymultinest': Pymultinest}
IMPLEMENTED_SAMPLERS = {
'cpnest': Cpnest, 'dynamic_dynesty': DynamicDynesty, 'dynesty': Dynesty,
'emcee': Emcee, 'nestle': Nestle, 'ptemcee': Ptemcee,
'ptmcmcsampler': PTMCMCSampler, 'pymc3': Pymc3, 'pymultinest': Pymultinest,
'pypolychord': PyPolyChord, 'fake_sampler': FakeSampler}
if command_line_args.sampler_help:
sampler = command_line_args.sampler_help
if sampler in implemented_samplers:
sampler_class = implemented_samplers[sampler]
if sampler in IMPLEMENTED_SAMPLERS:
sampler_class = IMPLEMENTED_SAMPLERS[sampler]
print('Help for sampler "{}":'.format(sampler))
print(sampler_class.__doc__)
else:
......@@ -31,7 +38,7 @@ if command_line_args.sampler_help:
'the name of the sampler')
else:
print('Requested sampler {} not implemented'.format(sampler))
print('Available samplers = {}'.format(implemented_samplers))
print('Available samplers = {}'.format(IMPLEMENTED_SAMPLERS))
sys.exit()
......@@ -39,7 +46,8 @@ if command_line_args.sampler_help:
def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
sampler='dynesty', use_ratio=None, injection_parameters=None,
conversion_function=None, plot=False, default_priors_file=None,
clean=None, meta_data=None, save=True, **kwargs):
clean=None, meta_data=None, save=True, gzip=False,
result_class=None, **kwargs):
"""
The primary interface to easy parameter estimation
......@@ -81,6 +89,13 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
overwritten.
save: bool, str
If true, save the priors and results to disk.
If 'hdf5', save as an hdf5 file instead of json.
gzip: bool
If true, and save is true, gzip the saved results file.
result_class: bilby.core.result.Result, or child of
The result class to use. By default, `bilby.core.result.Result` is used,
but a class which inherits from it can be given to provide
additional methods.
**kwargs:
All kwargs are passed directly to the samplers `run` function
......@@ -99,7 +114,7 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
if command_line_args.clean:
kwargs['resume'] = False
from . import implemented_samplers
from . import IMPLEMENTED_SAMPLERS
if priors is None:
priors = dict()
......@@ -118,17 +133,23 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
meta_data = dict()
meta_data['likelihood'] = likelihood.meta_data
if command_line_args.bilby_zero_likelihood_mode:
from bilby.core.likelihood import ZeroLikelihood
likelihood = ZeroLikelihood(likelihood)
if isinstance(sampler, Sampler):
pass
elif isinstance(sampler, str):
if sampler.lower() in implemented_samplers:
sampler_class = implemented_samplers[sampler.lower()]
if sampler.lower() in IMPLEMENTED_SAMPLERS:
sampler_class = IMPLEMENTED_SAMPLERS[sampler.lower()]
sampler = sampler_class(
likelihood, priors=priors, outdir=outdir, label=label,
injection_parameters=injection_parameters, meta_data=meta_data,
use_ratio=use_ratio, plot=plot, **kwargs)
use_ratio=use_ratio, plot=plot, result_class=result_class,
**kwargs)
else:
print(implemented_samplers)
print(IMPLEMENTED_SAMPLERS)
raise ValueError(
"Sampler {} not yet implemented".format(sampler))
elif inspect.isclass(sampler):
......@@ -140,22 +161,25 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
else:
raise ValueError(
"Provided sampler should be a Sampler object or name of a known "
"sampler: {}.".format(', '.join(implemented_samplers.keys())))
"sampler: {}.".format(', '.join(IMPLEMENTED_SAMPLERS.keys())))
if sampler.cached_result:
logger.warning("Using cached result")
return sampler.cached_result
start_time = datetime.datetime.now()
if command_line_args.test:
if command_line_args.bilby_test_mode:
result = sampler._run_test()
else:
result = sampler.run_sampler()
end_time = datetime.datetime.now()
result.sampling_time = (end_time - start_time).total_seconds()
logger.info('Sampling time: {}'.format(end_time - start_time))
# Some samplers calculate the sampling time internally
if result.sampling_time is None:
result.sampling_time = end_time - start_time
logger.info('Sampling time: {}'.format(result.sampling_time))
# Convert sampling time into seconds
result.sampling_time = result.sampling_time.total_seconds()
if sampler.use_ratio:
result.log_noise_evidence = likelihood.noise_log_likelihood()
......@@ -167,18 +191,22 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
result.log_bayes_factor = \
result.log_evidence - result.log_noise_evidence
if result.injection_parameters is not None:
if conversion_function is not None:
result.injection_parameters = conversion_function(
result.injection_parameters)
# Initial save of the sampler in case of failure in post-processing
if save:
result.save_to_file(extension=save, gzip=gzip)
if None not in [result.injection_parameters, conversion_function]:
result.injection_parameters = conversion_function(
result.injection_parameters)
result.samples_to_posterior(likelihood=likelihood, priors=priors,
result.samples_to_posterior(likelihood=likelihood, priors=result.priors,
conversion_function=conversion_function)
if save:
result.save_to_file()
logger.info("Results saved to {}/".format(outdir))
# The overwrite here ensures we overwrite the initially stored data
result.save_to_file(overwrite=True, extension=save, gzip=gzip)
if plot:
result.plot_corner()
logger.info("Summary of results:\n{}".format(result))
return result
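A minimal end-to-end sketch of `run_sampler` as defined above, exercising the new `gzip` keyword (the line model, data, and prior choices are illustrative):

    import numpy as np
    import bilby

    def model(x, m, c):
        return m * x + c

    x = np.linspace(0, 1, 100)
    y = model(x, m=2.5, c=0.3) + np.random.normal(0, 0.1, len(x))

    likelihood = bilby.core.likelihood.GaussianLikelihood(x, y, model, sigma=0.1)
    priors = dict(m=bilby.core.prior.Uniform(0, 5, 'm'),
                  c=bilby.core.prior.Uniform(-1, 1, 'c'))

    result = bilby.run_sampler(
        likelihood=likelihood, priors=priors, sampler='dynesty',
        nlive=250, outdir='outdir', label='line', gzip=True)
    result.plot_corner()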
This diff is collapsed.