Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Commits on Source (14)
Showing with 391 additions and 230 deletions
......@@ -157,3 +157,18 @@ deploy_release:
- twine upload dist/*
only:
- tags
precommits-py3.7:
stage: test
image: bilbydev/v2-dockerfile-test-suite-python37
script:
- source activate python37
- mkdir -p .pip37
- pip install --upgrade pip
- pip --cache-dir=.pip37 install --upgrade bilby
- pip --cache-dir=.pip37 install .
- pip --cache-dir=.pip37 install pre-commit
# Run precommits (flake8, spellcheck, isort, no merge conflicts, etc)
- pre-commit run --all-files --verbose --show-diff-on-failure
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: check-merge-conflict # prevent committing files with merge conflicts
- id: flake8 # checks for flake8 errors
#- repo: https://github.com/codespell-project/codespell
# rev: v1.16.0
# hooks:
# - id: codespell # Spellchecker
# args: [-L, nd, --skip, "*.ipynb,*.html", --ignore-words=.dictionary.txt]
# exclude: ^examples/tutorials/
#- repo: https://github.com/asottile/seed-isort-config
# rev: v1.3.0
# hooks:
# - id: seed-isort-config
# args: [--application-directories, 'bilby/']
#- repo: https://github.com/pre-commit/mirrors-isort
# rev: v4.3.21
# hooks:
#      - id: isort # sorts imports alphabetically and separates imports into sections
# args: [-w=88, -m=3, -tc, -sp=setup.cfg ]
......@@ -49,6 +49,33 @@ def my_new_function(x, y, print=False):
5. Don't repeat yourself. If code is repeated in multiple places, wrap it up into a function.
6. Add tests. The C.I. is there to do the work of "checking" the code, both now and into the future. Use it.
## Automated code checking
To automate code-quality checking, we use
[pre-commit](https://pre-commit.com/). For full details, see its documentation;
here we give a quick-start guide:
1. Install and configure:
```console
$ pip install pre-commit # install the pre-commit package
$ cd bilby
$ pre-commit install
```
2. Now, when you run `$ git commit`, there will be a pre-commit check.
This searches your code for issues: spelling, formatting, etc.
In some cases it will fix the code automatically; in others it will
print a warning. If it fixed the code, you'll need to add the changes
to the index (`$ git add FILE.py`) and run `$ git commit` again; a
sketch of this loop follows the list. If it failed without fixing the
code, it will have printed a message explaining why the commit failed.
Read the message, fix the issues, then recommit.
3. The pre-commit checks exist to catch issues before you push. You can
skip them by running `$ git commit --no-verify`, but note that the C.I.
still runs the same checks, so you won't be able to merge until the
issues are resolved.
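
For example, the fix-and-recommit loop from step 2 might look like this
(commit message and file name invented for illustration):

```console
$ git commit -m "Add feature"    # pre-commit hooks run automatically
# a hook reformats FILE.py, so the commit is aborted
$ git add FILE.py                # stage the automatic fixes
$ git commit -m "Add feature"    # hooks pass and the commit succeeds
```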
If you experience any issues with pre-commit, please ask for support on the
usual help channels.
## Code relevance
The bilby code base is intended to be highly modular and flexible. We encourage
......
......@@ -8,7 +8,7 @@ from collections import OrderedDict
from .prior import Prior, PriorDict
from .utils import (logtrapzexp, check_directory_exists_and_if_not_mkdir,
logger)
from .utils import BilbyJsonEncoder, decode_bilby_json
from .utils import BilbyJsonEncoder, load_json, move_old_file
from .result import FileMovedError
......@@ -397,18 +397,7 @@ class Grid(object):
filename = grid_file_name(outdir, self.label, gzip)
if os.path.isfile(filename):
if overwrite:
logger.debug('Removing existing file {}'.format(filename))
os.remove(filename)
else:
logger.debug(
'Renaming existing file {} to {}.old'.format(filename,
filename))
os.rename(filename, filename + '.old')
logger.debug("Saving result to {}".format(filename))
move_old_file(filename, overwrite)
dictionary = self._get_save_data_dictionary()
try:
......@@ -452,23 +441,14 @@ class Grid(object):
"""
if filename is not None:
fname = filename
else:
if filename is None:
if (outdir is None) and (label is None):
raise ValueError("No information given to load file")
else:
fname = grid_file_name(outdir, label, gzip)
filename = grid_file_name(outdir, label, gzip)
if os.path.isfile(fname):
if gzip or os.path.splitext(fname)[1].lstrip('.') == 'gz':
import gzip
with gzip.GzipFile(fname, 'r') as file:
json_str = file.read().decode('utf-8')
dictionary = json.loads(json_str, object_hook=decode_bilby_json)
else:
with open(fname, 'r') as file:
dictionary = json.load(file, object_hook=decode_bilby_json)
if os.path.isfile(filename):
dictionary = load_json(filename, gzip)
try:
grid = cls(likelihood=None, priors=dictionary['priors'],
grid_size=dictionary['sample_points'],
......
......@@ -2,7 +2,7 @@ from __future__ import division, print_function
import copy
import numpy as np
from scipy.special import gammaln
from scipy.special import gammaln, xlogy
from scipy.stats import multivariate_normal
from .utils import infer_parameters_from_function
......@@ -402,6 +402,53 @@ class StudentTLikelihood(Analytical1DLikelihood):
self._nu = nu
class Multinomial(Likelihood):
"""
Likelihood for a system with N discrete possibilities.
"""
def __init__(self, data, n_dimensions, label="parameter_"):
"""
Parameters
----------
data: array-like
The number of objects in each class
n_dimensions: int
The number of classes
label: str
Prefix for the parameter names; the sampled parameters are
label + '0' through label + str(n_dimensions - 2)
"""
self.data = np.array(data)
self._total = np.sum(self.data)
super(Multinomial, self).__init__(dict())
self.n = n_dimensions
self.label = label
self._nll = None
def log_likelihood(self):
"""
Since only n - 1 parameters are sampled, the last probability is 1 minus the sum of the rest
"""
probs = [self.parameters[self.label + str(ii)]
for ii in range(self.n - 1)]
probs.append(1 - sum(probs))
return self._multinomial_ln_pdf(probs=probs)
def noise_log_likelihood(self):
"""
Our null hypothesis is that all bins have probability 1 / nbins, i.e.,
no bin is preferred over any other.
"""
if self._nll is None:
self._nll = self._multinomial_ln_pdf(probs=1 / self.n)
return self._nll
def _multinomial_ln_pdf(self, probs):
"""Lifted from scipy.stats.multinomial._logpdf"""
ln_prob = gammaln(self._total + 1) + np.sum(
xlogy(self.data, probs) - gammaln(self.data + 1), axis=-1)
return ln_prob
class AnalyticalMultidimensionalCovariantGaussian(Likelihood):
"""
A multivariate Gaussian likelihood
......
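
As an aside, a minimal sketch of how the new `Multinomial` likelihood can be
evaluated (hypothetical counts and probabilities; assuming the class is exposed
as `bilby.core.likelihood.Multinomial`):

```python
from bilby.core.likelihood import Multinomial  # assumed import path

# invented observed counts for three classes
likelihood = Multinomial(data=[50, 30, 20], n_dimensions=3, label="p_")

# only n - 1 probabilities are sampled; the last is 1 minus their sum
likelihood.parameters = {"p_0": 0.5, "p_1": 0.3}
print(likelihood.log_likelihood())        # multinomial ln pdf at (0.5, 0.3, 0.2)
print(likelihood.noise_log_likelihood())  # ln pdf with all probabilities 1/3
```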
......@@ -8,7 +8,8 @@ import scipy.stats
from scipy.integrate import cumtrapz
from scipy.interpolate import interp1d
from bilby.core.utils import infer_args_from_method, BilbyJsonEncoder, decode_bilby_json, logger
from bilby.core.utils import infer_args_from_method, BilbyJsonEncoder, decode_bilby_json, logger, \
get_dict_with_properties
class Prior(object):
......@@ -280,15 +281,8 @@ class Prior(object):
def get_instantiation_dict(self):
subclass_args = infer_args_from_method(self.__init__)
property_names = [p for p in dir(self.__class__)
if isinstance(getattr(self.__class__, p), property)]
dict_with_properties = self.__dict__.copy()
for key in property_names:
dict_with_properties[key] = getattr(self, key)
instantiation_dict = dict()
for key in subclass_args:
instantiation_dict[key] = dict_with_properties[key]
return instantiation_dict
dict_with_properties = get_dict_with_properties(self)
return {key: dict_with_properties[key] for key in subclass_args}
@property
def boundary(self):
......
......@@ -225,6 +225,62 @@ ConditionalFermiDirac = conditional_prior_factory(FermiDirac)
ConditionalInterped = conditional_prior_factory(Interped)
class DirichletElement(ConditionalBeta):
"""
Single element in a Dirichlet distribution
The probability scales as
$p(x_order) \propto (x_max - x_order)^(n_dimensions - order - 2)$
for 0 < x_order < x_max, where x_max is 1 minus the sum of x_i for i < order
Examples
--------
n_dimensions = 2:
p(x_0) \propto 1 ; 0 < x_0 < 1
n_dimensions = 3:
p(x_0) \propto (1 - x_0) ; 0 < x_0 < 1
p(x_1) \propto 1 ; 0 < x_1 < 1 - x_0
Parameters
----------
order: int
Order of this element of the Dirichlet distribution.
n_dimensions: int
Total number of elements of the Dirichlet distribution.
label: str
Label for the Dirichlet distribution.
This should be the same for all elements.
"""
def __init__(self, order, n_dimensions, label):
super(DirichletElement, self).__init__(
minimum=0, maximum=1, alpha=1, beta=n_dimensions - order - 1,
name=label + str(order),
condition_func=self.dirichlet_condition
)
self.label = label
self.n_dimensions = n_dimensions
self.order = order
self._required_variables = [
label + str(ii) for ii in range(order)
]
self.__class__.__name__ = 'Dirichlet'
def dirichlet_condition(self, reference_parms, **kwargs):
remaining = 1 - sum(
[kwargs[self.label + str(ii)] for ii in range(self.order)]
)
return dict(minimum=reference_parms["minimum"], maximum=remaining)
def __repr__(self):
return Prior.__repr__(self)
def get_instantiation_dict(self):
return Prior.get_instantiation_dict(self)
class ConditionalPriorException(PriorException):
""" General base class for all conditional prior exceptions """
......
......@@ -725,6 +725,49 @@ class ConditionalPriorDict(PriorDict):
self._resolve_conditions()
class DirichletPriorDict(ConditionalPriorDict):
def __init__(self, n_dim=None, label="dirichlet_"):
from .conditional import DirichletElement
self.n_dim = n_dim
self.label = label
super(DirichletPriorDict, self).__init__(dictionary=dict())
for ii in range(n_dim - 1):
self[label + "{}".format(ii)] = DirichletElement(
order=ii, n_dimensions=n_dim, label=label
)
def copy(self, **kwargs):
return self.__class__(n_dim=self.n_dim, label=self.label)
def _get_json_dict(self):
total_dict = dict()
total_dict["__prior_dict__"] = True
total_dict["__module__"] = self.__module__
total_dict["__name__"] = self.__class__.__name__
total_dict["n_dim"] = self.n_dim
total_dict["label"] = self.label
return total_dict
@classmethod
def _get_from_json_dict(cls, prior_dict):
try:
cls = getattr(
import_module(prior_dict["__module__"]),
prior_dict["__name__"])
except ImportError:
logger.debug("Cannot import prior module {}.{}".format(
prior_dict["__module__"], prior_dict["__name__"]
))
except KeyError:
logger.debug("Cannot find module name to load")
for key in ["__module__", "__name__", "__prior_dict__"]:
if key in prior_dict:
del prior_dict[key]
obj = cls(**prior_dict)
return obj
class ConditionalPriorDictException(PriorDictException):
""" General base class for all conditional prior dict exceptions """
......
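
A quick sketch of how the new `DirichletPriorDict` behaves (hypothetical label;
assuming the class is exposed as `bilby.core.prior.DirichletPriorDict`):

```python
from bilby.core.prior import DirichletPriorDict  # assumed import path

priors = DirichletPriorDict(n_dim=3, label="frac_")
sample = priors.sample()  # keys: frac_0, frac_1; frac_2 is implicitly 1 - frac_0 - frac_1
assert 0 <= sample["frac_0"] + sample["frac_1"] <= 1

# round-trip through the JSON hooks added above
restored = DirichletPriorDict._get_from_json_dict(dict(priors._get_json_dict()))
```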
......@@ -3,7 +3,7 @@ import scipy.stats
from scipy.special import erfinv
from .base import Prior, PriorException
from bilby.core.utils import logger, infer_args_from_method
from bilby.core.utils import logger, infer_args_from_method, get_dict_with_properties
class BaseJointPriorDist(object):
......@@ -105,11 +105,7 @@ class BaseJointPriorDist(object):
def get_instantiation_dict(self):
subclass_args = infer_args_from_method(self.__init__)
property_names = [p for p in dir(self.__class__)
if isinstance(getattr(self.__class__, p), property)]
dict_with_properties = self.__dict__.copy()
for key in property_names:
dict_with_properties[key] = getattr(self, key)
dict_with_properties = get_dict_with_properties(self)
instantiation_dict = dict()
for key in subclass_args:
if isinstance(dict_with_properties[key], list):
......
......@@ -22,7 +22,7 @@ from .utils import (
check_directory_exists_and_if_not_mkdir,
latex_plot_format, safe_save_figure,
)
from .utils import BilbyJsonEncoder, decode_bilby_json
from .utils import BilbyJsonEncoder, load_json, move_old_file
from .prior import Prior, PriorDict, DeltaFunction
......@@ -259,14 +259,7 @@ class Result(object):
filename = _determine_file_name(filename, outdir, label, 'json', gzip)
if os.path.isfile(filename):
if gzip or os.path.splitext(filename)[1].lstrip('.') == 'gz':
import gzip
with gzip.GzipFile(filename, 'r') as file:
json_str = file.read().decode('utf-8')
dictionary = json.loads(json_str, object_hook=decode_bilby_json)
else:
with open(filename, 'r') as file:
dictionary = json.load(file, object_hook=decode_bilby_json)
dictionary = load_json(filename, gzip)
try:
return cls(**dictionary)
except TypeError as e:
......@@ -468,17 +461,7 @@ class Result(object):
if filename is None:
filename = result_file_name(outdir, self.label, extension, gzip)
if os.path.isfile(filename):
if overwrite:
logger.debug('Removing existing file {}'.format(filename))
os.remove(filename)
else:
logger.debug(
'Renaming existing file {} to {}.old'.format(filename,
filename))
os.rename(filename, filename + '.old')
logger.debug("Saving result to {}".format(filename))
move_old_file(filename, overwrite)
# Convert the prior to a string representation for saving on disk
dictionary = self._get_save_data_dictionary()
......
......@@ -5,7 +5,6 @@ import dill as pickle
import signal
import numpy as np
from pandas import DataFrame
from ..utils import logger, check_directory_exists_and_if_not_mkdir
from .base_sampler import Sampler
......@@ -139,20 +138,7 @@ class DynamicDynesty(Dynesty):
print("")
# self.result.sampler_output = out
weights = np.exp(out['logwt'] - out['logz'][-1])
nested_samples = DataFrame(
out.samples, columns=self.search_parameter_keys)
nested_samples['weights'] = weights
nested_samples['log_likelihood'] = out.logl
self.result.samples = dynesty.utils.resample_equal(out.samples, weights)
self.result.nested_samples = nested_samples
self.result.log_likelihood_evaluations = self.reorder_loglikelihoods(
unsorted_loglikelihoods=out.logl, unsorted_samples=out.samples,
sorted_samples=self.result.samples)
self.result.log_evidence = out.logz[-1]
self.result.log_evidence_err = out.logzerr[-1]
self._generate_result(out)
if self.plot:
self.generate_trace_plots(out)
......
......@@ -363,26 +363,29 @@ class Dynesty(NestedSampler):
with open(dynesty_result, 'wb') as file:
pickle.dump(out, file)
self._generate_result(out)
self.calc_likelihood_count()
self.result.sampling_time = self.sampling_time
if self.plot:
self.generate_trace_plots(out)
return self.result
def _generate_result(self, out):
import dynesty
weights = np.exp(out['logwt'] - out['logz'][-1])
nested_samples = DataFrame(
out.samples, columns=self.search_parameter_keys)
nested_samples['weights'] = weights
nested_samples['log_likelihood'] = out.logl
self.result.samples = dynesty.utils.resample_equal(out.samples, weights)
self.result.nested_samples = nested_samples
self.result.log_likelihood_evaluations = self.reorder_loglikelihoods(
unsorted_loglikelihoods=out.logl, unsorted_samples=out.samples,
sorted_samples=self.result.samples)
self.calc_likelihood_count()
self.result.log_evidence = out.logz[-1]
self.result.log_evidence_err = out.logzerr[-1]
self.result.sampling_time = self.sampling_time
if self.plot:
self.generate_trace_plots(out)
return self.result
def _run_nested_wrapper(self, kwargs):
""" Wrapper function to run_nested
......
......@@ -373,20 +373,26 @@ class Emcee(MCMCSampler):
self.calculate_autocorrelation(
self.sampler.chain.reshape((-1, self.ndim)))
self.print_nburn_logging_info()
self.calc_likelihood_count()
self._generate_result()
self.result.samples = self.sampler.chain[:, self.nburn:, :].reshape(
(-1, self.ndim))
self.result.walkers = self.sampler.chain
return self.result
def _generate_result(self):
self.result.nburn = self.nburn
self.calc_likelihood_count()
if self.result.nburn > self.nsteps:
raise SamplerError(
"The run has finished, but the chain is not burned in: "
"`nburn < nsteps`. Try increasing the number of steps.")
self.result.samples = self.sampler.chain[:, self.nburn:, :].reshape(
(-1, self.ndim))
"`nburn < nsteps` ({} < {}). Try increasing the "
"number of steps.".format(self.result.nburn, self.nsteps))
blobs = np.array(self.sampler.blobs)
blobs_trimmed = blobs[self.nburn:, :, :].reshape((-1, 2))
log_likelihoods, log_priors = blobs_trimmed.T
self.result.log_likelihood_evaluations = log_likelihoods
self.result.log_prior_evaluations = log_priors
self.result.walkers = self.sampler.chain
self.result.log_evidence = np.nan
self.result.log_evidence_err = np.nan
return self.result
......@@ -3,7 +3,6 @@ from ..utils import logger, get_progress_bar
import numpy as np
import os
from .emcee import Emcee
from .base_sampler import SamplerError
class Kombine(Emcee):
......@@ -157,20 +156,11 @@ class Kombine(Emcee):
tmp_chain = self.sampler.chain.copy()
self.calculate_autocorrelation(tmp_chain.reshape((-1, self.ndim)))
self.print_nburn_logging_info()
self.result.nburn = self.nburn
if self.result.nburn > self.nsteps:
raise SamplerError(
"The run has finished, but the chain is not burned in: `nburn < nsteps` ({} < {}). Try increasing the "
"number of steps.".format(self.result.nburn, self.nsteps))
self._generate_result()
self.result.log_evidence_err = np.nan
tmp_chain = self.sampler.chain[self.nburn:, :, :].copy()
self.result.samples = tmp_chain.reshape((-1, self.ndim))
blobs = np.array(self.sampler.blobs)
blobs_trimmed = blobs[self.nburn:, :, :].reshape((-1, 2))
self.calc_likelihood_count()
log_likelihoods, log_priors = blobs_trimmed.T
self.result.log_likelihood_evaluations = log_likelihoods
self.result.log_prior_evaluations = log_priors
self.result.walkers = self.sampler.chain.reshape((self.nwalkers, self.nsteps, self.ndim))
self.result.log_evidence = np.nan
self.result.log_evidence_err = np.nan
return self.result
......@@ -469,43 +469,13 @@ class Pymc3(MCMCSampler):
for sms in self.step_method[key]:
curmethod = sms.lower()
methodslist.append(curmethod)
args = {}
if curmethod == 'nuts':
if nuts_kwargs is not None:
args = nuts_kwargs
elif step_kwargs is not None:
args = step_kwargs.pop('nuts', {})
# add values into nuts_kwargs
nuts_kwargs = args
else:
args = {}
else:
if step_kwargs is not None:
args = step_kwargs.get(curmethod, {})
else:
args = {}
self.kwargs['step'].append(
pymc3.__dict__[step_methods[curmethod]](vars=[self.pymc3_priors[key]], **args))
nuts_kwargs = self._create_nuts_kwargs(curmethod, key, nuts_kwargs, pymc3, step_kwargs,
step_methods)
else:
curmethod = self.step_method[key].lower()
methodslist.append(curmethod)
args = {}
if curmethod == 'nuts':
if nuts_kwargs is not None:
args = nuts_kwargs
elif step_kwargs is not None:
args = step_kwargs.pop('nuts', {})
# add values into nuts_kwargs
nuts_kwargs = args
else:
args = {}
else:
if step_kwargs is not None:
args = step_kwargs.get(curmethod, {})
else:
args = {}
self.kwargs['step'].append(
pymc3.__dict__[step_methods[curmethod]](vars=[self.pymc3_priors[key]], **args))
nuts_kwargs = self._create_nuts_kwargs(curmethod, key, nuts_kwargs, pymc3, step_kwargs,
step_methods)
else:
with self.pymc3_model:
# check for a compound step list
......@@ -514,18 +484,7 @@ class Pymc3(MCMCSampler):
for sms in self.step_method:
curmethod = sms.lower()
methodslist.append(curmethod)
args = {}
if curmethod == 'nuts':
if nuts_kwargs is not None:
args = nuts_kwargs
elif step_kwargs is not None:
args = step_kwargs.pop('nuts', {})
# add values into nuts_kwargs
nuts_kwargs = args
else:
args = {}
else:
args = step_kwargs.get(curmethod, {})
args, nuts_kwargs = self._create_args_and_nuts_kwargs(curmethod, nuts_kwargs, step_kwargs)
compound.append(pymc3.__dict__[step_methods[curmethod]](**args))
self.kwargs['step'] = compound
else:
......@@ -533,18 +492,7 @@ class Pymc3(MCMCSampler):
if self.step_method is not None:
curmethod = self.step_method.lower()
methodslist.append(curmethod)
args = {}
if curmethod == 'nuts':
if nuts_kwargs is not None:
args = nuts_kwargs
elif step_kwargs is not None:
args = step_kwargs.pop('nuts', {})
# add values into nuts_kwargs
nuts_kwargs = args
else:
args = {}
else:
args = step_kwargs.get(curmethod, {})
args, nuts_kwargs = self._create_args_and_nuts_kwargs(curmethod, nuts_kwargs, step_kwargs)
self.kwargs['step'] = pymc3.__dict__[step_methods[curmethod]](**args)
else:
# re-add step_kwargs if no step methods are set
......@@ -582,6 +530,37 @@ class Pymc3(MCMCSampler):
self.calc_likelihood_count()
return self.result
def _create_args_and_nuts_kwargs(self, curmethod, nuts_kwargs, step_kwargs):
if curmethod == 'nuts':
args, nuts_kwargs = self._get_nuts_args(nuts_kwargs, step_kwargs)
else:
args = step_kwargs.get(curmethod, {})
return args, nuts_kwargs
def _create_nuts_kwargs(self, curmethod, key, nuts_kwargs, pymc3, step_kwargs, step_methods):
if curmethod == 'nuts':
args, nuts_kwargs = self._get_nuts_args(nuts_kwargs, step_kwargs)
else:
if step_kwargs is not None:
args = step_kwargs.get(curmethod, {})
else:
args = {}
self.kwargs['step'].append(
pymc3.__dict__[step_methods[curmethod]](vars=[self.pymc3_priors[key]], **args))
return nuts_kwargs
@staticmethod
def _get_nuts_args(nuts_kwargs, step_kwargs):
if nuts_kwargs is not None:
args = nuts_kwargs
elif step_kwargs is not None:
args = step_kwargs.pop('nuts', {})
# add values into nuts_kwargs
nuts_kwargs = args
else:
args = {}
return args, nuts_kwargs
def set_prior(self):
"""
Set the PyMC3 prior distributions.
......
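
The precedence the extracted `_get_nuts_args` helper implements, sketched on
invented inputs (assuming the sampler class is importable as below):

```python
from bilby.core.sampler.pymc3 import Pymc3  # assumed import path

# an explicit nuts_kwargs wins outright
args, nuts = Pymc3._get_nuts_args({"target_accept": 0.9},
                                  {"nuts": {"target_accept": 0.8}})
# args == {"target_accept": 0.9}

# otherwise the 'nuts' entry is popped out of step_kwargs and becomes nuts_kwargs
args, nuts = Pymc3._get_nuts_args(None, {"nuts": {"target_accept": 0.8}})
# args == nuts == {"target_accept": 0.8}
```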
......@@ -132,6 +132,15 @@ def _infer_args_from_function_except_for_first_arg(func):
return infer_args_from_function_except_n_args(func=func, n=1)
def get_dict_with_properties(obj):
property_names = [p for p in dir(obj.__class__)
if isinstance(getattr(obj.__class__, p), property)]
dict_with_properties = obj.__dict__.copy()
for key in property_names:
dict_with_properties[key] = getattr(obj, key)
return dict_with_properties
def get_sampling_frequency(time_array):
"""
Calculate sampling frequency from a time series
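
A small, self-contained illustration of what `get_dict_with_properties` returns
(the `Point` class is invented for demonstration):

```python
class Point(object):
    def __init__(self):
        self._x = 1.0

    @property
    def x(self):
        return self._x

print(get_dict_with_properties(Point()))
# {'_x': 1.0, 'x': 1.0} -- instance attributes plus evaluated properties
```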
......@@ -1026,6 +1035,41 @@ def encode_astropy_quantity(dct):
return dct
def move_old_file(filename, overwrite=False):
""" Moves or removes an old file.
Parameters
----------
filename: str
Name of the file to be moved
overwrite: bool, optional
If True, remove any existing file; otherwise rename the
existing file to filename + '.old'
"""
if os.path.isfile(filename):
if overwrite:
logger.debug('Removing existing file {}'.format(filename))
os.remove(filename)
else:
logger.debug(
'Renaming existing file {} to {}.old'.format(filename,
filename))
os.rename(filename, filename + '.old')
logger.debug("Saving result to {}".format(filename))
def load_json(filename, gzip):
if gzip or os.path.splitext(filename)[1].lstrip('.') == 'gz':
import gzip  # shadows the boolean `gzip` argument, which has already been consumed
with gzip.GzipFile(filename, 'r') as file:
json_str = file.read().decode('utf-8')
dictionary = json.loads(json_str, object_hook=decode_bilby_json)
else:
with open(filename, 'r') as file:
dictionary = json.load(file, object_hook=decode_bilby_json)
return dictionary
def decode_bilby_json(dct):
if dct.get("__prior_dict__", False):
cls = getattr(import_module(dct['__module__']), dct['__name__'])
......
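
A hypothetical sketch of the two new helpers working together (file names invented):

```python
# first save: no existing file, so only "Saving result to ..." is logged
move_old_file("outdir/run_result.json", overwrite=False)
# ... write outdir/run_result.json here ...

# second save: the existing file is renamed to run_result.json.old
move_old_file("outdir/run_result.json", overwrite=False)

# reload the backup; gzip=False since the name does not end in .gz
dictionary = load_json("outdir/run_result.json.old", gzip=False)
```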
......@@ -439,14 +439,7 @@ class GravitationalWaveTransient(Likelihood):
if signal_polarizations is None:
signal_polarizations = \
self.waveform_generator.frequency_domain_strain(self.parameters)
d_inner_h = 0
h_inner_h = 0
for interferometer in self.interferometers:
per_detector_snr = self.calculate_snrs(
signal_polarizations, interferometer)
d_inner_h += per_detector_snr.d_inner_h
h_inner_h += per_detector_snr.optimal_snr_squared
d_inner_h, h_inner_h = self._calculate_inner_products(signal_polarizations)
d_inner_h_dist = (
d_inner_h * self.parameters['luminosity_distance'] /
......@@ -472,6 +465,17 @@ class GravitationalWaveTransient(Likelihood):
self._rescale_signal(signal_polarizations, new_distance)
return new_distance
def _calculate_inner_products(self, signal_polarizations):
d_inner_h = 0
h_inner_h = 0
for interferometer in self.interferometers:
per_detector_snr = self.calculate_snrs(
signal_polarizations, interferometer)
d_inner_h += per_detector_snr.d_inner_h
h_inner_h += per_detector_snr.optimal_snr_squared
return d_inner_h, h_inner_h
def generate_phase_sample_from_marginalized_likelihood(
self, signal_polarizations=None):
"""
......@@ -497,14 +501,7 @@ class GravitationalWaveTransient(Likelihood):
if signal_polarizations is None:
signal_polarizations = \
self.waveform_generator.frequency_domain_strain(self.parameters)
d_inner_h = 0
h_inner_h = 0
for interferometer in self.interferometers:
per_detector_snr = self.calculate_snrs(
signal_polarizations, interferometer)
d_inner_h += per_detector_snr.d_inner_h
h_inner_h += per_detector_snr.optimal_snr_squared
d_inner_h, h_inner_h = self._calculate_inner_products(signal_polarizations)
phases = np.linspace(0, 2 * np.pi, 101)
phasor = np.exp(-2j * phases)
......
......@@ -393,7 +393,13 @@ class CompactBinaryCoalescenceResult(CoreResult):
)
)
else:
fig, axs = plt.subplots(2, 1)
old_font_size = rcParams["font.size"]
rcParams["font.size"] = 20
fig, axs = plt.subplots(
2, 1,
gridspec_kw=dict(height_ratios=[1.5, 1]),
figsize=(16, 12.5)
)
if PLOT_DATA:
if format == "html":
......@@ -567,9 +573,10 @@ class CompactBinaryCoalescenceResult(CoreResult):
col=1,
)
else:
lower_limit = np.mean(fd_waveforms, axis=0)[0] / 1e3
axs[0].loglog(
plot_frequencies,
np.median(fd_waveforms, axis=0), color=WAVEFORM_COLOR, label='Median reconstructed')
np.mean(fd_waveforms, axis=0), color=WAVEFORM_COLOR, label='Mean reconstructed')
axs[0].fill_between(
plot_frequencies,
np.percentile(fd_waveforms, lower_percentile, axis=0),
......@@ -578,7 +585,7 @@ class CompactBinaryCoalescenceResult(CoreResult):
int(upper_percentile - lower_percentile)),
alpha=0.3)
axs[1].plot(
plot_times, np.median(td_waveforms, axis=0),
plot_times, np.mean(td_waveforms, axis=0),
color=WAVEFORM_COLOR)
axs[1].fill_between(
plot_times, np.percentile(
......@@ -653,12 +660,12 @@ class CompactBinaryCoalescenceResult(CoreResult):
axs[0].set_xlim(interferometer.minimum_frequency,
interferometer.maximum_frequency)
axs[1].set_xlim(start_time, end_time)
axs[0].set_ylim(lower_limit)
axs[0].set_xlabel(f_domain_x_label)
axs[0].set_ylabel(f_domain_y_label)
axs[1].set_xlabel(t_domain_x_label)
axs[1].set_ylabel(t_domain_y_label)
axs[0].legend(loc='lower left')
axs[0].legend(loc='lower left', ncol=2)
if save:
filename = os.path.join(
......@@ -675,7 +682,9 @@ class CompactBinaryCoalescenceResult(CoreResult):
)
plt.close()
logger.debug("Waveform figure saved to {}".format(filename))
rcParams["font.size"] = old_font_size
else:
rcParams["font.size"] = old_font_size
return fig
def plot_skymap(
......
......@@ -67,18 +67,18 @@ def lal_binary_black_hole(
mode_array:
Activate a specific mode array and evaluate the model using those
modes only. e.g. waveform_arguments =
dict(waveform_approximant='IMRPhenomHM', modearray=[[2,2],[2,-2])
dict(waveform_approximant='IMRPhenomHM', mode_array=[[2,2],[2,-2]])
returns the 22 and 2-2 modes only of IMRPhenomHM. You can only
specify modes that are included in that particular model. e.g.
waveform_arguments = dict(waveform_approximant='IMRPhenomHM',
modearray=[[2,2],[2,-2],[5,5],[5,-5]]) is not allowed because the
mode_array=[[2,2],[2,-2],[5,5],[5,-5]]) is not allowed because the
55 modes are not included in this model. Be aware that some models
only take positive modes and return the positive and the negative
mode together, while others need to call both. e.g.
waveform_arguments = dict(waveform_approximant='IMRPhenomHM',
modearray=[[2,2],[4,-4]]) returns the 22 and 2-2 of IMRPhenomHM.
mode_array=[[2,2],[4,-4]]) returns the 22 and 2-2 of IMRPhenomHM.
However, waveform_arguments =
dict(waveform_approximant='IMRPhenomXHM', modearray=[[2,2],[4,-4]])
dict(waveform_approximant='IMRPhenomXHM', mode_array=[[2,2],[4,-4]])
returns the 22 and 4-4 of IMRPhenomXHM.
Returns
......@@ -150,18 +150,18 @@ def lal_binary_neutron_star(
mode_array:
Activate a specific mode array and evaluate the model using those
modes only. e.g. waveform_arguments =
dict(waveform_approximant='IMRPhenomHM', modearray=[[2,2],[2,-2])
dict(waveform_approximant='IMRPhenomHM', mode_array=[[2,2],[2,-2]])
returns the 22 and 2-2 modes only of IMRPhenomHM. You can only
specify modes that are included in that particular model. e.g.
waveform_arguments = dict(waveform_approximant='IMRPhenomHM',
modearray=[[2,2],[2,-2],[5,5],[5,-5]]) is not allowed because the
mode_array=[[2,2],[2,-2],[5,5],[5,-5]]) is not allowed because the
55 modes are not included in this model. Be aware that some models
only take positive modes and return the positive and the negative
mode together, while others need to call both. e.g.
waveform_arguments = dict(waveform_approximant='IMRPhenomHM',
modearray=[[2,2],[4,-4]]) returns the 22 and 2-2 of IMRPhenomHM.
mode_array=[[2,2],[4,-4]]) returns the 22 and 2-2 of IMRPhenomHM.
However, waveform_arguments =
dict(waveform_approximant='IMRPhenomXHM', modearray=[[2,2],[4,-4]])
dict(waveform_approximant='IMRPhenomXHM', mode_array=[[2,2],[4,-4]])
returns the 22 and 4-4 of IMRPhenomXHM.
Returns
......@@ -217,18 +217,18 @@ def lal_eccentric_binary_black_hole_no_spins(
mode_array:
Activate a specific mode array and evaluate the model using those
modes only. e.g. waveform_arguments =
dict(waveform_approximant='IMRPhenomHM', modearray=[[2,2],[2,-2])
dict(waveform_approximant='IMRPhenomHM', mode_array=[[2,2],[2,-2]])
returns the 22 and 2-2 modes only of IMRPhenomHM. You can only
specify modes that are included in that particular model. e.g.
waveform_arguments = dict(waveform_approximant='IMRPhenomHM',
modearray=[[2,2],[2,-2],[5,5],[5,-5]]) is not allowed because the
mode_array=[[2,2],[2,-2],[5,5],[5,-5]]) is not allowed because the
55 modes are not included in this model. Be aware that some models
only take positive modes and return the positive and the negative
mode together, while others need to call both. e.g.
waveform_arguments = dict(waveform_approximant='IMRPhenomHM',
modearray=[[2,2],[4,-4]]) returns the 22 and 2-2 of IMRPhenomHM.
mode_array=[[2,2],[4,-4]]) returns the 22 and 2-2 of IMRPhenomHM.
However, waveform_arguments =
dict(waveform_approximant='IMRPhenomXHM', modearray=[[2,2],[4,-4]])
dict(waveform_approximant='IMRPhenomXHM', mode_array=[[2,2],[4,-4]])
returns the 22 and 4-4 of IMRPhenomXHM.
Returns
......@@ -343,12 +343,12 @@ def _base_lal_cbc_fd_waveform(
lalsim_SimInspiralWaveformParamsInsertTidalLambda2(
waveform_dictionary, lambda_2)
if 'modearray' in waveform_kwargs:
modearray = waveform_kwargs['modearray']
mode_array = lalsim.SimInspiralCreateModeArray()
for mode in modearray:
lalsim.SimInspiralModeArrayActivateMode(mode_array, mode[0], mode[1])
lalsim.SimInspiralWaveformParamsInsertModeArray(waveform_dictionary, mode_array)
if ('mode_array' in waveform_kwargs) and waveform_kwargs['mode_array'] is not None:
mode_array = waveform_kwargs['mode_array']
mode_array_lal = lalsim.SimInspiralCreateModeArray()
for mode in mode_array:
lalsim.SimInspiralModeArrayActivateMode(mode_array_lal, mode[0], mode[1])
lalsim.SimInspiralWaveformParamsInsertModeArray(waveform_dictionary, mode_array_lal)
if lalsim.SimInspiralImplementedFDApproximants(approximant):
wf_func = lalsim_SimInspiralChooseFDWaveform
......
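
The calling pattern the corrected docstrings describe, as a sketch (the
approximant choice and generator settings are illustrative only):

```python
import bilby

waveform_arguments = dict(
    waveform_approximant='IMRPhenomHM',
    reference_frequency=50.,
    mode_array=[[2, 2], [2, -2]],  # evaluate only the 22 and 2-2 modes
)
waveform_generator = bilby.gw.WaveformGenerator(
    duration=4., sampling_frequency=2048.,
    frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole,
    waveform_arguments=waveform_arguments,
)
```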
......@@ -717,28 +717,15 @@ def lalsim_SimInspiralFD(
approximant: int, str
"""
[mass_1, mass_2, spin_1x, spin_1y, spin_1z, spin_2x, spin_2y, spin_2z,
luminosity_distance, iota, phase, longitude_ascending_nodes,
eccentricity, mean_per_ano, delta_frequency, minimum_frequency,
maximum_frequency, reference_frequency] = convert_args_list_to_float(
args = convert_args_list_to_float(
mass_1, mass_2, spin_1x, spin_1y, spin_1z, spin_2x, spin_2y, spin_2z,
luminosity_distance, iota, phase, longitude_ascending_nodes,
eccentricity, mean_per_ano, delta_frequency, minimum_frequency,
maximum_frequency, reference_frequency)
if isinstance(approximant, int):
pass
elif isinstance(approximant, str):
approximant = lalsim_GetApproximantFromString(approximant)
else:
raise ValueError("approximant not an int")
approximant = _get_lalsim_approximant(approximant)
return lalsim.SimInspiralFD(
mass_1, mass_2, spin_1x, spin_1y, spin_1z, spin_2x, spin_2y,
spin_2z, luminosity_distance, iota, phase,
longitude_ascending_nodes, eccentricity, mean_per_ano, delta_frequency,
minimum_frequency, maximum_frequency, reference_frequency,
waveform_dictionary, approximant)
return lalsim.SimInspiralFD(*args, waveform_dictionary, approximant)
def lalsim_SimInspiralChooseFDWaveform(
......@@ -774,28 +761,25 @@ def lalsim_SimInspiralChooseFDWaveform(
approximant: int, str
"""
[mass_1, mass_2, spin_1x, spin_1y, spin_1z, spin_2x, spin_2y, spin_2z,
luminosity_distance, iota, phase, longitude_ascending_nodes,
eccentricity, mean_per_ano, delta_frequency, minimum_frequency,
maximum_frequency, reference_frequency] = convert_args_list_to_float(
args = convert_args_list_to_float(
mass_1, mass_2, spin_1x, spin_1y, spin_1z, spin_2x, spin_2y, spin_2z,
luminosity_distance, iota, phase, longitude_ascending_nodes,
eccentricity, mean_per_ano, delta_frequency, minimum_frequency,
maximum_frequency, reference_frequency)
approximant = _get_lalsim_approximant(approximant)
return lalsim.SimInspiralChooseFDWaveform(*args, waveform_dictionary, approximant)
def _get_lalsim_approximant(approximant):
if isinstance(approximant, int):
pass
elif isinstance(approximant, str):
approximant = lalsim_GetApproximantFromString(approximant)
else:
raise ValueError("approximant must be an int or str")
return lalsim.SimInspiralChooseFDWaveform(
mass_1, mass_2, spin_1x, spin_1y, spin_1z, spin_2x, spin_2y,
spin_2z, luminosity_distance, iota, phase,
longitude_ascending_nodes, eccentricity, mean_per_ano, delta_frequency,
minimum_frequency, maximum_frequency, reference_frequency,
waveform_dictionary, approximant)
return approximant
def lalsim_SimInspiralChooseFDWaveformSequence(
......
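
Finally, what the extracted `_get_lalsim_approximant` helper accepts, sketched briefly:

```python
# strings are converted via lalsim.GetApproximantFromString
num = _get_lalsim_approximant('IMRPhenomPv2')
assert isinstance(num, int)

# integer approximant enum values pass through unchanged
assert _get_lalsim_approximant(num) == num
```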