
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (46)
Showing 368 additions and 182 deletions
@@ -44,12 +44,17 @@ containers:
- apt-get -yqq install libhdf5-dev
script:
- python -m pip install .
- python -m pip list installed
- python -c "import bilby"
- python -c "import bilby.bilby_mcmc"
- python -c "import bilby.core"
- python -c "import bilby.core.prior"
- python -c "import bilby.core.sampler"
- python -c "import bilby.core.utils"
- python -c "import bilby.gw"
- python -c "import bilby.gw.detector"
- python -c "import bilby.gw.eos"
- python -c "import bilby.gw.likelihood"
- python -c "import bilby.gw.sampler"
- python -c "import bilby.hyper"
- python -c "import cli_bilby"
@@ -122,6 +127,7 @@ python-3.7:
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python37
script:
- python -m pip install .
- python -m pip list installed
# Run pyflakes
- flake8 .
@@ -143,6 +149,7 @@ python-3.8:
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python38
script:
- python -m pip install .
- python -m pip list installed
- pytest
# test example on python 3.9
@@ -152,6 +159,7 @@ python-3.9:
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python39
script:
- python -m pip install .
- python -m pip list installed
- pytest
@@ -162,6 +170,7 @@ python-3.7-samplers:
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python37
script:
- python -m pip install .
- python -m pip list installed
- pytest test/integration/sampler_run_test.py --durations 10
@@ -172,6 +181,7 @@ python-3.8-samplers:
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python38
script:
- python -m pip install .
- python -m pip list installed
- pytest test/integration/sampler_run_test.py --durations 10
@@ -182,6 +192,7 @@ python-3.9-samplers:
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python39
script:
- python -m pip install .
- python -m pip list installed
- pytest test/integration/sampler_run_test.py --durations 10
@@ -193,6 +204,7 @@ integration-tests-python-3.7:
- schedules
script:
- python -m pip install .
- python -m pip list installed
# Run tests which are only done on schedule
- pytest test/integration/example_test.py
@@ -204,7 +216,21 @@ plotting-python-3.7:
- schedules
script:
- python -m pip install .
- python -m pip install ligo.skymap
- python -m pip install "ligo.skymap>=0.5.3"
- python -m pip list installed
- pytest test/gw/plot_test.py
plotting-python-3.9:
stage: test
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python39
needs: ["basic-3.9", "precommits-py3.9"]
only:
- schedules
script:
- python -m pip install .
- python -m pip install "ligo.skymap>=0.5.3"
- python -m pip list installed
- pytest test/gw/plot_test.py
@@ -301,4 +327,4 @@ pypi-release:
script:
- twine upload dist/*
only:
- tags
- tags
@@ -15,14 +15,8 @@ repos:
hooks:
- id: codespell
args: [--ignore-words=.dictionary.txt]
- repo: https://github.com/asottile/seed-isort-config
rev: v1.3.0
hooks:
- id: seed-isort-config
args: [--application-directories, 'bilby/']
files: ^bilby/bilby_mcmc/
- repo: https://github.com/pre-commit/mirrors-isort
rev: v4.3.21
rev: v5.10.1
hooks:
- id: isort # sort imports alphabetically and separates import into sections
args: [-w=88, -m=3, -tc, -sp=setup.cfg ]
@@ -27,6 +27,7 @@ Gregory Ashton
Hector Estelles
Ignacio Magaña Hernandez
Isobel Marguarethe Romero-Shaw
Jack Heinzel
Jade Powell
James A Clark
Jeremy G Baier
@@ -69,6 +70,7 @@ Sharan Banagiri
Shichao Wu
Simon Stevenson
Soichiro Morisaki
Stephen R Green
Sumeet Kulkarni
Sylvia Biscoveanu
Tathagata Ghosh
# All notable changes will be documented in this file
## [1.1.5] 2022-01-14
Version 1.1.5 release of bilby
### Added
- Option to enforce that a GW signal fits into the segment duration (!1041)
- Removed saving of the `.dat` samples file with `dynesty` (!1028)
- Catch corrupted `json` result files being passed (!1034)
### Changes
- Fixes to conversion function for `nessai` and `cpnest` (!1055)
- Workaround for `astropy` v5 (!1054)
- Various fixes to testing system (!1038, !1044, !1045, !1048)
- Updated defaults for `nessai` (!1042)
- Small bug fixes (!1032, !1036, !1039, !1046, !1052)
- Bug fix in the definition of some standard interferometers (!1037)
- Improvements to the multi-banded GravitationalWaveTransient likelihood (!1026)
- Improve meta data comparison (!1035)
## [1.1.4] 2021-10-08
Version 1.1.4 release of bilby
@@ -538,10 +538,17 @@ class Result(object):
@classmethod
@docstring(_load_doctstring.format(format="json"))
def from_json(cls, filename=None, outdir=None, label=None, gzip=False):
from json.decoder import JSONDecodeError
filename = _determine_file_name(filename, outdir, label, 'json', gzip)
if os.path.isfile(filename):
dictionary = load_json(filename, gzip)
try:
dictionary = load_json(filename, gzip)
except JSONDecodeError as e:
raise IOError(
"JSON failed to decode {} with message {}".format(filename, e)
)
try:
return cls(**dictionary)
except TypeError as e:
@@ -1987,9 +1994,9 @@ def plot_multiple(results, filename=None, labels=None, colours=None,
if evidences:
if np.isnan(results[0].log_bayes_factor):
template = ' $\mathrm{{ln}}(Z)={lnz:1.3g}$'
template = r' $\mathrm{{ln}}(Z)={lnz:1.3g}$'
else:
template = ' $\mathrm{{ln}}(B)={lnbf:1.3g}$'
template = r' $\mathrm{{ln}}(B)={lnbf:1.3g}$'
labels = [template.format(lnz=result.log_evidence,
lnbf=result.log_bayes_factor)
for ii, result in enumerate(results)]
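The hunk above converts a bare JSONDecodeError from a corrupted result file into an IOError that names the offending file. A minimal sketch of the behaviour using only the stdlib (load_json_checked is a hypothetical stand-in for bilby's load_json, with gzip handling omitted):

import json
from json.decoder import JSONDecodeError

def load_json_checked(filename):
    # Stand-in for bilby's load_json: surface corrupt files as IOError
    try:
        with open(filename, "r") as ff:
            return json.load(ff)
    except JSONDecodeError as e:
        raise IOError(
            "JSON failed to decode {} with message {}".format(filename, e)
        )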
@@ -512,7 +512,13 @@ class Sampler(object):
key, self) is False:
logger.debug("Cached value {} is unmatched".format(key))
use_cache = False
if self.meta_data["likelihood"] != self.cached_result.meta_data["likelihood"]:
try:
# Recursively check the dictionaries, allowing for numpy arrays
np.testing.assert_equal(
self.meta_data["likelihood"],
self.cached_result.meta_data["likelihood"]
)
except AssertionError:
use_cache = False
if use_cache is False:
self.cached_result = None
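Switching from a plain != comparison to np.testing.assert_equal matters because likelihood meta-data can contain numpy arrays, and equality on dictionaries holding arrays raises rather than returning a bool. A small illustration with hypothetical meta-data:

import numpy as np

cached = {"reference_frequency": 50.0, "time_array": np.array([0.0, 1.0])}
current = {"reference_frequency": 50.0, "time_array": np.array([0.0, 1.0])}

# `cached != current` raises ValueError ("truth value of an array is
# ambiguous") because of the array values; assert_equal recurses instead.
try:
    np.testing.assert_equal(cached, current)
    use_cache = True
except AssertionError:
    use_cache = False
print(use_cache)  # True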
@@ -3,6 +3,7 @@ import array
import copy
import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured
from pandas import DataFrame
from .base_sampler import NestedSampler
@@ -121,11 +122,12 @@ class Cpnest(NestedSampler):
out.plot()
self.calc_likelihood_count()
self.result.posterior = DataFrame(out.posterior_samples)
self.result.samples = structured_to_unstructured(
out.posterior_samples[self.search_parameter_keys]
)
self.result.log_likelihood_evaluations = out.posterior_samples['logL']
self.result.nested_samples = DataFrame(out.get_nested_samples(filename=''))
self.result.nested_samples.rename(columns=dict(logL='log_likelihood'), inplace=True)
self.result.posterior.rename(columns=dict(logL='log_likelihood', logPrior='log_prior'),
inplace=True)
_, log_weights = compute_weights(np.array(self.result.nested_samples.log_likelihood),
np.array(out.NS.state.nlive))
self.result.nested_samples['weights'] = np.exp(log_weights)
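structured_to_unstructured is what turns cpnest's structured posterior array into the plain 2-D float array stored as result.samples. A toy version of the conversion (field names hypothetical):

import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured

posterior = np.array(
    [(35.0, 29.0, -10.5), (36.1, 28.2, -9.8)],
    dtype=[("mass_1", float), ("mass_2", float), ("logL", float)],
)
# Keep only the searched parameters, in a fixed column order
samples = structured_to_unstructured(posterior[["mass_1", "mass_2"]])
print(samples.shape)  # (2, 2)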
@@ -246,6 +246,12 @@ class Dynesty(NestedSampler):
else:
self._last_print_time = _time
# Add time in current run to overall sampling time
total_time = self.sampling_time + _time - self.start_time
# Remove fractional seconds
total_time_str = str(total_time).split('.')[0]
# Extract results at the current iteration.
(worst, ustar, vstar, loglstar, logvol, logwt,
logz, logzvar, h, nc, worst_it, boundidx, bounditer,
@@ -281,12 +287,11 @@
self.pbar.set_postfix_str(" ".join(string), refresh=False)
self.pbar.update(niter - self.pbar.n)
elif "interval" in self.kwargs["print_method"]:
formatted = " ".join([str(_time - self.start_time)] + string)
print("{}it [{}]".format(niter, formatted), file=sys.stdout)
formatted = " ".join([total_time_str] + string)
print("{}it [{}]".format(niter, formatted), file=sys.stdout, flush=True)
else:
_time = datetime.datetime.now()
formatted = " ".join([str(_time - self.start_time)] + string)
print("{}it [{}]".format(niter, formatted), file=sys.stdout)
formatted = " ".join([total_time_str] + string)
print("{}it [{}]".format(niter, formatted), file=sys.stdout, flush=True)
def _apply_dynesty_boundaries(self):
self._periodic = list()
@@ -639,8 +644,6 @@ class Dynesty(NestedSampler):
if self.sampler.pool is not None:
self.sampler.M = self.sampler.pool.map
self.dump_samples_to_dat()
def dump_samples_to_dat(self):
sampler = self.sampler
ln_weights = sampler.saved_logwt - sampler.saved_logz[-1]
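The new total_time folds the sampling time of earlier, checkpointed runs into the progress string, and the split on '.' strips fractional seconds from the timedelta's string form. The formatting in isolation (values illustrative):

import datetime

sampling_time = datetime.timedelta(hours=1, minutes=2)  # from earlier runs
start_time = datetime.datetime.now()
_time = datetime.datetime.now()  # time of the current progress print

total_time = sampling_time + _time - start_time
# str(timedelta) looks like "1:02:00.000123"; keep only "1:02:00"
total_time_str = str(total_time).split(".")[0]
print(total_time_str)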
import numpy as np
import os
from pandas import DataFrame
from .base_sampler import NestedSampler
@@ -15,55 +16,42 @@ class Nessai(NestedSampler):
Documentation: https://nessai.readthedocs.io/
"""
default_kwargs = dict(
output=None,
nlive=1000,
stopping=0.1,
resume=True,
max_iteration=None,
checkpointing=True,
seed=1234,
acceptance_threshold=0.01,
analytic_priors=False,
maximum_uninformed=1000,
uninformed_proposal=None,
uninformed_proposal_kwargs=None,
flow_class=None,
flow_config=None,
training_frequency=None,
reset_weights=False,
reset_permutations=False,
reset_acceptance=False,
train_on_empty=True,
cooldown=100,
memory=False,
poolsize=None,
drawsize=None,
max_poolsize_scale=10,
update_poolsize=False,
latent_prior='truncated_gaussian',
draw_latent_kwargs=None,
compute_radius_with_all=False,
min_radius=False,
max_radius=50,
check_acceptance=False,
fuzz=1.0,
expansion_fraction=1.0,
rescale_parameters=True,
rescale_bounds=[-1, 1],
update_bounds=False,
boundary_inversion=False,
inversion_type='split', detect_edges=False,
detect_edges_kwargs=None,
reparameterisations=None,
n_pool=None,
max_threads=1,
pytorch_threads=None,
plot=None,
proposal_plots=False
)
_default_kwargs = None
seed_equiv_kwargs = ['sampling_seed']
@property
def default_kwargs(self):
"""Default kwargs for nessai.
Retrieves default values from nessai directly and then includes any
bilby specific defaults. This avoids the need to update bilby when the
defaults change or new kwargs are added to nessai.
"""
if not self._default_kwargs:
from inspect import signature
from nessai.flowsampler import FlowSampler
from nessai.nestedsampler import NestedSampler
from nessai.proposal import AugmentedFlowProposal, FlowProposal
kwargs = {}
classes = [
AugmentedFlowProposal,
FlowProposal,
NestedSampler,
FlowSampler,
]
for c in classes:
kwargs.update(
{k: v.default for k, v in signature(c).parameters.items() if v.default is not v.empty}
)
# Defaults for bilby that will override nessai defaults
bilby_defaults = dict(
output=None,
)
kwargs.update(bilby_defaults)
self._default_kwargs = kwargs
return self._default_kwargs
def log_prior(self, theta):
"""
@@ -82,7 +70,7 @@ class Nessai(NestedSampler):
def run_sampler(self):
from nessai.flowsampler import FlowSampler
from nessai.model import Model as BaseModel
from nessai.livepoint import dict_to_live_points
from nessai.livepoint import dict_to_live_points, live_points_to_array
from nessai.posterior import compute_weights
from nessai.utils import setup_logger
@@ -148,12 +136,13 @@
# Manually set likelihood evaluations because parallelisation breaks the counter
self.result.num_likelihood_evaluations = out.ns.likelihood_evaluations[-1]
self.result.posterior = DataFrame(out.posterior_samples)
self.result.samples = live_points_to_array(
out.posterior_samples, self.search_parameter_keys
)
self.result.log_likelihood_evaluations = out.posterior_samples['logL']
self.result.nested_samples = DataFrame(out.nested_samples)
self.result.nested_samples.rename(
columns=dict(logL='log_likelihood', logP='log_prior'), inplace=True)
self.result.posterior.rename(
columns=dict(logL='log_likelihood', logP='log_prior'), inplace=True)
_, log_weights = compute_weights(np.array(self.result.nested_samples.log_likelihood),
np.array(out.ns.state.nlive))
self.result.nested_samples['weights'] = np.exp(log_weights)
@@ -194,9 +183,9 @@
self.kwargs['n_pool'] = None
if not self.kwargs['output']:
self.kwargs['output'] = self.outdir + '/{}_nessai/'.format(self.label)
if self.kwargs['output'].endswith('/') is False:
self.kwargs['output'] = '{}/'.format(self.kwargs['output'])
self.kwargs['output'] = os.path.join(
self.outdir, '{}_nessai'.format(self.label), ''
)
check_directory_exists_and_if_not_mkdir(self.kwargs['output'])
NestedSampler._verify_kwargs_against_default_kwargs(self)
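The default_kwargs property above harvests keyword defaults directly from nessai's constructor signatures, so the defaults always match the installed nessai version. The underlying pattern, shown with a hypothetical class:

from inspect import signature

class Example:
    def __init__(self, nlive=1000, seed=1234, output=None):
        pass

# Collect every keyword argument that declares a default value
defaults = {
    k: v.default
    for k, v in signature(Example).parameters.items()
    if v.default is not v.empty
}
print(defaults)  # {'nlive': 1000, 'seed': 1234, 'output': None}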
@@ -551,23 +551,13 @@ class Pymc3(MCMCSampler):
with self.pymc3_model:
# perform the sampling
trace = pymc3.sample(**self.kwargs)
trace = pymc3.sample(**self.kwargs, return_inferencedata=True)
nparams = len([key for key in self.priors.keys() if not isinstance(self.priors[key], DeltaFunction)])
nsamples = len(trace) * self.chains
self.result.samples = np.zeros((nsamples, nparams))
count = 0
for key in self.priors.keys():
if not isinstance(self.priors[key], DeltaFunction): # ignore DeltaFunction variables
if not isinstance(self.priors[key], MultivariateGaussian):
self.result.samples[:, count] = trace[key]
else:
# get multivariate Gaussian samples
priorset = self.multivariate_normal_sets[key]['set']
index = self.multivariate_normal_sets[key]['index']
self.result.samples[:, count] = trace[priorset][:, index]
count += 1
posterior = trace.posterior.to_dataframe().reset_index()
self.result.samples = posterior[self.search_parameter_keys]
self.result.log_likelihood_evaluations = np.sum(
trace.log_likelihood.likelihood.values, axis=-1
).flatten()
self.result.sampler_output = np.nan
self.calculate_autocorrelation(self.result.samples)
self.result.log_evidence = np.nan
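With return_inferencedata=True, pymc3.sample returns an ArviZ InferenceData object whose posterior group is an xarray Dataset, which is why the per-parameter trace indexing above collapses into a single to_dataframe() call. A minimal sketch, assuming pymc3>=3.10 (model and variable names hypothetical):

import pymc3

with pymc3.Model():
    mu = pymc3.Normal("mu", mu=0, sigma=1)
    trace = pymc3.sample(draws=500, return_inferencedata=True)

# The posterior is indexed by (chain, draw); flatten it to a DataFrame
posterior = trace.posterior.to_dataframe().reset_index()
samples = posterior[["mu"]]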
@@ -79,6 +79,12 @@ class Pymultinest(NestedSampler):
temporary_directory=True,
**kwargs
):
try:
from mpi4py import MPI
using_mpi = MPI.COMM_WORLD.Get_size() > 1
except ImportError:
using_mpi = False
super(Pymultinest, self).__init__(
likelihood=likelihood,
priors=priors,
@@ -92,7 +98,6 @@
)
self._apply_multinest_boundaries()
self.exit_code = exit_code
using_mpi = len([key for key in os.environ if "MPI" in key])
if using_mpi and temporary_directory:
logger.info(
"Temporary directory incompatible with MPI, "
@@ -111,15 +116,15 @@
kwargs["n_live_points"] = kwargs.pop(equiv)
def _verify_kwargs_against_default_kwargs(self):
""" Check the kwargs """
"""Check the kwargs"""
self.outputfiles_basename = self.kwargs.pop("outputfiles_basename", None)
# for PyMultiNest >=2.9 the n_params kwarg cannot be None
if self.kwargs["n_params"] is None:
self.kwargs["n_params"] = self.ndim
if self.kwargs['dump_callback'] is None:
self.kwargs['dump_callback'] = self._dump_callback
if self.kwargs["dump_callback"] is None:
self.kwargs["dump_callback"] = self._dump_callback
NestedSampler._verify_kwargs_against_default_kwargs(self)
def _dump_callback(self, *args, **kwargs):
@@ -166,7 +171,7 @@
)
def write_current_state_and_exit(self, signum=None, frame=None):
""" Write current state and exit on exit_code """
"""Write current state and exit on exit_code"""
logger.info(
"Run interrupted by signal {}: checkpoint and exit on {}".format(
signum, self.exit_code
@@ -187,11 +192,13 @@
self.outputfiles_basename, self.temporary_outputfiles_basename
)
)
if self.outputfiles_basename.endswith('/'):
if self.outputfiles_basename.endswith("/"):
outputfiles_basename_stripped = self.outputfiles_basename[:-1]
else:
outputfiles_basename_stripped = self.outputfiles_basename
distutils.dir_util.copy_tree(self.temporary_outputfiles_basename, outputfiles_basename_stripped)
distutils.dir_util.copy_tree(
self.temporary_outputfiles_basename, outputfiles_basename_stripped
)
def _move_temporary_directory_to_proper_path(self):
"""
@@ -241,9 +248,9 @@
return self.result
def _check_and_load_sampling_time_file(self):
self.time_file_path = self.kwargs["outputfiles_basename"] + '/sampling_time.dat'
self.time_file_path = self.kwargs["outputfiles_basename"] + "/sampling_time.dat"
if os.path.exists(self.time_file_path):
with open(self.time_file_path, 'r') as time_file:
with open(self.time_file_path, "r") as time_file:
self.total_sampling_time = float(time_file.readline())
else:
self.total_sampling_time = 0
@@ -253,7 +260,7 @@
new_sampling_time = current_time - self.start_time
self.total_sampling_time += new_sampling_time
self.start_time = current_time
with open(self.time_file_path, 'w') as time_file:
with open(self.time_file_path, "w") as time_file:
time_file.write(str(self.total_sampling_time))
def _clean_up_run_directory(self):
@@ -271,16 +278,18 @@
estimate of `remaining_prior_volume / N`.
"""
import pandas as pd
dir_ = self.kwargs["outputfiles_basename"]
dead_points = np.genfromtxt(dir_ + "/ev.dat")
live_points = np.genfromtxt(dir_ + "/phys_live.points")
nlive = self.kwargs["n_live_points"]
final_log_prior_volume = - len(dead_points) / nlive - np.log(nlive)
final_log_prior_volume = -len(dead_points) / nlive - np.log(nlive)
live_points = np.insert(live_points, -1, final_log_prior_volume, axis=-1)
nested_samples = pd.DataFrame(
np.vstack([dead_points, live_points]).copy(),
columns=self.search_parameter_keys + ["log_likelihood", "log_prior_volume", "mode"]
columns=self.search_parameter_keys
+ ["log_likelihood", "log_prior_volume", "mode"],
)
return nested_samples
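The MPI detection added near the top of this file asks mpi4py for the communicator size instead of scanning os.environ for "MPI", which could misfire on unrelated environment variables. The detection in isolation:

try:
    from mpi4py import MPI
    # True only when launched under mpirun/mpiexec with >1 process
    using_mpi = MPI.COMM_WORLD.Get_size() > 1
except ImportError:
    # mpi4py is not installed, so this cannot be an MPI run
    using_mpi = False
print(using_mpi)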
@@ -73,6 +73,6 @@ def loaded_modules_dict():
module_names = list(sys.modules.keys())
vdict = {}
for key in module_names:
if "." not in key:
if "." not in str(key):
vdict[key] = str(getattr(sys.modules[key], "__version__", "N/A"))
return vdict
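The str(key) cast guards against a non-string key in sys.modules, for which `"." not in key` would raise TypeError. The fixed helper is runnable standalone:

import sys

def loaded_modules_dict():
    vdict = {}
    for key in list(sys.modules.keys()):
        # Cast first: a non-string key would make `"." in key` raise
        if "." not in str(key):
            vdict[key] = str(getattr(sys.modules[key], "__version__", "N/A"))
    return vdict

print(len(loaded_modules_dict()))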
@@ -11,7 +11,7 @@ from ..core.utils import logger, solar_mass, command_line_args
from ..core.prior import DeltaFunction
from .utils import lalsim_SimInspiralTransformPrecessingNewInitialConditions
from .eos.eos import SpectralDecompositionEOS, EOSFamily, IntegrateTOV
from .cosmology import get_cosmology
from .cosmology import get_cosmology, z_at_value
def redshift_to_luminosity_distance(redshift, cosmology=None):
@@ -27,7 +27,6 @@ def redshift_to_comoving_distance(redshift, cosmology=None):
@np.vectorize
def luminosity_distance_to_redshift(distance, cosmology=None):
from astropy import units
from astropy.cosmology import z_at_value
cosmology = get_cosmology(cosmology)
return z_at_value(cosmology.luminosity_distance, distance * units.Mpc)
@@ -35,7 +34,6 @@ def luminosity_distance_to_redshift(distance, cosmology=None):
@np.vectorize
def comoving_distance_to_redshift(distance, cosmology=None):
from astropy import units
from astropy.cosmology import z_at_value
cosmology = get_cosmology(cosmology)
return z_at_value(cosmology.comoving_distance, distance * units.Mpc)
@@ -63,3 +63,15 @@ def set_cosmology(cosmology=None):
COSMOLOGY[1] = cosmology.name
else:
COSMOLOGY[1] = repr(cosmology)
def z_at_value(func, fval, **kwargs):
"""
Wrapped version of :code:`astropy.cosmology.z_at_value` that returns a float
rather than the :code:`astropy Quantity` returned for :code:`astropy>=5`.
See https://docs.astropy.org/en/stable/api/astropy.cosmology.z_at_value.html#astropy.cosmology.z_at_value
for detailed documentation.
"""
from astropy.cosmology import z_at_value
return float(z_at_value(func=func, fval=fval, **kwargs))
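A usage sketch of the wrapper (assumes astropy is installed; the printed value is approximate):

from astropy import units
from astropy.cosmology import Planck15

from bilby.gw.cosmology import z_at_value

# Returns a plain float even on astropy>=5, where the upstream
# function returns a dimensionless Quantity
redshift = z_at_value(Planck15.luminosity_distance, 100 * units.Mpc)
print(type(redshift), redshift)  # <class 'float'> ~0.02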
@@ -9,5 +9,5 @@ length = 0.6
latitude = 52 + 14. / 60 + 42.528 / 3600
longitude = 9 + 48. / 60 + 25.894 / 3600
elevation = 114.425
xarm_azimuth = 115.9431
yarm_azimuth = 21.6117
xarm_azimuth = 21.6117
yarm_azimuth = 115.9431
@@ -15,3 +15,5 @@ longitude = 137 + 18 / 60 + 21.44171 / 3600
elevation = 414.181
xarm_azimuth = 90 - 60.39623489157727
yarm_azimuth = 90 + 29.60357629670688
xarm_tilt = 0.0031414
yarm_tilt = -0.0036270
@@ -11,3 +11,5 @@ longitude = -(90 + 46. / 60 + 27.2654 / 3600)
elevation = -6.574
xarm_azimuth = 197.7165
yarm_azimuth = 287.7165
xarm_tilt = -3.121e-4
yarm_tilt = -6.107e-4
@@ -11,10 +11,10 @@ from .psd import PowerSpectralDensity
class InterferometerList(list):
""" A list of Interferometer objects """
"""A list of Interferometer objects"""
def __init__(self, interferometers):
""" Instantiate a InterferometerList
"""Instantiate a InterferometerList
The InterferometerList is a list of Interferometer objects, each
object has the data used in evaluating the likelihood
@@ -32,7 +32,9 @@ class InterferometerList(list):
if type(ifo) == str:
ifo = get_empty_interferometer(ifo)
if type(ifo) not in [Interferometer, TriangularInterferometer]:
raise TypeError("Input list of interferometers are not all Interferometer objects")
raise TypeError(
"Input list of interferometers are not all Interferometer objects"
)
else:
self.append(ifo)
self._check_interferometers()
@@ -45,28 +47,37 @@
If both checks fail, then a ValueError is raised.
"""
consistent_attributes = ['duration', 'start_time', 'sampling_frequency']
consistent_attributes = ["duration", "start_time", "sampling_frequency"]
for attribute in consistent_attributes:
x = [getattr(interferometer.strain_data, attribute)
for interferometer in self]
x = [
getattr(interferometer.strain_data, attribute)
for interferometer in self
]
try:
if not all(y == x[0] for y in x):
ifo_strs = ["{ifo}[{attribute}]={value}".format(
ifo=ifo.name,
attribute=attribute,
value=getattr(ifo.strain_data, attribute))
for ifo in self]
ifo_strs = [
"{ifo}[{attribute}]={value}".format(
ifo=ifo.name,
attribute=attribute,
value=getattr(ifo.strain_data, attribute),
)
for ifo in self
]
raise ValueError(
"The {} of all interferometers are not the same: {}".format(
attribute, ', '.join(ifo_strs)))
attribute, ", ".join(ifo_strs)
)
)
except ValueError as e:
if not all(math.isclose(y, x[0], abs_tol=1e-5) for y in x):
raise ValueError(e)
else:
logger.warning(e)
def set_strain_data_from_power_spectral_densities(self, sampling_frequency, duration, start_time=0):
""" Set the `Interferometer.strain_data` from the power spectral densities of the detectors
def set_strain_data_from_power_spectral_densities(
self, sampling_frequency, duration, start_time=0
):
"""Set the `Interferometer.strain_data` from the power spectral densities of the detectors
This uses the `interferometer.power_spectral_density` object to set
the `strain_data` to a noise realization. See
@@ -83,12 +94,16 @@
"""
for interferometer in self:
interferometer.set_strain_data_from_power_spectral_density(sampling_frequency=sampling_frequency,
duration=duration,
start_time=start_time)
interferometer.set_strain_data_from_power_spectral_density(
sampling_frequency=sampling_frequency,
duration=duration,
start_time=start_time,
)
def set_strain_data_from_zero_noise(self, sampling_frequency, duration, start_time=0):
""" Set the `Interferometer.strain_data` from the power spectral densities of the detectors
def set_strain_data_from_zero_noise(
self, sampling_frequency, duration, start_time=0
):
"""Set the `Interferometer.strain_data` from the power spectral densities of the detectors
This uses the `interferometer.power_spectral_density` object to set
the `strain_data` to zero noise. See
......@@ -105,11 +120,19 @@ class InterferometerList(list):
"""
for interferometer in self:
interferometer.set_strain_data_from_zero_noise(sampling_frequency=sampling_frequency,
duration=duration,
start_time=start_time)
def inject_signal(self, parameters=None, injection_polarizations=None, waveform_generator=None):
interferometer.set_strain_data_from_zero_noise(
sampling_frequency=sampling_frequency,
duration=duration,
start_time=start_time,
)
def inject_signal(
self,
parameters=None,
injection_polarizations=None,
waveform_generator=None,
raise_error=True,
):
""" Inject a signal into noise in each of the three detectors.
Parameters
@@ -124,6 +147,9 @@
waveform_generator: bilby.gw.waveform_generator.WaveformGenerator
A WaveformGenerator instance using the source model to inject. If
`injection_polarizations` is given, this will be ignored.
raise_error: bool
Whether to raise an error if the injected signal does not fit in
the segment.
Notes
==========
@@ -138,22 +164,29 @@
"""
if injection_polarizations is None:
if waveform_generator is not None:
injection_polarizations = \
waveform_generator.frequency_domain_strain(parameters)
injection_polarizations = waveform_generator.frequency_domain_strain(
parameters
)
else:
raise ValueError(
"inject_signal needs one of waveform_generator or "
"injection_polarizations.")
"injection_polarizations."
)
all_injection_polarizations = list()
for interferometer in self:
all_injection_polarizations.append(
interferometer.inject_signal(parameters=parameters, injection_polarizations=injection_polarizations))
interferometer.inject_signal(
parameters=parameters,
injection_polarizations=injection_polarizations,
raise_error=raise_error,
)
)
return all_injection_polarizations
def save_data(self, outdir, label=None):
""" Creates a save file for the data in plain text format
"""Creates a save file for the data in plain text format
Parameters
==========
@@ -165,7 +198,7 @@
for interferometer in self:
interferometer.save_data(outdir=outdir, label=label)
def plot_data(self, signal=None, outdir='.', label=None):
def plot_data(self, signal=None, outdir=".", label=None):
if utils.command_line_args.bilby_test_mode:
return
@@ -209,13 +242,14 @@
@property
def meta_data(self):
""" Dictionary of the per-interferometer meta_data """
return {interferometer.name: interferometer.meta_data
for interferometer in self}
"""Dictionary of the per-interferometer meta_data"""
return {
interferometer.name: interferometer.meta_data for interferometer in self
}
@staticmethod
def _filename_from_outdir_label_extension(outdir, label, extension="h5"):
return os.path.join(outdir, label + f'.{extension}')
return os.path.join(outdir, label + f".{extension}")
_save_docstring = """ Saves the object to a {format} file
@@ -239,53 +273,66 @@
"""
def to_hdf5(self, outdir='outdir', label='ifo_list'):
def to_hdf5(self, outdir="outdir", label="ifo_list"):
import deepdish
if sys.version_info[0] < 3:
raise NotImplementedError('Pickling of InterferometerList is not supported in Python 2.'
'Use Python 3 instead.')
label = label + '_' + ''.join(ifo.name for ifo in self)
raise NotImplementedError(
"Pickling of InterferometerList is not supported in Python 2."
"Use Python 3 instead."
)
label = label + "_" + "".join(ifo.name for ifo in self)
utils.check_directory_exists_and_if_not_mkdir(outdir)
try:
filename = self._filename_from_outdir_label_extension(outdir, label, "h5")
deepdish.io.save(filename, self)
except AttributeError:
logger.warning("Saving to hdf5 using deepdish failed. Pickle dumping instead.")
logger.warning(
"Saving to hdf5 using deepdish failed. Pickle dumping instead."
)
self.to_pickle(outdir=outdir, label=label)
@classmethod
def from_hdf5(cls, filename=None):
import deepdish
if sys.version_info[0] < 3:
raise NotImplementedError('Pickling of InterferometerList is not supported in Python 2.'
'Use Python 3 instead.')
raise NotImplementedError(
"Pickling of InterferometerList is not supported in Python 2."
"Use Python 3 instead."
)
res = deepdish.io.load(filename)
if res.__class__ == list:
res = cls(res)
if res.__class__ != cls:
raise TypeError('The loaded object is not a InterferometerList')
raise TypeError("The loaded object is not a InterferometerList")
return res
def to_pickle(self, outdir="outdir", label="ifo_list"):
import dill
utils.check_directory_exists_and_if_not_mkdir('outdir')
label = label + '_' + ''.join(ifo.name for ifo in self)
filename = self._filename_from_outdir_label_extension(outdir, label, extension="pkl")
utils.check_directory_exists_and_if_not_mkdir(outdir)
label = label + "_" + "".join(ifo.name for ifo in self)
filename = self._filename_from_outdir_label_extension(
outdir, label, extension="pkl"
)
with open(filename, "wb") as ff:
dill.dump(self, ff)
@classmethod
def from_pickle(cls, filename=None):
import dill
with open(filename, "rb") as ff:
res = dill.load(ff)
if res.__class__ != cls:
raise TypeError('The loaded object is not an InterferometerList')
raise TypeError("The loaded object is not an InterferometerList")
return res
to_hdf5.__doc__ = _save_docstring.format(
format="hdf5", extra=""".. deprecated:: 1.1.0
Use :func:`to_pickle` instead."""
format="hdf5",
extra=""".. deprecated:: 1.1.0
Use :func:`to_pickle` instead.""",
)
to_pickle.__doc__ = _save_docstring.format(
format="pickle", extra=".. versionadded:: 1.1.0"
@@ -295,10 +342,21 @@
class TriangularInterferometer(InterferometerList):
def __init__(self, name, power_spectral_density, minimum_frequency, maximum_frequency,
length, latitude, longitude, elevation, xarm_azimuth, yarm_azimuth,
xarm_tilt=0., yarm_tilt=0.):
def __init__(
self,
name,
power_spectral_density,
minimum_frequency,
maximum_frequency,
length,
latitude,
longitude,
elevation,
xarm_azimuth,
yarm_azimuth,
xarm_tilt=0.0,
yarm_tilt=0.0,
):
super(TriangularInterferometer, self).__init__([])
self.name = name
# for attr in ['power_spectral_density', 'minimum_frequency', 'maximum_frequency']:
@@ -310,15 +368,46 @@ class TriangularInterferometer(InterferometerList):
maximum_frequency = [maximum_frequency] * 3
for ii in range(3):
self.append(Interferometer(
'{}{}'.format(name, ii + 1), power_spectral_density[ii], minimum_frequency[ii], maximum_frequency[ii],
length, latitude, longitude, elevation, xarm_azimuth, yarm_azimuth, xarm_tilt, yarm_tilt))
self.append(
Interferometer(
"{}{}".format(name, ii + 1),
power_spectral_density[ii],
minimum_frequency[ii],
maximum_frequency[ii],
length,
latitude,
longitude,
elevation,
xarm_azimuth,
yarm_azimuth,
xarm_tilt,
yarm_tilt,
)
)
xarm_azimuth += 240
yarm_azimuth += 240
latitude += np.arctan(length * np.sin(xarm_azimuth * np.pi / 180) * 1e3 / utils.radius_of_earth)
longitude += np.arctan(length * np.cos(xarm_azimuth * np.pi / 180) * 1e3 / utils.radius_of_earth)
latitude += (
np.arctan(
length
* np.sin(xarm_azimuth * np.pi / 180)
* 1e3
/ utils.radius_of_earth
)
* 180
/ np.pi
)
longitude += (
np.arctan(
length
* np.cos(xarm_azimuth * np.pi / 180)
* 1e3
/ utils.radius_of_earth
)
* 180
/ np.pi
)
def get_empty_interferometer(name):
@@ -351,34 +440,38 @@ def get_empty_interferometer(name):
interferometer: Interferometer
Interferometer instance
"""
filename = os.path.join(os.path.dirname(__file__), 'detectors', '{}.interferometer'.format(name))
filename = os.path.join(
os.path.dirname(__file__), "detectors", "{}.interferometer".format(name)
)
try:
return load_interferometer(filename)
except OSError:
raise ValueError('Interferometer {} not implemented'.format(name))
raise ValueError("Interferometer {} not implemented".format(name))
def load_interferometer(filename):
"""Load an interferometer from a file."""
parameters = dict()
with open(filename, 'r') as parameter_file:
with open(filename, "r") as parameter_file:
lines = parameter_file.readlines()
for line in lines:
if line[0] == '#' or line[0] == '\n':
if line[0] == "#" or line[0] == "\n":
continue
split_line = line.split('=')
split_line = line.split("=")
key = split_line[0].strip()
value = eval('='.join(split_line[1:]))
value = eval("=".join(split_line[1:]))
parameters[key] = value
if 'shape' not in parameters.keys():
if "shape" not in parameters.keys():
ifo = Interferometer(**parameters)
logger.debug('Assuming L shape for {}'.format('name'))
elif parameters['shape'].lower() in ['l', 'ligo']:
parameters.pop('shape')
logger.debug("Assuming L shape for {}".format("name"))
elif parameters["shape"].lower() in ["l", "ligo"]:
parameters.pop("shape")
ifo = Interferometer(**parameters)
elif parameters['shape'].lower() in ['triangular', 'triangle']:
parameters.pop('shape')
elif parameters["shape"].lower() in ["triangular", "triangle"]:
parameters.pop("shape")
ifo = TriangularInterferometer(**parameters)
else:
raise IOError("{} could not be loaded. Invalid parameter 'shape'.".format(filename))
raise IOError(
"{} could not be loaded. Invalid parameter 'shape'.".format(filename)
)
return ifo
@@ -17,7 +17,7 @@ conversion_dict = {'pressure': {'cgs': C_SI ** 4. / G_SI * 10., 'si': C_SI ** 4.
'pseudo_enthalpy': {'dimensionless': 1.},
'mass': {'g': C_SI ** 2. / G_SI * 1000, 'kg': C_SI ** 2. / G_SI, 'geom': 1.,
'm_sol': C_SI ** 2. / G_SI / MSUN_SI},
'radius': {'cm': 100., 'mass': 1., 'km': .001},
'radius': {'cm': 100., 'm': 1., 'km': .001},
'tidal_deformability': {'geom': 1.}}
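The 'radius' hunk above corrects a mislabeled unit key: the SI factor belongs under 'm', not 'mass'. A sketch of how such a factor table converts between units, assuming the factors map geometric (metre-based) values to each named unit (the helper is hypothetical):

conversion_dict = {"radius": {"cm": 100.0, "m": 1.0, "km": 0.001}}

def convert_radius(value, from_unit, to_unit):
    # Divide out the source-unit factor, multiply in the target's
    factors = conversion_dict["radius"]
    return value * factors[to_unit] / factors[from_unit]

print(convert_radius(12.0, "km", "m"))  # 12000.0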
from .base import GravitationalWaveTransient
from .basic import BasicGravitationalWaveTransient
from .roq import BilbyROQParamsRangeError, ROQGravitationalWaveTransient
from .multiband import MBGravitationalWaveTransient
from ..source import lal_binary_black_hole
from ..waveform_generator import WaveformGenerator
def get_binary_black_hole_likelihood(interferometers):
""" A wrapper to quickly set up a likelihood for BBH parameter estimation
Parameters
==========
interferometers: {bilby.gw.detector.InterferometerList, list}
A list of `bilby.detector.Interferometer` instances, typically the
output of either `bilby.detector.get_interferometer_with_open_data`
or `bilby.detector.get_interferometer_with_fake_noise_and_injection`
Returns
=======
bilby.GravitationalWaveTransient: The likelihood to pass to `run_sampler`
"""
waveform_generator = WaveformGenerator(
duration=interferometers.duration,
sampling_frequency=interferometers.sampling_frequency,
frequency_domain_source_model=lal_binary_black_hole,
waveform_arguments={'waveform_approximant': 'IMRPhenomPv2',
'reference_frequency': 50})
return GravitationalWaveTransient(interferometers, waveform_generator)
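A usage sketch for this wrapper (detector names are standard bilby identifiers; the noise setup is illustrative):

import bilby

ifos = bilby.gw.detector.InterferometerList(["H1", "L1"])
ifos.set_strain_data_from_power_spectral_densities(
    sampling_frequency=2048, duration=4, start_time=0
)
likelihood = bilby.gw.likelihood.get_binary_black_hole_likelihood(ifos)
# `likelihood` can then be passed directly to bilby.run_sampler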