Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Commits on Source (82)
Showing with 1581 additions and 608 deletions
......@@ -172,7 +172,7 @@ python-3.10:
.test-sampler: &test-sampler
stage: test
script:
- python -m pip install .
- python -m pip install .[all]
- python -m pip list installed
- pytest test/integration/sampler_run_test.py --durations 10 -v
......
......@@ -9,6 +9,7 @@ Aditya Vijaykumar
Andrew Kim
Andrew Miller
Antoni Ramos-Buades
Apratim Ganguly
Avi Vajpeyi
Bruce Edelman
Carl-Johan Haster
......@@ -40,6 +41,7 @@ Karl Wette
Katerina Chatziioannou
Kaylee de Soto
Khun Sang Phukon
Kruthi Krishna
Kshipraa Athar
Leslie Wade
Liting Xiao
......@@ -57,6 +59,7 @@ Moritz Huebner
Nicola De Lillo
Nikhil Sarin
Nirban Bose
Noah Wolfe
Olivia Wilk
Paul Easter
Paul Lasky
......
# All notable changes will be documented in this file
## [1.4.0] 2022-11-18
Version 1.4.0 release of Bilby
The main changes in this release are support for more recent versions of `dynesty` (!1138)
and `nessai` (!1161), and the addition of the
`RelativeBinningGravitationalWaveTransientLikelihood` (!1105)
(see [arXiv:1806.08792](https://arxiv.org/abs/1806.08792) for details).
### Added
- Per-detector likelihood calculations (!1149)
- `bilby.gw.likelihood.relative.RelativeBinningGravitationalWaveTransient` (!1105)
### Changes
- Reset the timer for `PyMultiNest` when overwriting an existing checkpoint directory (!1163)
- Cache the computed noise log likelihood for the `GravitationalWaveTransient` (!1179)
- Set the reference chirp mass for the multi-banded likelihood from the prior when not specified (!1169)
- Bugfix in the name of the saved ASD file in `Interferometer.save_data` (!1176)
- Modify the window length for stationarity tests for `ptemcee` (!1146)
- Explicit support for `nessai>=0.7.0` (!1161)
- Allow prior arguments read from a string to be functions (!1144)
- Support `dynesty>=1.1.0` (!1138)
## [1.3.0] 2022-10-23
Version 1.3.0 release of Bilby
......
......@@ -53,6 +53,7 @@ as requested in their associated documentation.
* `pymultinest <https://github.com/JohannesBuchner/PyMultiNest>`__
* `cpnest <https://github.com/johnveitch/cpnest>`__
* `emcee <https://github.com/dfm/emcee>`__
* `nessai <https://github.com/mj-will/nessai>`__
* `ptemcee <https://github.com/willvousden/ptemcee>`__
* `ptmcmcsampler <https://github.com/jellis18/PTMCMCSampler>`__
* `pypolychord <https://github.com/PolyChord/PolyChordLite>`__
......
......@@ -408,6 +408,10 @@ class Prior(object):
The string is interpreted as a call to instantiate another prior
class, Bilby will attempt to recursively construct that prior,
e.g., Uniform(minimum=0, maximum=1), my.custom.PriorClass(**kwargs).
- Else if the string contains a ".":
It is treated as the dotted path to a Python function and imported, e.g.,
"some_module.some_function" resolves to the result of
:code:`import some_module; some_module.some_function`
- Else:
Try to evaluate the string using `eval`. Only built-in functions
and numpy methods can be used, e.g., np.pi / 2, 1.57.
......@@ -448,10 +452,17 @@ class Prior(object):
try:
val = eval(val, dict(), dict(np=np, inf=np.inf, pi=np.pi))
except NameError:
raise TypeError(
"Cannot evaluate prior, "
"failed to parse argument {}".format(val)
)
if "." in val:
module = '.'.join(val.split('.')[:-1])
func = val.split('.')[-1]
new_val = getattr(import_module(module), func, val)
if val == new_val:
raise TypeError(
"Cannot evaluate prior, "
f"failed to parse argument {val}"
)
else:
val = new_val
return val
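As a minimal sketch of the fallback added above (this mirrors only the new dotted-path branch, not the full argument-parsing logic of `Prior`), a string such as "some_module.some_function" is resolved to the imported callable:

```python
from importlib import import_module


def resolve_dotted_name(val):
    """Resolve "module.attr" to the imported attribute (sketch of the fallback above)."""
    module, _, func = val.rpartition(".")
    try:
        return getattr(import_module(module), func)
    except (ImportError, AttributeError):
        raise TypeError(f"Cannot evaluate prior, failed to parse argument {val}")


print(resolve_dotted_name("numpy.exp"))  # <ufunc 'exp'>
```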
......
......@@ -1882,9 +1882,13 @@ class ResultList(list):
self._error_or_warning_consistency(msg)
def check_consistent_data(self):
if not np.allclose([res.log_noise_evidence for res in self], self[0].log_noise_evidence, atol=1e-8, rtol=0.0)\
and not np.all([np.isnan(res.log_noise_evidence) for res in self]):
if not np.allclose(
[res.log_noise_evidence for res in self],
self[0].log_noise_evidence,
atol=1e-8,
rtol=0.0,
equal_nan=True,
):
msg = "Inconsistent data between results"
self._error_or_warning_consistency(msg)
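A quick illustration of why `equal_nan=True` can replace the separate NaN check that was removed here:

```python
import numpy as np

# Results whose noise log evidences are all NaN are still mutually consistent;
# equal_nan=True handles this case directly inside np.allclose.
values = [np.nan, np.nan, np.nan]
print(np.allclose(values, values[0], atol=1e-8, rtol=0.0))                  # False
print(np.allclose(values, values[0], atol=1e-8, rtol=0.0, equal_nan=True))  # True
```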
......
import datetime
import numpy as np
from ..utils import logger
from .base_sampler import Sampler, signal_wrapper
from .dynesty import Dynesty, _log_likelihood_wrapper, _prior_transform_wrapper
from .dynesty import Dynesty
class DynamicDynesty(Dynesty):
......@@ -14,221 +8,36 @@ class DynamicDynesty(Dynesty):
All positional and keyword arguments (i.e., the args and kwargs) passed to
`run_sampler` will be propagated to `dynesty.DynamicNestedSampler`, see
documentation for that class for further help. Under Other Parameters below,
we list commonly used kwargs and the bilby defaults.
documentation for that class for further help.
Parameters
==========
likelihood: likelihood.Likelihood
A object with a log_l method
priors: bilby.core.prior.PriorDict, dict
Priors to be used in the search.
This has attributes for each parameter to be sampled.
outdir: str, optional
Name of the output directory
label: str, optional
Naming scheme of the output files
use_ratio: bool, optional
Switch to set whether or not you want to use the log-likelihood ratio
or just the log-likelihood
plot: bool, optional
Switch to set whether or not you want to create traceplots
skip_import_verification: bool
Skips the check if the sampler is installed if true. This is
only advisable for testing environments
Other Parameters
------==========
bound: {'none', 'single', 'multi', 'balls', 'cubes'}, ('multi')
Method used to select new points
sample: {'unif', 'rwalk', 'slice', 'rslice', 'hslice'}, ('rwalk')
Method used to sample uniformly within the likelihood constraints,
conditioned on the provided bounds
walks: int
Number of walks taken if using `sample='rwalk'`, defaults to `ndim * 5`
verbose: Bool
If true, print information about the convergence during sampling
check_point: bool,
If true, use check pointing.
check_point_delta_t: float (600)
The approximate checkpoint period (in seconds). Should the run be
interrupted, it can be resumed from the last checkpoint. Set to
`None` to turn-off check pointing
n_check_point: int, optional (None)
The number of steps to take before check pointing (override
check_point_delta_t).
resume: bool
If true, resume run from checkpoint (if available)
For additional documentation see bilby.core.sampler.Dynesty.
"""
default_kwargs = dict(
bound="multi",
sample="rwalk",
verbose=True,
check_point_delta_t=600,
first_update=None,
npdim=None,
rstate=None,
queue_size=None,
pool=None,
use_pool=None,
logl_args=None,
logl_kwargs=None,
ptform_args=None,
ptform_kwargs=None,
enlarge=None,
bootstrap=None,
vol_dec=0.5,
vol_check=2.0,
facc=0.5,
slices=5,
walks=None,
update_interval=0.6,
nlive_init=500,
maxiter_init=None,
maxcall_init=None,
dlogz_init=0.01,
logl_max_init=np.inf,
nlive_batch=500,
wt_function=None,
wt_kwargs=None,
maxiter_batch=None,
maxcall_batch=None,
maxiter=None,
maxcall=None,
maxbatch=None,
stop_function=None,
stop_kwargs=None,
use_stop=True,
save_bounds=True,
print_progress=True,
print_func=None,
live_points=None,
)
def __init__(
self,
likelihood,
priors,
outdir="outdir",
label="label",
use_ratio=False,
plot=False,
skip_import_verification=False,
check_point=True,
n_check_point=None,
check_point_delta_t=600,
resume=True,
**kwargs,
):
super(DynamicDynesty, self).__init__(
likelihood=likelihood,
priors=priors,
outdir=outdir,
label=label,
use_ratio=use_ratio,
plot=plot,
skip_import_verification=skip_import_verification,
**kwargs,
)
self.n_check_point = n_check_point
self.check_point = check_point
self.resume = resume
if self.n_check_point is None:
# If the log_likelihood_eval_time is not calculable then
# check_point is set to False.
if np.isnan(self._log_likelihood_eval_time):
self.check_point = False
n_check_point_raw = check_point_delta_t / self._log_likelihood_eval_time
n_check_point_rnd = int(float(f"{n_check_point_raw:1.0g}"))
self.n_check_point = n_check_point_rnd
self.resume_file = f"{self.outdir}/{self.label}_resume.pickle"
external_sampler_name = "dynesty"
@property
def external_sampler_name(self):
return "dynesty"
def nlive(self):
return self.kwargs["nlive_init"]
@property
def sampler_function_kwargs(self):
keys = [
"nlive_init",
"maxiter_init",
"maxcall_init",
"dlogz_init",
"logl_max_init",
"nlive_batch",
"wt_function",
"wt_kwargs",
"maxiter_batch",
"maxcall_batch",
"maxiter",
"maxcall",
"maxbatch",
"stop_function",
"stop_kwargs",
"use_stop",
"save_bounds",
"print_progress",
"print_func",
"live_points",
]
return {key: self.kwargs[key] for key in keys}
@signal_wrapper
def run_sampler(self):
import dynesty
def sampler_init(self):
from dynesty import DynamicNestedSampler
self._setup_pool()
self.sampler = dynesty.DynamicNestedSampler(
loglikelihood=_log_likelihood_wrapper,
prior_transform=_prior_transform_wrapper,
ndim=self.ndim,
**self.sampler_init_kwargs,
)
if self.check_point:
out = self._run_external_sampler_with_checkpointing()
else:
out = self._run_external_sampler_without_checkpointing()
self._close_pool()
return DynamicNestedSampler
# Flushes the output to force a line break
if self.kwargs["verbose"]:
print("")
# self.result.sampler_output = out
self._generate_result(out)
if self.plot:
self.generate_trace_plots(out)
return self.result
def _run_external_sampler_with_checkpointing(self):
logger.debug("Running sampler with checkpointing")
if self.resume:
resume = self.read_saved_state(continuing=True)
if resume:
logger.info("Resuming from previous run.")
old_ncall = self.sampler.ncall
sampler_kwargs = self.sampler_function_kwargs.copy()
sampler_kwargs["maxcall"] = self.n_check_point
self.start_time = datetime.datetime.now()
while True:
sampler_kwargs["maxcall"] += self.n_check_point
self.sampler.run_nested(**sampler_kwargs)
if self.sampler.ncall == old_ncall:
break
old_ncall = self.sampler.ncall
self.write_current_state()
@property
def sampler_class(self):
from dynesty.dynamicsampler import DynamicSampler
self._remove_checkpoint()
return self.sampler.results
return DynamicSampler
def write_current_state_and_exit(self, signum=None, frame=None):
Sampler.write_current_state_and_exit(self=self, signum=signum, frame=frame)
def finalize_sampler_kwargs(self, sampler_kwargs):
sampler_kwargs["maxcall"] = self.sampler.ncall + self.n_check_point
def _verify_kwargs_against_default_kwargs(self):
Sampler._verify_kwargs_against_default_kwargs(self)
def read_saved_state(self, continuing=False):
resume = super(DynamicDynesty, self).read_saved_state(continuing=continuing)
if not resume:
return resume
else:
self.sampler.loglikelihood.pool = self.pool
return resume
This diff is collapsed.
import os
import sys
import numpy as np
from pandas import DataFrame
from scipy.special import logsumexp
from ..utils import check_directory_exists_and_if_not_mkdir, load_json, logger
from .base_sampler import NestedSampler
from .base_sampler import NestedSampler, signal_wrapper
class Nessai(NestedSampler):
......@@ -19,8 +21,22 @@ class Nessai(NestedSampler):
"""
_default_kwargs = None
_run_kwargs_list = None
sampling_seed_key = "seed"
@property
def run_kwargs_list(self):
"""List of kwargs used in the run method of :code:`FlowSampler`"""
if not self._run_kwargs_list:
from nessai.utils.bilbyutils import get_run_kwargs_list
self._run_kwargs_list = get_run_kwargs_list()
ignored_kwargs = ["save"]
for ik in ignored_kwargs:
if ik in self._run_kwargs_list:
self._run_kwargs_list.remove(ik)
return self._run_kwargs_list
@property
def default_kwargs(self):
"""Default kwargs for nessai.
......@@ -28,32 +44,38 @@ class Nessai(NestedSampler):
Retrieves default values from nessai directly and then includes any
bilby specific defaults. This avoids the need to update bilby when the
defaults change or new kwargs are added to nessai.
Includes the following kwargs that are specific to bilby:
- :code:`nessai_log_level`: allows setting the logging level in nessai
- :code:`nessai_logging_stream`: allows setting the logging stream
- :code:`nessai_plot`: allows toggling the plotting in FlowSampler.run
"""
if not self._default_kwargs:
from inspect import signature
from nessai.flowsampler import FlowSampler
from nessai.nestedsampler import NestedSampler
from nessai.proposal import AugmentedFlowProposal, FlowProposal
kwargs = {}
classes = [
AugmentedFlowProposal,
FlowProposal,
NestedSampler,
FlowSampler,
]
for c in classes:
kwargs.update(
{
k: v.default
for k, v in signature(c).parameters.items()
if v.default is not v.empty
}
)
from nessai.utils.bilbyutils import get_all_kwargs
kwargs = get_all_kwargs()
# Defaults for bilby that will override nessai defaults
bilby_defaults = dict(output=None, exit_code=self.exit_code)
bilby_defaults = dict(
output=None,
exit_code=self.exit_code,
nessai_log_level=None,
nessai_logging_stream="stdout",
nessai_plot=True,
plot_posterior=False, # bilby already produces a posterior plot
log_on_iteration=False, # Use periodic logging by default
logging_interval=60, # Log every 60 seconds
)
kwargs.update(bilby_defaults)
# Kwargs that cannot be set in bilby
remove = [
"save",
"signal_handling",
]
for k in remove:
if k in kwargs:
kwargs.pop(k)
self._default_kwargs = kwargs
return self._default_kwargs
......@@ -72,12 +94,10 @@ class Nessai(NestedSampler):
"""
return self.priors.ln_prob(theta, axis=0)
def run_sampler(self):
from nessai.flowsampler import FlowSampler
from nessai.livepoint import dict_to_live_points, live_points_to_array
def get_nessai_model(self):
"""Get the model for nessai."""
from nessai.livepoint import dict_to_live_points
from nessai.model import Model as BaseModel
from nessai.posterior import compute_weights
from nessai.utils import setup_logger
class Model(BaseModel):
"""A wrapper class to pass our log_likelihood and priors into nessai
......@@ -124,47 +144,115 @@ class Nessai(NestedSampler):
"""Proposal probability for new the point"""
return self.log_prior(x)
# Setup the logger for nessai using the same settings as the bilby logger
setup_logger(
self.outdir, label=self.label, log_level=logger.getEffectiveLevel()
)
@staticmethod
def from_unit_hypercube(x):
"""Map samples from the unit hypercube to the prior."""
theta = {}
for n in self._search_parameter_keys:
theta[n] = self.priors[n].rescale(x[n])
return dict_to_live_points(theta)
@staticmethod
def to_unit_hypercube(x):
"""Map samples from the prior to the unit hypercube."""
theta = {n: x[n] for n in self._search_parameter_keys}
return dict_to_live_points(self.priors.cdf(theta))
model = Model(self.search_parameter_keys, self.priors)
try:
out = FlowSampler(model, **self.kwargs)
out.run(save=True, plot=self.plot)
except TypeError as e:
raise TypeError(f"Unable to initialise nessai sampler with error: {e}")
except (SystemExit, KeyboardInterrupt) as e:
import sys
logger.info(
f"Caught {type(e).__name__} with args {e.args}, "
f"exiting with signal {self.exit_code}"
)
sys.exit(self.exit_code)
return model
def split_kwargs(self):
"""Split kwargs into configuration and run time kwargs"""
kwargs = self.kwargs.copy()
run_kwargs = {}
for k in self.run_kwargs_list:
run_kwargs[k] = kwargs.pop(k)
run_kwargs["plot"] = kwargs.pop("nessai_plot")
return kwargs, run_kwargs
def get_posterior_weights(self):
"""Get the posterior weights for the nested samples"""
from nessai.posterior import compute_weights
_, log_weights = compute_weights(
np.array(self.fs.nested_samples["logL"]),
np.array(self.fs.ns.state.nlive),
)
w = np.exp(log_weights - logsumexp(log_weights))
return w
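Subtracting `logsumexp` before exponentiating keeps the weights finite even when the raw log-weights are very negative; a small numerical sketch (illustrative values only):

```python
import numpy as np
from scipy.special import logsumexp

log_weights = np.array([-1000.0, -1001.0, -1002.0])
# np.exp(log_weights) would underflow to zero for every entry;
# normalising in log space first gives well-defined weights that sum to one.
w = np.exp(log_weights - logsumexp(log_weights))
print(w, w.sum())
```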
def get_nested_samples(self):
"""Get the nested samples dataframe"""
ns = DataFrame(self.fs.nested_samples)
ns.rename(
columns=dict(logL="log_likelihood", logP="log_prior", it="iteration"),
inplace=True,
)
return ns
def update_result(self):
"""Update the result object."""
from nessai.livepoint import live_points_to_array
# Manually set likelihood evaluations because parallelisation breaks the counter
self.result.num_likelihood_evaluations = out.ns.likelihood_evaluations[-1]
self.result.num_likelihood_evaluations = self.fs.ns.total_likelihood_evaluations
self.result.samples = live_points_to_array(
out.posterior_samples, self.search_parameter_keys
self.fs.posterior_samples, self.search_parameter_keys
)
self.result.log_likelihood_evaluations = out.posterior_samples["logL"]
self.result.nested_samples = DataFrame(out.nested_samples)
self.result.nested_samples.rename(
columns=dict(logL="log_likelihood", logP="log_prior"), inplace=True
self.result.log_likelihood_evaluations = self.fs.posterior_samples["logL"]
self.result.nested_samples = self.get_nested_samples()
self.result.nested_samples["weights"] = self.get_posterior_weights()
self.result.log_evidence = self.fs.log_evidence
self.result.log_evidence_err = self.fs.log_evidence_error
@signal_wrapper
def run_sampler(self):
"""Run the sampler.
Nessai is designed to be run in two stages: initialise the sampler
and then call the run method with additional configuration. This means
there are effectively two sets of keyword arguments: one for
initializing the sampler and the other for the run function.
"""
from nessai.flowsampler import FlowSampler
from nessai.utils import setup_logger
kwargs, run_kwargs = self.split_kwargs()
# Setup the logger for nessai, use nessai_log_level if specified, else use
# the level of the bilby logger.
nessai_log_level = kwargs.pop("nessai_log_level")
if nessai_log_level is None or nessai_log_level == "bilby":
nessai_log_level = logger.getEffectiveLevel()
nessai_logging_stream = kwargs.pop("nessai_logging_stream")
setup_logger(
self.outdir,
label=self.label,
log_level=nessai_log_level,
stream=nessai_logging_stream,
)
_, log_weights = compute_weights(
np.array(self.result.nested_samples.log_likelihood),
np.array(out.ns.state.nlive),
# Get the nessai model
model = self.get_nessai_model()
# Configure the sampler
self.fs = FlowSampler(
model,
signal_handling=False, # Disable signal handling so it can be handled by bilby
**kwargs,
)
self.result.nested_samples["weights"] = np.exp(log_weights)
self.result.log_evidence = out.ns.log_evidence
self.result.log_evidence_err = np.sqrt(out.ns.information / out.ns.nlive)
# Run the sampler
self.fs.run(**run_kwargs)
# Update the result
self.update_result()
return self.result
def _translate_kwargs(self, kwargs):
"""Translate the keyword arguments"""
super()._translate_kwargs(kwargs)
if "nlive" not in kwargs:
for equiv in self.npoints_equiv_kwargs:
......@@ -178,10 +266,7 @@ class Nessai(NestedSampler):
kwargs["n_pool"] = self._npool
def _verify_kwargs_against_default_kwargs(self):
"""
Set the directory where the output will be written
and check resume and checkpoint status.
"""
"""Verify the keyword arguments"""
if "config_file" in self.kwargs:
d = load_json(self.kwargs["config_file"], None)
self.kwargs.update(d)
......@@ -190,10 +275,6 @@ class Nessai(NestedSampler):
if not self.kwargs["plot"]:
self.kwargs["plot"] = self.plot
if self.kwargs["n_pool"] == 1 and self.kwargs["max_threads"] == 1:
logger.warning("Setting pool to None (n_pool=1 & max_threads=1)")
self.kwargs["n_pool"] = None
if not self.kwargs["output"]:
self.kwargs["output"] = os.path.join(
self.outdir, f"{self.label}_nessai", ""
......@@ -202,5 +283,21 @@ class Nessai(NestedSampler):
check_directory_exists_and_if_not_mkdir(self.kwargs["output"])
NestedSampler._verify_kwargs_against_default_kwargs(self)
def write_current_state(self):
"""Write the current state of the sampler"""
self.fs.ns.checkpoint()
def write_current_state_and_exit(self, signum=None, frame=None):
"""
Overwrites the base class to make sure that :code:`Nessai` terminates
properly.
"""
if hasattr(self, "fs"):
self.fs.terminate_run(code=signum)
else:
logger.warning("Sampler is not initialized")
self._log_interruption(signum=signum)
sys.exit(self.exit_code)
def _setup_pool(self):
pass
......@@ -47,7 +47,7 @@ class Ptemcee(MCMCSampler):
list commonly used kwargs and the bilby defaults.
Parameters
==========
----------
nsamples: int, (5000)
The requested number of samples. Note, in cases where the
autocorrelation parameter is difficult to measure, it is possible to
......@@ -116,7 +116,7 @@ class Ptemcee(MCMCSampler):
Other Parameters
================
----------------
nwalkers: int, (200)
The number of walkers
nsteps: int, (100)
......@@ -296,7 +296,7 @@ class Ptemcee(MCMCSampler):
"""Draw the initial positions from the prior
Returns
=======
-------
pos0: list
The initial positions of the walkers, with shape (ntemps, nwalkers, ndim)
......@@ -315,7 +315,7 @@ class Ptemcee(MCMCSampler):
See pos0 in the class initialization for details.
Returns
=======
-------
pos0: list
The initial positions of the walkers, with shape (ntemps, nwalkers, ndim)
......@@ -504,6 +504,7 @@ class Ptemcee(MCMCSampler):
t0 = datetime.datetime.now()
logger.info("Starting to sample")
while True:
for (pos0, log_posterior, log_likelihood) in sampler.sample(
self.pos0,
......@@ -531,6 +532,7 @@ class Ptemcee(MCMCSampler):
)
self.pos0 = pos0
self.chain_array[:, self.iteration, :] = pos0[0, :, :]
self.log_likelihood_array[:, :, self.iteration] = log_likelihood
self.log_posterior_array[:, :, self.iteration] = log_posterior
......@@ -538,6 +540,9 @@ class Ptemcee(MCMCSampler):
self.log_posterior_array[:, :, : self.iteration], axis=1
)
# (nwalkers, ntemps, iterations)
# so mean_log_posterior is shaped (nwalkers, iterations)
# Calculate time per iteration
self.time_per_check.append((datetime.datetime.now() - t0).total_seconds())
t0 = datetime.datetime.now()
......@@ -725,25 +730,87 @@ def check_iteration(
beta_list,
tau_list,
tau_list_n,
Q_list,
gelman_rubin_list,
mean_log_posterior,
verbose=True,
):
"""Per-iteration logic to calculate the convergence check
"""Per-iteration logic to calculate the convergence check.
To check convergence, this function does the following:
1. Calculate the autocorrelation time (tau) for each dimension for each walker,
corresponding to those dimensions in search_parameter_keys that aren't
specifically excluded in ci.ignore_keys_for_tau.
a. Store the average tau for each dimension, averaged over each walker.
2. Calculate the Gelman-Rubin statistic (see `get_Q_convergence`), measuring
the convergence of the ensemble of walkers.
3. Calculate the number of effective samples; we aggregate the total number
of burned-in samples (amongst all walkers), divided by a multiple of the
current maximum average autocorrelation time. Tuned by `ci.burn_in_nact`
and `ci.thin_by_nact`.
4. If the Gelman-Rubin statistic < `ci.Q_tol` and `ci.nsamples` < the
number of effective samples, we say that our ensemble is converged,
setting `converged = True`.
5. For some number of the latest steps (set by the autocorrelation time
and the GRAD_WINDOW_LENGTH parameter), we find the maximum gradient
of the autocorrelation time over all of our dimensions, over all walkers
(autocorrelation time is already averaged over walkers) and the maximum
value of the gradient of the mean log posterior over iterations, over
all walkers.
6. If the maximum gradient in tau is less than `ci.gradient_tau` and the
maximum gradient in the mean log posterior is less than
`ci.gradient_mean_log_posterior`, we set `tau_usable = True`.
7. If both `converged` and `tau_usable` are true, we return `stop = True`,
indicating that our ensemble is converged + burnt in on this
iteration.
8. Also prints progress! (see `print_progress`)
Notes
-----
The gradient of tau is computed with a Savgol-Filter, over windows in
sample number of length `GRAD_WINDOW_LENGTH`. This value must be an odd integer.
For `ndim > 3`, we calculate this as the nearest odd integer to ndim.
For `ndim <= 3`, we calculate this as the nearest odd integer to nwalkers, as
typically a much larger window length than polynomial order (default 2) leads
to more stable smoothing.
Parameters
==========
----------
iteration: int
Number indexing the current iteration, at which we are checking
convergence.
samples: np.ndarray
Array of ensemble MCMC samples, shaped like (number of walkers, number
of MCMC steps, number of dimensions).
sampler: bilby.core.sampler.Ptemcee
Bilby Ptemcee sampler object; in particular, this function uses the list
of walker temperatures stored in `sampler.betas`.
convergence_inputs: bilby.core.sampler.ptemcee.ConvergenceInputs
A named tuple of the convergence checking inputs
search_parameter_keys: list
A list of the search parameter keys
time_per_check, tau_list, tau_list_n: list
Lists used for tracking the run
beta_list: list
List of floats storing the walker inverse temperatures.
tau_list: list
List of average autocorrelation times for each dimension, averaged
over walkers, at each checked iteration. So, an effective shape
of (number of iterations so far, number of dimensions).
tau_list_n: list
List of iteration numbers, enumerating the first "axis" of tau_list.
E.g. if tau_list_n[1] = 5, this means that the list found at
tau_list[1] was calculated on iteration number 5.
gelman_rubin_list: list (floats)
list of values of the Gelman-Rubin statistic; the value calculated
in this call of check_iteration is appended to the gelman_rubin_list.
mean_log_posterior: np.ndarray
Float array shaped like (number of walkers, number of MCMC steps),
with the log of the posterior, averaged over the dimensions.
verbose: bool
Whether to print the output
Returns
=======
-------
stop: bool
A boolean flag, True if the stopping criteria has been met
burn: int
......@@ -757,14 +824,9 @@ def check_iteration(
"""
ci = convergence_inputs
# Note: nsteps is the number of steps in the samples while iterations is
# the current iteration number. So iteration > nsteps by the number
# of discards
nwalkers, nsteps, ndim = samples.shape
nwalkers, nsteps, ndim = samples.shape
tau_array = calculate_tau_array(samples, search_parameter_keys, ci)
# Maximum over parameters, mean over walkers
tau = np.max(np.mean(tau_array, axis=0))
# Apply multiplicitive safety factor
......@@ -775,8 +837,8 @@ def check_iteration(
tau_list.append(list(np.mean(tau_array, axis=0)))
tau_list_n.append(iteration)
Q = get_Q_convergence(samples)
Q_list.append(Q)
gelman_rubin_statistic = get_Q_convergence(samples)
gelman_rubin_list.append(gelman_rubin_statistic)
if np.isnan(tau) or np.isinf(tau):
if verbose:
......@@ -791,7 +853,7 @@ def check_iteration(
np.nan,
False,
convergence_inputs,
Q,
gelman_rubin_statistic,
)
return False, np.nan, np.nan, np.nan, np.nan
......@@ -805,18 +867,24 @@ def check_iteration(
nsamples_effective = int(nwalkers * (nsteps - nburn) / thin)
# Calculate convergence boolean
converged = Q < ci.Q_tol and ci.nsamples < nsamples_effective
converged = gelman_rubin_statistic < ci.Q_tol and ci.nsamples < nsamples_effective
logger.debug(
f"Convergence: Q<Q_tol={Q < ci.Q_tol}, "
f"nsamples<nsamples_effective={ci.nsamples < nsamples_effective}"
"Convergence: Q<Q_tol={}, nsamples<nsamples_effective={}".format(
gelman_rubin_statistic < ci.Q_tol, ci.nsamples < nsamples_effective
)
)
GRAD_WINDOW_LENGTH = nwalkers + 1
GRAD_WINDOW_LENGTH = 2 * ((ndim + 1) // 2) + 1
if GRAD_WINDOW_LENGTH <= 3:
GRAD_WINDOW_LENGTH = 2 * (nwalkers // 2) + 1
nsteps_to_check = ci.autocorr_tau * np.max([2 * GRAD_WINDOW_LENGTH, tau_int])
lower_tau_index = np.max([0, len(tau_list) - nsteps_to_check])
check_taus = np.array(tau_list[lower_tau_index:])
if not np.any(np.isnan(check_taus)) and check_taus.shape[0] > GRAD_WINDOW_LENGTH:
gradient_tau = get_max_gradient(check_taus, axis=0, window_length=11)
gradient_tau = get_max_gradient(
check_taus, axis=0, window_length=GRAD_WINDOW_LENGTH
)
if gradient_tau < ci.gradient_tau:
logger.debug(
......@@ -876,13 +944,51 @@ def check_iteration(
gradient_mean_log_posterior,
tau_usable,
convergence_inputs,
Q,
gelman_rubin_statistic,
)
stop = converged and tau_usable
return stop, nburn, thin, tau_int, nsamples_effective
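As a concrete illustration of the bookkeeping described in the docstring (a sketch with made-up numbers; the burn-in and thinning expressions are assumptions that approximate the logic above, tuned by `ci.burn_in_nact` and `ci.thin_by_nact`):

```python
import numpy as np

# Illustrative values only, not from a real run
nwalkers, ndim, iteration = 200, 4, 5000
tau = 37.2                      # max over dimensions of the walker-averaged ACT
burn_in_nact, thin_by_nact = 50.0, 1.0

nburn = int(burn_in_nact * tau)             # steps discarded as burn-in
thin = max(1, int(thin_by_nact * tau))      # thinning factor
nsteps = iteration
nsamples_effective = int(nwalkers * (nsteps - nburn) / thin)

# Savitzky-Golay window: nearest odd integer to ndim, falling back to an odd
# window based on nwalkers when the result would be too short
GRAD_WINDOW_LENGTH = 2 * ((ndim + 1) // 2) + 1
if GRAD_WINDOW_LENGTH <= 3:
    GRAD_WINDOW_LENGTH = 2 * (nwalkers // 2) + 1

print(nsamples_effective, GRAD_WINDOW_LENGTH)   # 16972 5
```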
def get_max_gradient(x, axis=0, window_length=11, polyorder=2, smooth=False):
"""Calculate the maximum value of the gradient in the input data.
Applies a Savitzky-Golay filter (`scipy.signal.savgol_filter`) to the input
data x, along a particular axis. This filter smooths the data and, as configured
in this function, simultaneously calculates the derivative of the smoothed data.
If smooth=True is provided, it will apply a Savitzky-Golay filter with a
polynomial order of 3 to the input data before applying this filter a second
time and calculating the derivative. This function will return the maximum value
of the derivative returned by the filter.
See https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.savgol_filter.html
for more information on the Savitzky-Golay filter that we use. Some parameter
documentation has been borrowed from this source.
Parameters
----------
x : np.ndarray
Array of input data (can be int or float, as `savgol_filter` casts to float).
axis : int, default = 0
The axis of the input array x over which to calculate the gradient.
window_length : int, default = 11
The length of the filter window (i.e., the number of coefficients to use
in approximating the data).
polyorder : int, default = 2
The order of the polynomial used to fit the samples. polyorder must be less
than window_length.
smooth : bool, default = False
If true, this will smooth the data with a Savitzky-Golay filter before
providing it to the Savitzky-Golay filter for calculating the derivative.
Probably useful if you think your input data is especially noisy.
Returns
-------
max_gradient : float
Maximum value of the gradient.
"""
from scipy.signal import savgol_filter
if smooth:
......@@ -895,12 +1001,54 @@ def get_max_gradient(x, axis=0, window_length=11, polyorder=2, smooth=False):
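A minimal sketch of the derivative estimate that `get_max_gradient` describes (assumed window and polynomial order; `deriv=1` makes `scipy.signal.savgol_filter` return the derivative of the smoothed data):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
# e.g. autocorrelation-time estimates for 3 dimensions over 101 checks
x = np.cumsum(rng.normal(size=(101, 3)), axis=0)

window_length, polyorder = 11, 2
gradient = savgol_filter(x, window_length, polyorder, deriv=1, axis=0)
max_gradient = np.max(gradient)
print(max_gradient)
```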
def get_Q_convergence(samples):
"""Calculate the Gelman-Rubin statistic as an estimate of convergence for
an ensemble of MCMC walkers.
Calculates the Gelman-Rubin statistic, from Gelman and Rubin (1992).
See section 2.2 of Gelman and Rubin (1992), at
https://doi.org/10.1214/ss/1177011136.
There is also a good description of this statistic in section 7.4.2
of "Advanced Statistical Computing" (Peng 2021), in-progress course notes,
currently found at
https://bookdown.org/rdpeng/advstatcomp/monitoring-convergence.html.
As of this writing, L in this resource refers to the number of sampling
steps remaining after some have been discarded to achieve burn-in,
equivalent to nsteps here. Paraphrasing, we compare the variance between
our walkers (chains) to the variance within each walker (compare
inter-walker vs. intra-walker variance). We do this because our walkers
should be indistinguishable from one another when they reach a steady-state,
i.e. convergence. Looking at V-hat in the definition of this function, we
can see that as nsteps -> infinity, B (inter-chain variance) -> 0,
R -> 1; so, R >~ 1 is a good condition for the convergence of our ensemble.
In practice, this function calculates the Gelman-Rubin statistic for
each dimension, and then returns the largest of these values. This
means that we can be sure that, once the walker with the largest such value
achieves a Gelman-Rubin statistic of >~ 1, the others have as well.
Parameters
----------
samples: np.ndarray
Array of ensemble MCMC samples, shaped like (number of walkers, number
of MCMC steps, number of dimensions).
Returns
-------
Q: float
The largest value of the Gelman-Rubin statistic, from those values
calculated for each dimension. If only one step is represented in
samples, this returns np.inf.
"""
nwalkers, nsteps, ndim = samples.shape
if nsteps > 1:
W = np.mean(np.var(samples, axis=1), axis=0)
per_walker_mean = np.mean(samples, axis=1)
mean = np.mean(per_walker_mean, axis=0)
B = nsteps / (nwalkers - 1.0) * np.sum((per_walker_mean - mean) ** 2, axis=0)
Vhat = (nsteps - 1) / nsteps * W + (nwalkers + 1) / (nwalkers * nsteps) * B
Q_per_dim = np.sqrt(Vhat / W)
return np.max(Q_per_dim)
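For reference, the quantities computed above correspond to the following, per dimension, with $m$ walkers and $n$ retained steps:

$$
W = \frac{1}{m}\sum_{j=1}^{m}\operatorname{Var}(\theta_j), \qquad
B = \frac{n}{m-1}\sum_{j=1}^{m}\left(\bar{\theta}_j-\bar{\theta}\right)^2, \qquad
\hat{V} = \frac{n-1}{n}\,W + \frac{m+1}{m\,n}\,B, \qquad
R = \sqrt{\hat{V}/W},
$$

and the function returns the maximum of $R$ over dimensions.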
......@@ -977,7 +1125,31 @@ def print_progress(
def calculate_tau_array(samples, search_parameter_keys, ci):
"""Compute ACT tau for 0-temperature chains"""
"""Calculate the autocorrelation time for zero-temperature chains.
Calculates the autocorrelation time for each chain, for those parameters/
dimensions that are not explicitly excluded in ci.ignore_keys_for_tau.
Parameters
----------
samples: np.ndarray
Array of ensemble MCMC samples, shaped like (number of walkers, number
of MCMC steps, number of dimensions).
search_parameter_keys: list
A list of the search parameter keys
ci : collections.namedtuple
Collection of settings for convergence tests, including autocorrelation
calculation. If a value in search_parameter_keys is included in
ci.ignore_keys_for_tau, this function will not calculate an
autocorrelation time for any walker along that particular dimension.
Returns
-------
tau_array: np.ndarray
Float array shaped like (nwalkers, ndim) (with all np.inf for any
dimension that is excluded by ci.ignore_keys_for_tau).
"""
import emcee
nwalkers, nsteps, ndim = samples.shape
......@@ -1141,9 +1313,11 @@ def plot_tau(
def plot_mean_log_posterior(mean_log_posterior, outdir, label):
import matplotlib.pyplot as plt
mean_log_posterior[mean_log_posterior < -1e100] = np.nan
ntemps, nsteps = mean_log_posterior.shape
ymax = np.max(mean_log_posterior)
ymin = np.min(mean_log_posterior[:, -100:])
ymax = np.nanmax(mean_log_posterior)
ymin = np.nanmin(mean_log_posterior[:, -100:])
ymax += 0.1 * (ymax - ymin)
ymin -= 0.1 * (ymax - ymin)
......
......@@ -145,6 +145,8 @@ class Pymultinest(_TemporaryFileSamplerMixin, NestedSampler):
self._setup_run_directory()
self._check_and_load_sampling_time_file()
if not self.kwargs["resume"]:
self.total_sampling_time = 0.0
# Overwrite pymultinest's signal handling function
pm_run = importlib.import_module("pymultinest.run")
......
......@@ -830,6 +830,9 @@ def _generate_all_cbc_parameters(sample, defaults, base_conversion,
output_sample = fill_from_fixed_priors(output_sample, priors)
output_sample, _ = base_conversion(output_sample)
if likelihood is not None:
compute_per_detector_log_likelihoods(
samples=output_sample, likelihood=likelihood, npool=npool)
marginalized_parameters = getattr(likelihood, "_marginalized_parameters", list())
if len(marginalized_parameters) > 0:
try:
......@@ -1404,11 +1407,9 @@ def compute_snrs(sample, likelihood, npool=1):
if likelihood is not None:
if isinstance(sample, dict):
likelihood.parameters.update(sample)
signal_polarizations =\
likelihood.waveform_generator.frequency_domain_strain(sample)
signal_polarizations = likelihood.waveform_generator.frequency_domain_strain(sample)
for ifo in likelihood.interferometers:
per_detector_snr = likelihood.calculate_snrs(
signal_polarizations, ifo)
per_detector_snr = likelihood.calculate_snrs(signal_polarizations, ifo)
sample['{}_matched_filter_snr'.format(ifo.name)] =\
per_detector_snr.complex_matched_filter_snr
sample['{}_optimal_snr'.format(ifo.name)] = \
......@@ -1468,6 +1469,117 @@ def _compute_snrs(args):
return snrs
def compute_per_detector_log_likelihoods(samples, likelihood, npool=1, block=10):
"""
Calculate the log likelihoods in each detector.
Parameters
==========
samples: DataFrame
Posterior from run with a marginalised likelihood.
likelihood: bilby.gw.likelihood.GravitationalWaveTransient
Likelihood used during sampling.
npool: int, (default=1)
If given, perform generation (where possible) using a multiprocessing pool
block: int, (default=10)
Size of the blocks to use in multiprocessing
Returns
=======
samples: DataFrame
The posterior with the per-detector log likelihood columns added.
"""
if likelihood is not None:
if not callable(likelihood.compute_per_detector_log_likelihood):
logger.debug('Not computing per-detector log likelihoods.')
return samples
if isinstance(samples, dict):
likelihood.parameters.update(samples)
samples = likelihood.compute_per_detector_log_likelihood()
return samples
elif not isinstance(samples, DataFrame):
raise ValueError("Unable to handle input samples of type {}".format(type(samples)))
from tqdm.auto import tqdm
logger.info('Computing per-detector log likelihoods.')
# Initialize cache dict
cached_samples_dict = dict()
# Store samples to convert for checking
cached_samples_dict["_samples"] = samples
# Set up the multiprocessing
if npool > 1:
from ..core.sampler.base_sampler import _initialize_global_variables
pool = multiprocessing.Pool(
processes=npool,
initializer=_initialize_global_variables,
initargs=(likelihood, None, None, False),
)
logger.info(
"Using a pool with size {} for nsamples={}"
.format(npool, len(samples))
)
else:
from ..core.sampler.base_sampler import _sampling_convenience_dump
_sampling_convenience_dump.likelihood = likelihood
pool = None
fill_args = [(ii, row) for ii, row in samples.iterrows()]
ii = 0
pbar = tqdm(total=len(samples), file=sys.stdout)
while ii < len(samples):
if ii in cached_samples_dict:
ii += block
pbar.update(block)
continue
if pool is not None:
subset_samples = pool.map(_compute_per_detector_log_likelihoods,
fill_args[ii: ii + block])
else:
subset_samples = [list(_compute_per_detector_log_likelihoods(xx))
for xx in fill_args[ii: ii + block]]
cached_samples_dict[ii] = subset_samples
ii += block
pbar.update(len(subset_samples))
pbar.close()
if pool is not None:
pool.close()
pool.join()
new_samples = np.concatenate(
[np.array(val) for key, val in cached_samples_dict.items() if key != "_samples"]
)
for ii, key in \
enumerate([f'{ifo.name}_log_likelihood' for ifo in likelihood.interferometers]):
samples[key] = new_samples[:, ii]
return samples
else:
logger.debug('Not computing per-detector log likelihoods.')
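A hypothetical usage sketch of the helper defined above (the `result` and `likelihood` objects are assumed to come from a finished bilby run, and the detector names are illustrative):

```python
from bilby.gw.conversion import compute_per_detector_log_likelihoods

# Adds one '<ifo.name>_log_likelihood' column per interferometer to the posterior
posterior = compute_per_detector_log_likelihoods(
    samples=result.posterior, likelihood=likelihood, npool=4
)
print(posterior[["H1_log_likelihood", "L1_log_likelihood"]].describe())
```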
def _compute_per_detector_log_likelihoods(args):
"""A wrapper of computing the per-detector log likelihoods to enable multiprocessing"""
from ..core.sampler.base_sampler import _sampling_convenience_dump
likelihood = _sampling_convenience_dump.likelihood
ii, sample = args
sample = dict(sample).copy()
likelihood.parameters.update(dict(sample).copy())
new_sample = likelihood.compute_per_detector_log_likelihood()
return tuple((new_sample[key] for key in
[f'{ifo.name}_log_likelihood' for ifo in likelihood.interferometers]))
def generate_posterior_samples_from_marginalized_likelihood(
samples, likelihood, npool=1, block=10, use_cache=True):
"""
......
......@@ -285,7 +285,7 @@ class Interferometer(object):
else:
return 0
def get_detector_response(self, waveform_polarizations, parameters):
def get_detector_response(self, waveform_polarizations, parameters, frequencies=None):
""" Get the detector response for a particular waveform
Parameters
......@@ -294,11 +294,21 @@ class Interferometer(object):
polarizations of the waveform
parameters: dict
parameters describing position and time of arrival of the signal
frequencies: array-like, optional
The frequency values to evaluate the response at. If
not provided, the response is computed using
:code:`self.frequency_array`. If the frequencies are
specified, no frequency masking is performed.
Returns
=======
array_like: the frequency-domain response of the detector to the signal (i.e., the signal as observed in the interferometer)
"""
if frequencies is None:
frequencies = self.frequency_array[self.frequency_mask]
mask = self.frequency_mask
else:
mask = np.ones(len(frequencies), dtype=bool)
signal = {}
for mode in waveform_polarizations.keys():
det_response = self.antenna_response(
......@@ -308,9 +318,7 @@ class Interferometer(object):
parameters['psi'], mode)
signal[mode] = waveform_polarizations[mode] * det_response
signal_ifo = sum(signal.values())
signal_ifo *= self.strain_data.frequency_mask
signal_ifo = sum(signal.values()) * mask
time_shift = self.time_delay_from_geocenter(
parameters['ra'], parameters['dec'], parameters['geocent_time'])
......@@ -320,12 +328,11 @@ class Interferometer(object):
dt_geocent = parameters['geocent_time'] - self.strain_data.start_time
dt = dt_geocent + time_shift
signal_ifo[self.strain_data.frequency_mask] = signal_ifo[self.strain_data.frequency_mask] * np.exp(
-1j * 2 * np.pi * dt * self.strain_data.frequency_array[self.strain_data.frequency_mask])
signal_ifo[mask] = signal_ifo[mask] * np.exp(-1j * 2 * np.pi * dt * frequencies)
signal_ifo[self.strain_data.frequency_mask] *= self.calibration_model.get_calibration_factor(
self.strain_data.frequency_array[self.strain_data.frequency_mask],
prefix='recalib_{}_'.format(self.name), **parameters)
signal_ifo[mask] *= self.calibration_model.get_calibration_factor(
frequencies, prefix='recalib_{}_'.format(self.name), **parameters
)
return signal_ifo
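A hypothetical sketch of the new keyword: evaluating the response on a reduced frequency grid (for example, relative-binning nodes) instead of the interferometer's full, masked frequency array; `ifo`, `polarizations`, and `parameters` are assumed to exist already:

```python
import numpy as np

nodes = np.array([20.0, 32.0, 64.0, 128.0, 256.0])   # illustrative frequencies
response_at_nodes = ifo.get_detector_response(
    waveform_polarizations=polarizations,  # 'plus'/'cross' strains evaluated at `nodes`
    parameters=parameters,
    frequencies=nodes,                     # bypasses the internal frequency mask
)
```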
......@@ -620,7 +627,12 @@ class Interferometer(object):
return self.strain_data.frequency_domain_strain / self.amplitude_spectral_density_array
def save_data(self, outdir, label=None):
""" Creates a save file for the data in plain text format
""" Creates save files for interferometer data in plain text format.
Saves two files: the frequency domain strain data with three columns [f, real part of h(f),
imaginary part of h(f)], and the amplitude spectral density with two columns [f, ASD(f)].
Note that in v1.3.0 and below, the ASD was saved in a file called *_psd.dat.
Parameters
==========
......@@ -631,10 +643,10 @@ class Interferometer(object):
"""
if label is None:
filename_psd = '{}/{}_psd.dat'.format(outdir, self.name)
filename_asd = '{}/{}_asd.dat'.format(outdir, self.name)
filename_data = '{}/{}_frequency_domain_data.dat'.format(outdir, self.name)
else:
filename_psd = '{}/{}_{}_psd.dat'.format(outdir, self.name, label)
filename_asd = '{}/{}_{}_asd.dat'.format(outdir, self.name, label)
filename_data = '{}/{}_{}_frequency_domain_data.dat'.format(outdir, self.name, label)
np.savetxt(filename_data,
np.array(
......@@ -642,7 +654,7 @@ class Interferometer(object):
self.strain_data.frequency_domain_strain.real,
self.strain_data.frequency_domain_strain.imag]).T,
header='f real_h(f) imag_h(f)')
np.savetxt(filename_psd,
np.savetxt(filename_asd,
np.array(
[self.strain_data.frequency_array,
self.amplitude_spectral_density_array]).T,
......
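A minimal sketch of reading back the two files written by `save_data` (file names follow the pattern above; the "H1" detector name and `outdir` path are assumptions):

```python
import numpy as np

freq, h_real, h_imag = np.loadtxt("outdir/H1_frequency_domain_data.dat", unpack=True)
freq_asd, asd = np.loadtxt("outdir/H1_asd.dat", unpack=True)
strain = h_real + 1j * h_imag   # reassemble the complex frequency-domain strain
```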
......@@ -2,6 +2,7 @@ from .base import GravitationalWaveTransient
from .basic import BasicGravitationalWaveTransient
from .roq import BilbyROQParamsRangeError, ROQGravitationalWaveTransient
from .multiband import MBGravitationalWaveTransient
from .relative import RelativeBinningGravitationalWaveTransient
from ..source import lal_binary_black_hole
from ..waveform_generator import WaveformGenerator
......
......@@ -115,6 +115,36 @@ class GravitationalWaveTransient(Likelihood):
optimal_snr_squared_array = attr.ib()
d_inner_h_squared_tc_array = attr.ib()
def __add__(self, other_snr):
total_d_inner_h = self.d_inner_h + other_snr.d_inner_h
total_optimal_snr_squared = self.optimal_snr_squared + \
np.real(other_snr.optimal_snr_squared)
total_complex_matched_filter_snr = self.complex_matched_filter_snr + \
other_snr.complex_matched_filter_snr
total_d_inner_h_array = self.d_inner_h_array
if other_snr.d_inner_h_array is not None \
and self.d_inner_h_array is not None:
total_d_inner_h_array += other_snr.d_inner_h_array
total_optimal_snr_squared_array = self.optimal_snr_squared_array
if other_snr.optimal_snr_squared_array is not None \
and self.optimal_snr_squared_array is not None:
total_optimal_snr_squared_array += other_snr.optimal_snr_squared_array
total_d_inner_h_squared_tc_array = self.d_inner_h_squared_tc_array
if other_snr.d_inner_h_squared_tc_array is not None \
and self.d_inner_h_squared_tc_array is not None:
total_d_inner_h_squared_tc_array += other_snr.d_inner_h_squared_tc_array
return self.__class__(d_inner_h=total_d_inner_h,
optimal_snr_squared=total_optimal_snr_squared,
complex_matched_filter_snr=total_complex_matched_filter_snr,
d_inner_h_array=total_d_inner_h_array,
optimal_snr_squared_array=total_optimal_snr_squared_array,
d_inner_h_squared_tc_array=total_d_inner_h_squared_tc_array)
def __init__(
self, interferometers, waveform_generator, time_marginalization=False,
distance_marginalization=False, phase_marginalization=False, calibration_marginalization=False, priors=None,
......@@ -132,6 +162,7 @@ class GravitationalWaveTransient(Likelihood):
self.calibration_marginalization = calibration_marginalization
self.priors = priors
self._check_set_duration_and_sampling_frequency_of_waveform_generator()
self._noise_log_likelihood_value = None
self.jitter_time = jitter_time
self.reference_frame = reference_frame
if "geocent" not in time_reference:
......@@ -234,8 +265,10 @@ class GravitationalWaveTransient(Likelihood):
The bilby interferometer object
"""
signal = interferometer.get_detector_response(
waveform_polarizations, self.parameters)
signal = self._compute_full_waveform(
signal_polarizations=waveform_polarizations,
interferometer=interferometer,
)
_mask = interferometer.frequency_mask
if 'recalib_index' in self.parameters:
......@@ -342,7 +375,7 @@ class GravitationalWaveTransient(Likelihood):
else:
self._prior = None
def noise_log_likelihood(self):
def _calculate_noise_log_likelihood(self):
log_l = 0
for interferometer in self.interferometers:
mask = interferometer.frequency_mask
......@@ -353,83 +386,109 @@ class GravitationalWaveTransient(Likelihood):
self.waveform_generator.duration) / 2
return float(np.real(log_l))
def noise_log_likelihood(self):
# only compute likelihood if called for the 1st time
if self._noise_log_likelihood_value is None:
self._noise_log_likelihood_value = self._calculate_noise_log_likelihood()
return self._noise_log_likelihood_value
def log_likelihood_ratio(self):
waveform_polarizations = \
self.waveform_generator.frequency_domain_strain(self.parameters)
if self.time_marginalization and self.jitter_time:
self.parameters['geocent_time'] += self.parameters['time_jitter']
self.parameters.update(self.get_sky_frame_parameters())
if waveform_polarizations is None:
return np.nan_to_num(-np.inf)
d_inner_h = 0.
optimal_snr_squared = 0.
complex_matched_filter_snr = 0.
total_snrs = self._CalculatedSNRs(
d_inner_h=0., optimal_snr_squared=0., complex_matched_filter_snr=0.,
d_inner_h_array=None, optimal_snr_squared_array=None, d_inner_h_squared_tc_array=None)
if self.time_marginalization and self.calibration_marginalization:
if self.jitter_time:
self.parameters['geocent_time'] += self.parameters['time_jitter']
d_inner_h_array = np.zeros(
total_snrs.d_inner_h_array = np.zeros(
(self.number_of_response_curves, len(self.interferometers.frequency_array[0:-1])),
dtype=np.complex128)
optimal_snr_squared_array = np.zeros(self.number_of_response_curves, dtype=np.complex128)
total_snrs.optimal_snr_squared_array = \
np.zeros(self.number_of_response_curves, dtype=np.complex128)
elif self.time_marginalization:
if self.jitter_time:
self.parameters['geocent_time'] += self.parameters['time_jitter']
d_inner_h_array = np.zeros(len(self._times), dtype=np.complex128)
total_snrs.d_inner_h_array = np.zeros(len(self._times), dtype=np.complex128)
elif self.calibration_marginalization:
d_inner_h_array = np.zeros(self.number_of_response_curves, dtype=np.complex128)
optimal_snr_squared_array = np.zeros(self.number_of_response_curves, dtype=np.complex128)
total_snrs.d_inner_h_array = \
np.zeros(self.number_of_response_curves, dtype=np.complex128)
total_snrs.optimal_snr_squared_array = \
np.zeros(self.number_of_response_curves, dtype=np.complex128)
for interferometer in self.interferometers:
per_detector_snr = self.calculate_snrs(
waveform_polarizations=waveform_polarizations,
interferometer=interferometer)
d_inner_h += per_detector_snr.d_inner_h
optimal_snr_squared += np.real(per_detector_snr.optimal_snr_squared)
complex_matched_filter_snr += per_detector_snr.complex_matched_filter_snr
total_snrs += per_detector_snr
if self.time_marginalization or self.calibration_marginalization:
d_inner_h_array += per_detector_snr.d_inner_h_array
log_l = self.compute_log_likelihood_from_snrs(total_snrs)
if self.calibration_marginalization:
optimal_snr_squared_array += per_detector_snr.optimal_snr_squared_array
if self.time_marginalization and self.jitter_time:
self.parameters['geocent_time'] -= self.parameters['time_jitter']
return float(log_l.real)
def compute_log_likelihood_from_snrs(self, total_snrs):
if self.calibration_marginalization and self.time_marginalization:
log_l = self.time_and_calibration_marginalized_likelihood(
d_inner_h_array=d_inner_h_array,
h_inner_h=optimal_snr_squared_array)
if self.jitter_time:
self.parameters['geocent_time'] -= self.parameters['time_jitter']
d_inner_h_array=total_snrs.d_inner_h_array,
h_inner_h=total_snrs.optimal_snr_squared_array)
elif self.calibration_marginalization:
log_l = self.calibration_marginalized_likelihood(
d_inner_h_calibration_array=d_inner_h_array,
h_inner_h=optimal_snr_squared_array)
d_inner_h_calibration_array=total_snrs.d_inner_h_array,
h_inner_h=total_snrs.optimal_snr_squared_array)
elif self.time_marginalization:
log_l = self.time_marginalized_likelihood(
d_inner_h_tc_array=d_inner_h_array,
h_inner_h=optimal_snr_squared)
if self.jitter_time:
self.parameters['geocent_time'] -= self.parameters['time_jitter']
d_inner_h_tc_array=total_snrs.d_inner_h_array,
h_inner_h=total_snrs.optimal_snr_squared)
elif self.distance_marginalization:
log_l = self.distance_marginalized_likelihood(
d_inner_h=d_inner_h, h_inner_h=optimal_snr_squared)
d_inner_h=total_snrs.d_inner_h, h_inner_h=total_snrs.optimal_snr_squared)
elif self.phase_marginalization:
log_l = self.phase_marginalized_likelihood(
d_inner_h=d_inner_h, h_inner_h=optimal_snr_squared)
d_inner_h=total_snrs.d_inner_h, h_inner_h=total_snrs.optimal_snr_squared)
else:
log_l = np.real(d_inner_h) - optimal_snr_squared / 2
log_l = np.real(total_snrs.d_inner_h) - total_snrs.optimal_snr_squared / 2
return float(log_l.real)
return log_l
def compute_per_detector_log_likelihood(self):
waveform_polarizations = \
self.waveform_generator.frequency_domain_strain(self.parameters)
if self.time_marginalization and self.jitter_time:
self.parameters['geocent_time'] += self.parameters['time_jitter']
self.parameters.update(self.get_sky_frame_parameters())
for interferometer in self.interferometers:
per_detector_snr = self.calculate_snrs(
waveform_polarizations=waveform_polarizations,
interferometer=interferometer)
self.parameters['{}_log_likelihood'.format(interferometer.name)] = \
self.compute_log_likelihood_from_snrs(per_detector_snr)
if self.time_marginalization and self.jitter_time:
self.parameters['geocent_time'] -= self.parameters['time_jitter']
return self.parameters.copy()
def generate_posterior_sample_from_marginalized_likelihood(self):
"""
......@@ -557,8 +616,10 @@ class GravitationalWaveTransient(Likelihood):
for ifo in self.interferometers:
ifo_length = len(ifo.frequency_domain_strain)
mask = ifo.frequency_mask
signal = ifo.get_detector_response(
signal_polarizations, self.parameters)
signal = self._compute_full_waveform(
signal_polarizations=signal_polarizations,
interferometer=ifo,
)
signal_long[:ifo_length] = signal
data[:ifo_length] = np.conj(ifo.frequency_domain_strain)
psd[:ifo_length][mask] = ifo.power_spectral_density_array[mask]
......@@ -646,6 +707,23 @@ class GravitationalWaveTransient(Likelihood):
h_inner_h += per_detector_snr.optimal_snr_squared
return d_inner_h, h_inner_h
def _compute_full_waveform(self, signal_polarizations, interferometer):
"""
Project the waveform polarizations onto the interferometer
response. This is useful for likelihood classes that don't
use the full frequency array, e.g., the relative binning
likelihood.
Parameters
==========
signal_polarizations: dict
Dictionary containing the waveform evaluated at
:code:`interferometer.frequency_array`.
interferometer: bilby.gw.detector.Interferometer
Interferometer to compute the response with respect to.
"""
return interferometer.get_detector_response(signal_polarizations, self.parameters)
def generate_phase_sample_from_marginalized_likelihood(
self, signal_polarizations=None):
r"""
......@@ -749,32 +827,29 @@ class GravitationalWaveTransient(Likelihood):
signal_polarizations = \
self.waveform_generator.frequency_domain_strain(self.parameters)
d_inner_h = 0.
optimal_snr_squared = 0.
complex_matched_filter_snr = 0.
d_inner_h_array = np.zeros(self.number_of_response_curves, dtype=np.complex128)
optimal_snr_squared_array = np.zeros(self.number_of_response_curves, dtype=np.complex128)
total_snrs = self._CalculatedSNRs(
d_inner_h=0., optimal_snr_squared=0., complex_matched_filter_snr=0.,
d_inner_h_array=np.zeros(self.number_of_response_curves, dtype=np.complex128),
optimal_snr_squared_array=np.zeros(self.number_of_response_curves, dtype=np.complex128))
for interferometer in self.interferometers:
per_detector_snr = self.calculate_snrs(
waveform_polarizations=signal_polarizations,
interferometer=interferometer)
d_inner_h += per_detector_snr.d_inner_h
optimal_snr_squared += np.real(per_detector_snr.optimal_snr_squared)
complex_matched_filter_snr += per_detector_snr.complex_matched_filter_snr
d_inner_h_array += per_detector_snr.d_inner_h_array
optimal_snr_squared_array += per_detector_snr.optimal_snr_squared_array
total_snrs += per_detector_snr
if self.distance_marginalization:
log_l_cal_array = self.distance_marginalized_likelihood(
d_inner_h=d_inner_h_array, h_inner_h=optimal_snr_squared_array)
d_inner_h=total_snrs.d_inner_h_array,
h_inner_h=total_snrs.optimal_snr_squared_array)
elif self.phase_marginalization:
log_l_cal_array = self.phase_marginalized_likelihood(
d_inner_h=d_inner_h_array,
h_inner_h=optimal_snr_squared_array)
d_inner_h=total_snrs.d_inner_h_array,
h_inner_h=total_snrs.optimal_snr_squared_array)
else:
log_l_cal_array = np.real(d_inner_h_array - optimal_snr_squared_array / 2)
log_l_cal_array = \
np.real(total_snrs.d_inner_h_array - total_snrs.optimal_snr_squared_array / 2)
return log_l_cal_array
......
......@@ -8,6 +8,7 @@ from ...core.utils import (
logger, speed_of_light, solar_mass, radius_of_earth,
gravitational_constant, round_up_to_power_of_two
)
from ..prior import CBCPriorDict
class MBGravitationalWaveTransient(GravitationalWaveTransient):
......@@ -21,8 +22,9 @@ class MBGravitationalWaveTransient(GravitationalWaveTransient):
A list of `bilby.detector.Interferometer` instances - contains the detector data and power spectral densities
waveform_generator: `bilby.waveform_generator.WaveformGenerator`
An object which computes the frequency-domain strain of the signal, given some set of parameters
reference_chirp_mass: float
A reference chirp mass for determining the frequency banding
reference_chirp_mass: float, optional
A reference chirp mass for determining the frequency banding. This is set to the prior minimum of the chirp mass
if not specified. Hence, a CBCPriorDict object needs to be passed to priors when this parameter is not specified.
highest_mode: int, optional
The maximum magnetic number of gravitational-wave moments. Default is 2
linear_interpolation: bool, optional
......@@ -72,10 +74,11 @@ class MBGravitationalWaveTransient(GravitationalWaveTransient):
"""
def __init__(
self, interferometers, waveform_generator, reference_chirp_mass, highest_mode=2, linear_interpolation=True,
accuracy_factor=5, time_offset=None, delta_f_end=None, maximum_banding_frequency=None,
minimum_banding_duration=0., distance_marginalization=False, phase_marginalization=False, priors=None,
distance_marginalization_lookup_table=None, reference_frame="sky", time_reference="geocenter"
self, interferometers, waveform_generator, reference_chirp_mass=None, highest_mode=2,
linear_interpolation=True, accuracy_factor=5, time_offset=None, delta_f_end=None,
maximum_banding_frequency=None, minimum_banding_duration=0., distance_marginalization=False,
phase_marginalization=False, priors=None, distance_marginalization_lookup_table=None,
reference_frame="sky", time_reference="geocenter"
):
super(MBGravitationalWaveTransient, self).__init__(
interferometers=interferometers, waveform_generator=waveform_generator, priors=priors,
......@@ -108,7 +111,24 @@ class MBGravitationalWaveTransient(GravitationalWaveTransient):
if isinstance(reference_chirp_mass, int) or isinstance(reference_chirp_mass, float):
self._reference_chirp_mass = reference_chirp_mass
else:
raise TypeError("reference_chirp_mass must be a number")
logger.info(
"No int or float number has been passed to reference_chirp_mass. "
"Checking prior minimum of chirp mass ..."
)
if not isinstance(self.priors, CBCPriorDict):
raise TypeError(
f"priors: {self.priors} is not CBCPriorDict. Prior minimum of chirp mass can not be obtained."
)
self._reference_chirp_mass = self.priors.minimum_chirp_mass
if self._reference_chirp_mass is None:
raise Exception(
"Prior minimum of chirp mass can not be determined as priors does not contain necessary mass "
"parameters."
)
logger.info(
"reference_chirp_mass is automatically set to prior minimum of chirp mass: "
f"{self._reference_chirp_mass}."
)
@property
def highest_mode(self):
......
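# A minimal usage sketch (illustrative, not from the repository) of relying on
# the prior-minimum fallback above. It assumes `ifos` and a multiband-compatible
# `waveform_generator` have been set up elsewhere.
import bilby

# Any CBCPriorDict with chirp-mass information works; BNSPriorDict is used
# purely for illustration.
priors = bilby.gw.prior.BNSPriorDict()
priors["chirp_mass"] = bilby.core.prior.Uniform(1.4, 1.6, name="chirp_mass")

# reference_chirp_mass is left unset, so the likelihood falls back to
# priors.minimum_chirp_mass (1.4 here) and logs the value it chose.
likelihood = bilby.gw.likelihood.MBGravitationalWaveTransient(
    interferometers=ifos,
    waveform_generator=waveform_generator,
    priors=priors,
)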
import numpy as np
from scipy.optimize import differential_evolution
from .base import GravitationalWaveTransient
from ...core.utils import logger
from ...core.prior.base import Constraint
from ...core.prior import DeltaFunction
from ..utils import noise_weighted_inner_product
class RelativeBinningGravitationalWaveTransient(GravitationalWaveTransient):
"""A gravitational-wave transient likelihood object which uses the relative
binning procedure to calculate a fast likelihood. See Zackay et al.,
arXiv:1806.08792.
Parameters
----------
interferometers: list, bilby.gw.detector.InterferometerList
A list of `bilby.detector.Interferometer` instances - contains the
detector data and power spectral densities
waveform_generator: `bilby.waveform_generator.WaveformGenerator`
An object which computes the frequency-domain strain of the signal,
given some set of parameters
fiducial_parameters: dict, optional
A starting guess for initial parameters of the event for finding the
maximum likelihood (fiducial) waveform.
parameter_bounds: dict, optional
Dictionary of bounds (lists) for the initial parameters when finding
the initial maximum likelihood (fiducial) waveform.
distance_marginalization: bool, optional
If true, marginalize over distance in the likelihood.
This uses a look up table calculated at run time.
The distance prior is set to be a delta function at the minimum
distance allowed in the prior being marginalised over.
time_marginalization: bool, optional
If true, marginalize over time in the likelihood.
This uses a FFT to calculate the likelihood over a regularly spaced
grid.
In order to cover the whole space the prior is set to be uniform over
the spacing of the array of times.
If using time marginalisation and jitter_time is True a "jitter"
parameter is added to the prior which modifies the position of the
grid of times.
phase_marginalization: bool, optional
If true, marginalize over phase in the likelihood.
This is done analytically using a Bessel function.
The phase prior is set to be a delta function at phase=0.
priors: dict, optional
If given, used in the distance and phase marginalization.
distance_marginalization_lookup_table: (dict, str), optional
If a dict, dictionary containing the lookup_table, distance_array,
(distance) prior_array, and reference_distance used to construct
the table.
If a string the name of a file containing these quantities.
The lookup table is stored after construction in either the
provided string or a default location:
'.distance_marginalization_lookup_dmin{}_dmax{}_n{}.npz'
jitter_time: bool, optional
Whether to introduce a `time_jitter` parameter. This avoids either
missing the likelihood peak, or introducing biases in the
reconstructed time posterior due to an insufficient sampling frequency.
Default is True; using this parameter is strongly encouraged.
reference_frame: (str, bilby.gw.detector.InterferometerList, list), optional
Definition of the reference frame for the sky location.
- "sky": sample in RA/dec, this is the default
- e.g., "H1L1", ["H1", "L1"], InterferometerList(["H1", "L1"]):
sample in azimuth and zenith, `azimuth` and `zenith` defined in the
frame where the z-axis is aligned with the vector connecting H1
and L1.
time_reference: str, optional
Name of the reference for the sampled time parameter.
- "geocent"/"geocenter": sample in the time at the Earth's center,
this is the default
- e.g., "H1": sample in the time of arrival at H1
chi: float, optional
Tunable parameter which limits the perturbation of alpha when setting
up the bin range. See https://arxiv.org/abs/1806.08792.
epsilon: float, optional
Tunable parameter which limits the differential phase change in each
bin when setting up the bin range. See https://arxiv.org/abs/1806.08792.
Returns
-------
Likelihood: `bilby.core.likelihood.Likelihood`
A likelihood object, able to compute the likelihood of the data given
some model parameters.
Notes
-----
The relative binning likelihood does not currently support calibration marginalization.
"""
def __init__(self, interferometers,
waveform_generator,
fiducial_parameters=None,
parameter_bounds=None,
maximization_kwargs=None,
update_fiducial_parameters=False,
distance_marginalization=False,
time_marginalization=False,
phase_marginalization=False,
priors=None,
distance_marginalization_lookup_table=None,
jitter_time=True,
reference_frame="sky",
time_reference="geocenter",
chi=1,
epsilon=0.5):
super(RelativeBinningGravitationalWaveTransient, self).__init__(
interferometers=interferometers,
waveform_generator=waveform_generator,
distance_marginalization=distance_marginalization,
phase_marginalization=phase_marginalization,
time_marginalization=time_marginalization,
priors=priors,
distance_marginalization_lookup_table=distance_marginalization_lookup_table,
jitter_time=jitter_time,
reference_frame=reference_frame,
time_reference=time_reference)
if fiducial_parameters is None:
logger.info("Drawing fiducial parameters from prior.")
fiducial_parameters = priors.sample()
fiducial_parameters["fiducial"] = 0
if self.time_marginalization:
fiducial_parameters["geocent_time"] = interferometers.start_time
if self.distance_marginalization:
fiducial_parameters["luminosity_distance"] = self._ref_dist
if self.phase_marginalization:
fiducial_parameters["phase"] = 0.0
self.fiducial_parameters = fiducial_parameters
self.chi = chi
self.epsilon = epsilon
self.gamma = np.array([-5 / 3, -2 / 3, 1, 5 / 3, 7 / 3])
self.maximum_frequency = waveform_generator.frequency_array[-1]
self.fiducial_waveform_obtained = False
self.check_if_bins_are_setup = False
self.fiducial_polarizations = None
self.per_detector_fiducial_waveforms = dict()
self.per_detector_fiducial_waveform_points = dict()
self.bin_freqs = dict()
self.bin_inds = dict()
self.bin_widths = dict()
self.bin_centers = dict()
self.set_fiducial_waveforms(self.fiducial_parameters)
logger.info("Initial fiducial waveforms set up")
self.setup_bins()
self.compute_summary_data()
logger.info("Summary Data Obtained")
if update_fiducial_parameters:
# write a check to make sure prior is not None
logger.info("Using scipy optimization to find maximum likelihood parameters.")
self.parameters_to_be_updated = [key for key in priors if not isinstance(
priors[key], (DeltaFunction, Constraint, float, int))]
logger.info(f"Parameters over which likelihood is maximized: {self.parameters_to_be_updated}")
if parameter_bounds is None:
logger.info("No parameter bounds were given. Using priors instead.")
self.parameter_bounds = self.get_bounds_from_priors(priors)
else:
self.parameter_bounds = self.get_parameter_list_from_dictionary(parameter_bounds)
self.fiducial_parameters = self.find_maximum_likelihood_parameters(
self.parameter_bounds, maximization_kwargs=maximization_kwargs)
self.parameters.update(self.fiducial_parameters)
logger.info(f"Fiducial likelihood: {self.log_likelihood_ratio():.2f}")
def __repr__(self):
return self.__class__.__name__ + '(interferometers={},\n\twaveform_generator={},\n\tfiducial_parameters={})' \
.format(self.interferometers, self.waveform_generator, self.fiducial_parameters)
def setup_bins(self):
frequency_array = self.waveform_generator.frequency_array
gamma = self.gamma[:, np.newaxis]
maximum_frequency = frequency_array[0]
minimum_frequency = frequency_array[-1]
for interferometer in self.interferometers:
maximum_frequency = max(maximum_frequency, interferometer.maximum_frequency)
minimum_frequency = min(minimum_frequency, interferometer.minimum_frequency)
maximum_frequency = min(maximum_frequency, self.maximum_frequency)
frequency_array_useful = frequency_array[
(frequency_array >= minimum_frequency)
& (frequency_array <= maximum_frequency)
]
d_alpha = self.chi * 2 * np.pi / np.abs(
(minimum_frequency ** gamma) * np.heaviside(-gamma, 1)
- (maximum_frequency ** gamma) * np.heaviside(gamma, 1)
)
d_phi = np.sum(
np.sign(gamma) * d_alpha * frequency_array_useful ** gamma,
axis=0
)
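# d_phi is a monotonic proxy for the maximum dephasing between waveforms
# allowed by the prior, built from a sum of frequency power laws (Zackay et
# al. 2018); bin edges are placed so each bin spans roughly `epsilon` of it.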
d_phi_from_start = d_phi - d_phi[0]
self.number_of_bins = int(d_phi_from_start[-1] // self.epsilon)
self.bin_freqs = np.zeros(self.number_of_bins + 1)
self.bin_inds = np.zeros(self.number_of_bins + 1, dtype=int)
for i in range(self.number_of_bins + 1):
bin_index = np.where(d_phi_from_start >= ((i / self.number_of_bins) * d_phi_from_start[-1]))[0][0]
bin_freq = frequency_array_useful[bin_index]
self.bin_freqs[i] = bin_freq
self.bin_inds[i] = np.where(frequency_array >= bin_freq)[0][0]
logger.debug(
f"Set up {self.number_of_bins} bins "
f"between {minimum_frequency} Hz and {maximum_frequency} Hz"
)
self.waveform_generator.waveform_arguments["frequency_bin_edges"] = self.bin_freqs
self.bin_widths = self.bin_freqs[1:] - self.bin_freqs[:-1]
self.bin_centers = (self.bin_freqs[1:] + self.bin_freqs[:-1]) / 2
for interferometer in self.interferometers:
name = interferometer.name
self.per_detector_fiducial_waveform_points[name] = (
self.per_detector_fiducial_waveforms[name][self.bin_inds]
)
def set_fiducial_waveforms(self, parameters):
_fiducial = parameters["fiducial"]
parameters["fiducial"] = 1
self.fiducial_polarizations = self.waveform_generator.frequency_domain_strain(
parameters)
maximum_nonzero_index = np.where(self.fiducial_polarizations["plus"] != 0j)[0][-1]
logger.debug(f"Maximum Nonzero Index is {maximum_nonzero_index}")
maximum_nonzero_frequency = self.waveform_generator.frequency_array[maximum_nonzero_index]
logger.debug(f"Maximum Nonzero Frequency is {maximum_nonzero_frequency}")
self.maximum_frequency = maximum_nonzero_frequency
if self.fiducial_polarizations is None:
raise ValueError(f"Cannot compute fiducial waveforms for {parameters}")
for interferometer in self.interferometers:
logger.debug(f"Maximum Frequency is {interferometer.maximum_frequency}")
wf = interferometer.get_detector_response(self.fiducial_polarizations, parameters)
wf[interferometer.frequency_array > self.maximum_frequency] = 0
self.per_detector_fiducial_waveforms[interferometer.name] = wf
parameters["fiducial"] = _fiducial
def find_maximum_likelihood_parameters(self, parameter_bounds,
iterations=5, maximization_kwargs=None):
if maximization_kwargs is None:
maximization_kwargs = dict()
self.parameters.update(self.fiducial_parameters)
self.parameters["fiducial"] = 0
updated_parameters_list = self.get_parameter_list_from_dictionary(self.fiducial_parameters)
old_fiducial_ln_likelihood = self.log_likelihood_ratio()
logger.info(f"Fiducial ln likelihood ratio: {old_fiducial_ln_likelihood:.2f}")
for it in range(iterations):
logger.info(f"Optimizing fiducial parameters. Iteration : {it + 1}")
output = differential_evolution(
self.lnlike_scipy_maximize,
bounds=parameter_bounds,
x0=updated_parameters_list,
**maximization_kwargs,
)
updated_parameters_list = output['x']
updated_parameters = self.get_parameter_dictionary_from_list(updated_parameters_list)
self.parameters.update(updated_parameters)
self.set_fiducial_waveforms(updated_parameters)
self.setup_bins()
self.compute_summary_data()
new_fiducial_ln_likelihood = self.log_likelihood_ratio()
logger.info(f"Fiducial ln likelihood ratio: {new_fiducial_ln_likelihood:.2f}")
if new_fiducial_ln_likelihood - old_fiducial_ln_likelihood < 0.1:
break
old_fiducial_ln_likelihood = new_fiducial_ln_likelihood
logger.info("Fiducial waveforms updated")
logger.info("Summary Data updated")
return updated_parameters
def lnlike_scipy_maximize(self, parameter_list):
self.parameters.update(self.get_parameter_dictionary_from_list(parameter_list))
return -self.log_likelihood_ratio()
def get_parameter_dictionary_from_list(self, parameter_list):
parameter_dictionary = dict(zip(self.parameters_to_be_updated, parameter_list))
excluded_parameter_keys = set(self.fiducial_parameters) - set(self.parameters_to_be_updated)
for key in excluded_parameter_keys:
parameter_dictionary[key] = self.fiducial_parameters[key]
return parameter_dictionary
def get_parameter_list_from_dictionary(self, parameter_dict):
return [parameter_dict[k] for k in self.parameters_to_be_updated]
def get_bounds_from_priors(self, priors):
bounds = []
for key in self.parameters_to_be_updated:
bounds.append([priors[key].minimum, priors[key].maximum])
return bounds
def compute_summary_data(self):
summary_data = dict()
for interferometer in self.interferometers:
mask = interferometer.frequency_mask
masked_frequency_array = interferometer.frequency_array[mask]
masked_bin_inds = []
for edge in self.bin_freqs:
index = np.where(masked_frequency_array == edge)[0][0]
masked_bin_inds.append(index)
masked_strain = interferometer.frequency_domain_strain[mask]
masked_h0 = self.per_detector_fiducial_waveforms[interferometer.name][mask]
masked_psd = interferometer.power_spectral_density_array[mask]
duration = interferometer.duration
a0, b0, a1, b1 = np.zeros((4, self.number_of_bins), dtype=complex)
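# Per-bin summary coefficients of the relative binning scheme:
# a0 = <h0, d>, a1 = <h0, d (f - f_m)>, b0 = <h0, h0>, b1 = <h0, h0 (f - f_m)>,
# with f_m the central frequency of the bin. These are computed once and
# reused in every likelihood evaluation.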
for i in range(self.number_of_bins):
start_idx = masked_bin_inds[i]
end_idx = masked_bin_inds[i + 1]
start = masked_frequency_array[start_idx]
stop = masked_frequency_array[end_idx]
idxs = slice(start_idx, end_idx)
strain = masked_strain[idxs]
h0 = masked_h0[idxs]
psd = masked_psd[idxs]
frequencies = masked_frequency_array[idxs]
central_frequency = (start + stop) / 2
delta_frequency = frequencies - central_frequency
a0[i] = noise_weighted_inner_product(h0, strain, psd, duration)
b0[i] = noise_weighted_inner_product(h0, h0, psd, duration)
a1[i] = noise_weighted_inner_product(h0, strain * delta_frequency, psd, duration)
b1[i] = noise_weighted_inner_product(h0, h0 * delta_frequency, psd, duration)
summary_data[interferometer.name] = (a0, a1, b0, b1)
self.summary_data = summary_data
def compute_waveform_ratio_per_interferometer(self, waveform_polarizations, interferometer):
name = interferometer.name
strain = interferometer.get_detector_response(
waveform_polarizations=waveform_polarizations,
parameters=self.parameters,
frequencies=self.bin_freqs,
)
reference_strain = self.per_detector_fiducial_waveform_points[name]
waveform_ratio = strain / reference_strain
r0 = (waveform_ratio[1:] + waveform_ratio[:-1]) / 2
r1 = (waveform_ratio[1:] - waveform_ratio[:-1]) / self.bin_widths
return [r0, r1]
def _compute_full_waveform(self, signal_polarizations, interferometer):
fiducial_waveform = self.per_detector_fiducial_waveforms[interferometer.name]
r0, r1 = self.compute_waveform_ratio_per_interferometer(
waveform_polarizations=signal_polarizations,
interferometer=interferometer,
)
f = interferometer.frequency_array
duplicated_r0, duplicated_r1, duplicated_fm = np.zeros((3, f.shape[0]), dtype=complex)
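# Expand the per-bin ratio coefficients onto the full frequency grid so the
# binned approximation h(f) ≈ h0(f) [r0 + r1 (f - f_m)] can be evaluated at
# every frequency (needed, e.g., for the time-marginalised FFT below).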
for i in range(self.number_of_bins):
idxs = slice(self.bin_inds[i], self.bin_inds[i + 1])
duplicated_fm[idxs] = self.bin_centers[i]
duplicated_r0[idxs] = r0[i]
duplicated_r1[idxs] = r1[i]
full_waveform_ratio = duplicated_r0 + duplicated_r1 * (f - duplicated_fm)
return fiducial_waveform * full_waveform_ratio
def calculate_snrs(self, waveform_polarizations, interferometer):
r0, r1 = self.compute_waveform_ratio_per_interferometer(
waveform_polarizations=waveform_polarizations,
interferometer=interferometer,
)
a0, a1, b0, b1 = self.summary_data[interferometer.name]
d_inner_h = np.sum(a0 * np.conjugate(r0) + a1 * np.conjugate(r1))
h_inner_h = np.sum(b0 * np.abs(r0) ** 2 + 2 * b1 * np.real(r0 * np.conjugate(r1)))
optimal_snr_squared = h_inner_h
complex_matched_filter_snr = d_inner_h / (optimal_snr_squared ** 0.5)
if self.time_marginalization:
full_waveform = self._compute_full_waveform(
signal_polarizations=waveform_polarizations,
interferometer=interferometer,
)
d_inner_h_array = 4 / self.waveform_generator.duration * np.fft.fft(
full_waveform[0:-1]
* interferometer.frequency_domain_strain.conjugate()[0:-1]
/ interferometer.power_spectral_density_array[0:-1])
else:
d_inner_h_array = None
return self._CalculatedSNRs(
d_inner_h=d_inner_h, optimal_snr_squared=optimal_snr_squared,
complex_matched_filter_snr=complex_matched_filter_snr,
d_inner_h_array=d_inner_h_array, optimal_snr_squared_array=None,
d_inner_h_squared_tc_array=None)
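# A minimal usage sketch (illustrative, not from the repository): letting the
# likelihood refine a rough fiducial guess itself. `ifos`, a relative-binning
# `waveform_generator`, `priors`, and a complete CBC parameter dictionary
# `guess` are assumed to be defined elsewhere.
likelihood = bilby.gw.likelihood.RelativeBinningGravitationalWaveTransient(
    interferometers=ifos,
    waveform_generator=waveform_generator,
    priors=priors,
    fiducial_parameters=guess,        # rough starting point, e.g. search-pipeline values
    update_fiducial_parameters=True,  # refine the guess with differential_evolution
    parameter_bounds=None,            # fall back to the prior ranges for the maximisation
    epsilon=0.5,                      # maximum differential phase change per bin
)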
......@@ -444,6 +444,78 @@ def binary_neutron_star_roq(
phi_12=phi_12, lambda_1=lambda_1, lambda_2=lambda_2, **waveform_kwargs)
def lal_binary_black_hole_relative_binning(
frequency_array, mass_1, mass_2, luminosity_distance, a_1, tilt_1,
phi_12, a_2, tilt_2, phi_jl, theta_jn, phase, fiducial, **kwargs):
""" Source model to go with RelativeBinningGravitationalWaveTransient likelihood.
All parameters are the same as in the usual source models, except `fiducial`
fiducial: float
If fiducial=1, waveform evaluated on the full frequency grid is returned.
If fiducial=0, waveform evaluated at waveform_kwargs["frequency_bin_edges"]
is returned.
"""
waveform_kwargs = dict(
waveform_approximant='IMRPhenomPv2', reference_frequency=50.0,
minimum_frequency=20.0, maximum_frequency=frequency_array[-1],
catch_waveform_errors=False, pn_spin_order=-1, pn_tidal_order=-1,
pn_phase_order=-1, pn_amplitude_order=0)
waveform_kwargs.update(kwargs)
if fiducial == 1:
return _base_lal_cbc_fd_waveform(
frequency_array=frequency_array, mass_1=mass_1, mass_2=mass_2,
luminosity_distance=luminosity_distance, theta_jn=theta_jn, phase=phase,
a_1=a_1, a_2=a_2, tilt_1=tilt_1, tilt_2=tilt_2, phi_jl=phi_jl,
phi_12=phi_12, lambda_1=0.0, lambda_2=0.0, **waveform_kwargs)
else:
waveform_kwargs["frequencies"] = waveform_kwargs.pop("frequency_bin_edges")
return _base_waveform_frequency_sequence(
frequency_array=frequency_array, mass_1=mass_1, mass_2=mass_2,
luminosity_distance=luminosity_distance, theta_jn=theta_jn, phase=phase,
a_1=a_1, a_2=a_2, tilt_1=tilt_1, tilt_2=tilt_2, phi_jl=phi_jl,
phi_12=phi_12, lambda_1=0.0, lambda_2=0.0, **waveform_kwargs)
def lal_binary_neutron_star_relative_binning(
frequency_array, mass_1, mass_2, luminosity_distance, a_1, tilt_1,
phi_12, a_2, tilt_2, phi_jl, lambda_1, lambda_2, theta_jn, phase,
fiducial, **kwargs):
""" Source model to go with RelativeBinningGravitationalWaveTransient likelihood.
All parameters are the same as in the usual source models, except `fiducial`
fiducial: float
If fiducial=1, waveform evaluated on the full frequency grid is returned.
If fiducial=0, waveform evaluated at waveform_kwargs["frequency_bin_edges"]
is returned.
"""
waveform_kwargs = dict(
waveform_approximant='IMRPhenomPv2_NRTidal', reference_frequency=50.0,
minimum_frequency=20.0, maximum_frequency=frequency_array[-1],
catch_waveform_errors=False, pn_spin_order=-1, pn_tidal_order=-1,
pn_phase_order=-1, pn_amplitude_order=0)
waveform_kwargs.update(kwargs)
if fiducial == 1:
return _base_lal_cbc_fd_waveform(
frequency_array=frequency_array, mass_1=mass_1, mass_2=mass_2,
luminosity_distance=luminosity_distance, theta_jn=theta_jn, phase=phase,
a_1=a_1, a_2=a_2, tilt_1=tilt_1, tilt_2=tilt_2, phi_12=phi_12,
phi_jl=phi_jl, lambda_1=lambda_1, lambda_2=lambda_2, **waveform_kwargs)
else:
waveform_kwargs["frequencies"] = waveform_kwargs.pop("frequency_bin_edges")
return _base_waveform_frequency_sequence(
frequency_array=frequency_array, mass_1=mass_1, mass_2=mass_2,
luminosity_distance=luminosity_distance, theta_jn=theta_jn, phase=phase,
a_1=a_1, a_2=a_2, tilt_1=tilt_1, tilt_2=tilt_2, phi_jl=phi_jl,
phi_12=phi_12, lambda_1=lambda_1, lambda_2=lambda_2, **waveform_kwargs)
def _base_roq_waveform(
frequency_array, mass_1, mass_2, luminosity_distance, a_1, tilt_1,
phi_12, a_2, tilt_2, lambda_1, lambda_2, phi_jl, theta_jn, phase,
......
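# A minimal sketch (illustrative, not from the repository) of how the
# `fiducial` switch behaves when the BBH model above is called directly; the
# parameter values and bin edges are arbitrary.
import numpy as np
import bilby

frequency_array = np.linspace(0, 1024, 4097)  # 4 s segment, 2048 Hz sampling
params = dict(
    mass_1=36.0, mass_2=29.0, luminosity_distance=2000.0,
    a_1=0.0, a_2=0.0, tilt_1=0.0, tilt_2=0.0, phi_12=0.0, phi_jl=0.0,
    theta_jn=0.4, phase=1.3,
)

# fiducial=1: polarizations on the full frequency grid (used for the fiducial waveform).
full = bilby.gw.source.lal_binary_black_hole_relative_binning(
    frequency_array, fiducial=1, **params)

# fiducial=0: polarizations only at the supplied bin edges; the likelihood
# normally provides these via waveform_arguments["frequency_bin_edges"].
edges = np.array([20.0, 32.0, 64.0, 128.0, 512.0])
binned = bilby.gw.source.lal_binary_black_hole_relative_binning(
    frequency_array, fiducial=0, frequency_bin_edges=edges, **params)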
#!/usr/bin/env python
"""
Tutorial to demonstrate running parameter estimation on GW190425
Tutorial to demonstrate running parameter estimation on GW190425.
This example estimates all 17 parameters of the binary neutron star system using
commonly used prior distributions. This will take several hours to run. The
data is obtained using gwpy, see [1] for information on how to access data on
the LIGO Data Grid instead.
commonly used prior distributions. We shall use the relative binning likelihood.
This will take around an hour to run. The data is obtained using gwpy,
see [1] for information on how to access data on the LIGO Data Grid instead.
[1] https://gwpy.github.io/docs/stable/timeseries/remote-access.html
"""
......@@ -16,7 +16,7 @@ logger = bilby.core.utils.logger
outdir = "outdir"
label = "GW190425"
# Note you can get trigger times using the gwosc package, e.g.:
# Note you can get trigger times using the gwosc package, e.g.
# > from gwosc import datasets
# > datasets.event_gps("GW190425")
trigger_time = 1240215503.0
......@@ -29,6 +29,29 @@ post_trigger_duration = 2 # Time between trigger time and end of segment
end_time = trigger_time + post_trigger_duration
start_time = end_time - duration
# The fiducial parameters are taken to be the maximum likelihood sample from the
# posterior sample release of LIGO-Virgo
# https://www.gw-openscience.org/eventapi/html/O3_Discovery_Papers/GW190425/
fiducial_parameters = {
"a_1": 0.018,
"a_2": 0.016,
"chirp_mass": 1.48658,
"dec": 0.438,
"geocent_time": 1240215503.039,
"lambda_1": 446.941,
"lambda_2": 43.386,
"luminosity_distance": 206.751,
"mass_ratio": 0.8955,
"phase": 3.0136566567608765,
"phi_12": 4.319,
"phi_jl": 5.07,
"psi": 0.281,
"ra": 4.2,
"theta_jn": 0.185,
"tilt_1": 0.879,
"tilt_2": 0.514,
}
psd_duration = 1024
psd_start_time = start_time - psd_duration
psd_end_time = start_time
......@@ -64,6 +87,7 @@ ifo_list.plot_data(outdir=outdir, label=label)
# You can overwrite this using the syntax below in the file,
# or choose a fixed value by just providing a float value as the prior.
priors = bilby.gw.prior.BBHPriorDict(filename="GW190425.prior")
priors["fiducial"] = 0
# Add the geocent time prior
priors["geocent_time"] = bilby.core.prior.Uniform(
......@@ -76,7 +100,7 @@ priors["geocent_time"] = bilby.core.prior.Uniform(
# the waveform approximant and reference frequency and a parameter conversion
# which allows us to sample in chirp mass and ratio rather than component mass
waveform_generator = bilby.gw.WaveformGenerator(
frequency_domain_source_model=bilby.gw.source.lal_binary_neutron_star,
frequency_domain_source_model=bilby.gw.source.lal_binary_neutron_star_relative_binning,
parameter_conversion=bilby.gw.conversion.convert_to_lal_binary_neutron_star_parameters,
waveform_arguments={
"waveform_approximant": "IMRPhenomPv2_NRTidalv2",
......@@ -87,13 +111,14 @@ waveform_generator = bilby.gw.WaveformGenerator(
# In this step, we define the likelihood. Here we use the standard likelihood
# function, passing it the data and the waveform generator.
# Note, phase_marginalization is formally invalid with a precessing waveform such as IMRPhenomPv2
likelihood = bilby.gw.likelihood.GravitationalWaveTransient(
likelihood = bilby.gw.likelihood.RelativeBinningGravitationalWaveTransient(
ifo_list,
waveform_generator,
priors=priors,
time_marginalization=True,
phase_marginalization=False,
time_marginalization=False,
phase_marginalization=True,
distance_marginalization=True,
fiducial_parameters=fiducial_parameters,
)
# Finally, we run the sampler. This function takes the likelihood and prior
......@@ -108,6 +133,6 @@ result = bilby.run_sampler(
check_point_delta_t=600,
check_point_plot=True,
npool=1,
conversion_function=bilby.gw.conversion.generate_all_bbh_parameters,
conversion_function=bilby.gw.conversion.generate_all_bns_parameters,
)
result.plot_corner()
#!/usr/bin/env python
"""
Tutorial to demonstrate running parameter estimation on a reduced parameter
space for an injected signal, using the relative binning likelihood.
This example estimates the masses using a uniform prior in both component masses
and distance using a uniform in comoving volume prior on luminosity distance
between luminosity distances of 100Mpc and 5Gpc, the cosmology is Planck15.
"""
import bilby
import numpy as np
from tqdm.auto import trange
# Set the duration and sampling frequency of the data segment that we're
# going to inject the signal into
duration = 4.0
sampling_frequency = 2048.0
minimum_frequency = 20
# Specify the output directory and the name of the simulation.
outdir = "outdir"
label = "relative"
bilby.core.utils.setup_logger(outdir=outdir, label=label)
# Set up a random seed for result reproducibility. This is optional!
np.random.seed(88170235)
# We are going to inject a binary black hole waveform. We first establish a
# dictionary of parameters that includes all of the different waveform
# parameters, including masses of the two black holes (mass_1, mass_2),
# spins of both black holes (a, tilt, phi), etc.
injection_parameters = dict(
mass_1=36.0,
mass_2=29.0,
a_1=0.4,
a_2=0.3,
tilt_1=0.5,
tilt_2=1.0,
phi_12=1.7,
phi_jl=0.3,
luminosity_distance=2000.0,
theta_jn=0.4,
psi=2.659,
phase=1.3,
geocent_time=1126259642.413,
ra=1.375,
dec=-1.2108,
fiducial=1,
)
# Fixed arguments passed into the source model
waveform_arguments = dict(
waveform_approximant="IMRPhenomXP",
reference_frequency=50.0,
minimum_frequency=minimum_frequency,
)
# Create the waveform_generator using a LAL BinaryBlackHole source function
waveform_generator = bilby.gw.WaveformGenerator(
duration=duration,
sampling_frequency=sampling_frequency,
# frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole,
frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole_relative_binning,
parameter_conversion=bilby.gw.conversion.convert_to_lal_binary_black_hole_parameters,
waveform_arguments=waveform_arguments,
)
# Set up interferometers. In this case we'll use two interferometers
# (LIGO-Hanford (H1) and LIGO-Livingston (L1)). These default to their design
# sensitivity.
ifos = bilby.gw.detector.InterferometerList(["H1", "L1"])
ifos.set_strain_data_from_power_spectral_densities(
sampling_frequency=sampling_frequency,
duration=duration,
start_time=injection_parameters["geocent_time"] - 2,
)
ifos.inject_signal(
waveform_generator=waveform_generator, parameters=injection_parameters
)
# Set up a PriorDict, which inherits from dict.
# By default we will sample all terms in the signal models. However, this will
# take a long time for the calculation, so for this example we will set almost
# all of the priors to be equal to their injected values. This implies the
# prior is a delta function at the true, injected value. In reality, the
# sampler implementation is smart enough to not sample any parameter that has
# a delta-function prior.
# The list of fixed parameters below does *not* include mass_1, mass_2, theta_jn
# and luminosity distance, which means those are the parameters that will be
# included in the sampler. If we do nothing, then the default priors get used.
priors = bilby.gw.prior.BBHPriorDict()
for key in [
"a_1",
"a_2",
"tilt_1",
"tilt_2",
"phi_12",
"phi_jl",
"psi",
"ra",
"dec",
"geocent_time",
"phase",
]:
priors[key] = injection_parameters[key]
priors["fiducial"] = 0
# Perform a check that the prior does not imply a signal longer than the data
priors.validate_prior(duration, minimum_frequency)
# Initialise the likelihood by passing in the interferometer data (ifos) and
# the waveform generator
likelihood = bilby.gw.likelihood.RelativeBinningGravitationalWaveTransient(
interferometers=ifos,
waveform_generator=waveform_generator,
priors=priors,
distance_marginalization=True,
fiducial_parameters=injection_parameters,
)
# Run sampler. In this case we're going to use the `dynesty` sampler
result = bilby.run_sampler(
likelihood=likelihood,
priors=priors,
sampler="nestle",
npoints=100,
injection_parameters=injection_parameters,
outdir=outdir,
label=label,
conversion_function=bilby.gw.conversion.generate_all_bbh_parameters,
)
alt_waveform_generator = bilby.gw.WaveformGenerator(
duration=duration,
sampling_frequency=sampling_frequency,
frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole,
# frequency_domain_source_model=lal_binary_black_hole_relative_binning,
parameter_conversion=bilby.gw.conversion.convert_to_lal_binary_black_hole_parameters,
waveform_arguments=waveform_arguments,
)
alt_likelihood = bilby.gw.likelihood.GravitationalWaveTransient(
interferometers=ifos,
waveform_generator=alt_waveform_generator,
)
likelihood.distance_marginalization = False
weights = list()
for ii in trange(len(result.posterior)):
parameters = dict(result.posterior.iloc[ii])
likelihood.parameters.update(parameters)
alt_likelihood.parameters.update(parameters)
weights.append(
alt_likelihood.log_likelihood_ratio() - likelihood.log_likelihood_ratio()
)
weights = np.exp(weights)
print(
f"Reweighting efficiency is {np.mean(weights)**2 / np.mean(weights**2) * 100:.2f}%"
)
print(f"Binned vs unbinned log Bayes factor {np.log(np.mean(weights)):.2f}")
# Make a corner plot.
# result.plot_corner()