
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (63), showing 1289 additions and 316 deletions
......@@ -234,8 +234,8 @@ docs:
stage: docs
image: containers.ligo.org/lscsoft/bilby/v2-bilby-python39
before_script:
- conda install -c conda-forge pandoc -y
- python -m pip install --upgrade ipykernel ipython jupyter nbconvert
- conda install -c conda-forge pandoc ipython jupyter nbconvert
- python -m pip install ipykernel
- python -m ipykernel install
script:
# Make the documentation
......@@ -268,13 +268,16 @@ pages:
.build-container: &build-container
stage: deploy
image: docker:20.10.12
image: docker:20.10.23
needs: ["containers"]
except:
refs:
- schedules
changes:
- containers/*
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
changes:
compare_to: 'refs/heads/main'
paths:
- containers/*
when: manual
- if: $CI_PIPELINE_SOURCE == "schedule"
script:
- cd containers
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
......@@ -304,8 +307,8 @@ pypi-release:
TWINE_USERNAME: $PYPI_USERNAME
TWINE_PASSWORD: $PYPI_PASSWORD
before_script:
- pip install twine setuptools_scm
- python setup.py sdist
- python -m pip install twine setuptools_scm build
- python -m build --sdist --wheel --outdir dist/ .
script:
- twine upload dist/*
only:
......
......@@ -88,3 +88,4 @@ Virginia d'Emilio
Vivien Raymond
Ka-Lok Lo
Isaac Legred
Marc Penuliar
# All notable changes will be documented in this file
## [2.0.0] 2023-02-29
Version 2.0.0 release of Bilby
This major version release has a significant change to the behaviour of the `dynesty` wrapper.
There are also a number of bugfixes and some new features in sampling and GW utilities.
### Added
- Add marginalized time reconstruction for the ROQ likelihood (!1196)
- Generate the `dynesty` posterior using rejection sampling by default (!1203)
- Add optimization for mass ratio prior peaking at equal masses (!1204)
- Add option to sample over a number of precomputed calibration curves (!1215)
### Changes
- Optimize weight calculation for `MultiBandGravitationalWaveTransient` (!1171)
- Add compatibility with pymc 5 (!1191)
- A bug fix of the stored prior when using a marginalized likelihood (!1193)
- Various bug fixes to improve the reliability of the `RelativeBinningGravitationalWaveTransient` (!1198, !1211)
- A hack fix for samplers that are not compatible with `numpy>1.23` (!1194)
- Updates to some reference noise curves (!1206, !1207)
- Fix the broken time+calibration marginalization (!1201)
- Fix a bug when reading GW frame files (!1202)
- Fix the normalization of the whitened strain attribute of `Interferometer` (!1205)
- Optimize ROQ waveform and calibration calls (!1216)
- Add different proposal distribution and MCMC length for `dynesty` (!1187, !1222)
## [1.4.1] 2022-12-07
Version 1.4.1 release of Bilby
......
from distutils.version import LooseVersion
import numpy as np
import pandas as pd
from packaging import version
from ..core.sampler.base_sampler import SamplerError
from ..core.utils import logger
......@@ -512,7 +511,7 @@ class Sample(object):
def calculate_tau(x, autocorr_c=5):
import emcee
if LooseVersion(emcee.__version__) < LooseVersion("3"):
if version.parse(emcee.__version__) < version.parse("3"):
raise SamplerError("bilby-mcmc requires emcee > 3.0 for autocorr analysis")
if np.all(np.diff(x) == 0):
......
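The migration above swaps the deprecated `distutils.version.LooseVersion` for `packaging.version`, which implements PEP 440 ordering (including pre-releases, which `LooseVersion` compared incorrectly). A minimal sketch of the guard used in `calculate_tau`:

```python
from packaging import version


def require_at_least(installed, minimum):
    """Mirror of the `emcee >= 3` guard above, using packaging.version
    instead of the deprecated LooseVersion."""
    # version.parse handles pre-releases correctly: "3.0rc1" < "3.0"
    return version.parse(installed) >= version.parse(minimum)
```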
......@@ -282,8 +282,7 @@ class Bilby_MCMC(MCMCSampler):
return result
def setup_chain_set(self):
if os.path.isfile(self.resume_file) and self.resume is True:
self.read_current_state()
if self.read_current_state() and self.resume is True:
self.ptsampler.pool = self.pool
else:
self.init_ptsampler()
......@@ -366,10 +365,24 @@ class Bilby_MCMC(MCMCSampler):
os.remove(self.resume_file)
def read_current_state(self):
"""Read the existing resume file
Returns
-------
success: boolean
If true, resume file was successfully loaded, otherwise false
"""
if os.path.isfile(self.resume_file) is False:
return False
import dill
with open(self.resume_file, "rb") as file:
self.ptsampler = dill.load(file)
ptsampler = dill.load(file)
if type(ptsampler) != BilbyPTMCMCSampler:
logger.debug("Malformed resume file, ignoring")
return False
self.ptsampler = ptsampler
if self.ptsampler.pt_inputs != self.pt_inputs:
msg = (
f"pt_inputs has changed: {self.ptsampler.pt_inputs} "
......@@ -385,6 +398,7 @@ class Bilby_MCMC(MCMCSampler):
f"with {self.ptsampler.position} steps "
f"setup:\n{self.get_setup_string()}"
)
return True
def write_current_state(self):
import dill
......
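The reworked `read_current_state` returns a boolean instead of assuming the resume file exists and is well formed. The guarded pattern can be sketched in isolation; this uses the standard-library `pickle` as a stand-in for `dill`, and the class name is illustrative only:

```python
import os
import pickle


class Checkpoint:
    """Illustrative stand-in for the sampler state object."""

    def __init__(self, position=0):
        self.position = position


def read_current_state(resume_file, expected_type=Checkpoint):
    """Return the loaded state, or None when the file is absent or holds
    the wrong type (mirroring the False-returning branches above)."""
    if not os.path.isfile(resume_file):
        return None
    with open(resume_file, "rb") as f:
        state = pickle.load(f)
    if not isinstance(state, expected_type):
        # Malformed resume file: ignore it and start fresh
        return None
    return state
```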
......@@ -274,10 +274,10 @@ def reweight(result, label=None, new_likelihood=None, new_prior=None,
old_likelihood=old_likelihood, old_prior=old_prior,
resume_file=resume_file, n_checkpoint=n_checkpoint)
weights = np.exp(ln_weights)
if use_nested_samples:
weights *= result.posterior['weights']
ln_weights += np.log(result.posterior["weights"])
weights = np.exp(ln_weights)
# Overwrite the likelihood and prior evaluations
result.posterior["log_likelihood"] = new_log_likelihood_array
......@@ -679,7 +679,7 @@ class Result(object):
Writes the Result to a file.
Supported formats are: `json`, `hdf5`, `arviz`, `pickle`
Supported formats are: `json`, `hdf5`, `pickle`
Parameters
==========
......
......@@ -259,7 +259,7 @@ def run_sampler(
# Initial save of the sampler in case of failure in samples_to_posterior
if save:
result.save_to_file(extension=save, gzip=gzip)
result.save_to_file(extension=save, gzip=gzip, outdir=outdir)
if None not in [result.injection_parameters, conversion_function]:
result.injection_parameters = conversion_function(result.injection_parameters)
......@@ -275,7 +275,7 @@ def run_sampler(
if save:
# The overwrite here ensures we overwrite the initially stored data
result.save_to_file(overwrite=True, extension=save, gzip=gzip)
result.save_to_file(overwrite=True, extension=save, gzip=gzip, outdir=outdir)
if plot:
result.plot_corner()
......
import datetime
import distutils.dir_util
import os
import shutil
import signal
......@@ -972,8 +971,10 @@ class _TemporaryFileSamplerMixin:
f"Overwriting {self.outputfiles_basename} with {self.temporary_outputfiles_basename}"
)
outputfiles_basename_stripped = self.outputfiles_basename.rstrip("/")
distutils.dir_util.copy_tree(
self.temporary_outputfiles_basename, outputfiles_basename_stripped
shutil.copytree(
self.temporary_outputfiles_basename,
outputfiles_basename_stripped,
dirs_exist_ok=True,
)
def _setup_run_directory(self):
......@@ -988,8 +989,10 @@ class _TemporaryFileSamplerMixin:
self.temporary_outputfiles_basename = temporary_outputfiles_basename
if os.path.exists(self.outputfiles_basename):
distutils.dir_util.copy_tree(
self.outputfiles_basename, self.temporary_outputfiles_basename
shutil.copytree(
self.outputfiles_basename,
self.temporary_outputfiles_basename,
dirs_exist_ok=True,
)
check_directory_exists_and_if_not_mkdir(temporary_outputfiles_basename)
......
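`distutils` is removed in Python 3.12, so `distutils.dir_util.copy_tree` is replaced with `shutil.copytree(..., dirs_exist_ok=True)`, which (since Python 3.8) merges into an existing destination instead of raising `FileExistsError`:

```python
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()  # destination already exists, as in the sampler's case
with open(os.path.join(src, "chain.dat"), "w") as f:
    f.write("samples")

# dirs_exist_ok=True merges src into dst rather than failing on the
# pre-existing directory, matching distutils.dir_util.copy_tree behaviour
shutil.copytree(src, dst, dirs_exist_ok=True)
```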
This diff is collapsed.
import os
import shutil
from collections import namedtuple
from distutils.version import LooseVersion
from shutil import copyfile
import numpy as np
from packaging import version
from pandas import DataFrame
from ..utils import check_directory_exists_and_if_not_mkdir, logger, safe_file_dump
......@@ -79,12 +79,7 @@ class Emcee(MCMCSampler):
burn_in_act=3,
**kwargs,
):
import emcee
if LooseVersion(emcee.__version__) > LooseVersion("2.2.1"):
self.prerelease = True
else:
self.prerelease = False
self._check_version()
super(Emcee, self).__init__(
likelihood=likelihood,
priors=priors,
......@@ -95,7 +90,6 @@ class Emcee(MCMCSampler):
skip_import_verification=skip_import_verification,
**kwargs,
)
self._check_version()
self.resume = resume
self.pos0 = pos0
self.nburn = nburn
......@@ -106,11 +100,10 @@ class Emcee(MCMCSampler):
def _check_version(self):
import emcee
if LooseVersion(emcee.__version__) > LooseVersion("2.2.1"):
self.prerelease = True
else:
if version.parse(emcee.__version__) < version.parse("3"):
self.prerelease = False
return emcee
else:
self.prerelease = True
def _translate_kwargs(self, kwargs):
kwargs = super()._translate_kwargs(kwargs)
......
import numpy as np
from pandas import DataFrame
from .base_sampler import NestedSampler, signal_wrapper
......@@ -71,6 +70,11 @@ class Nestle(NestedSampler):
"""
import nestle
if nestle.__version__ == "0.2.0":
# This is a very ugly hack to support numpy>=1.24
nestle.np.float = float
nestle.np.int = int
out = nestle.sample(
loglikelihood=self.log_likelihood,
prior_transform=self.prior_transform,
......@@ -107,21 +111,8 @@ class Nestle(NestedSampler):
bilby.core.result.Result: Dummy container for sampling results.
"""
import nestle
kwargs = self.kwargs.copy()
kwargs["maxiter"] = 2
nestle.sample(
loglikelihood=self.log_likelihood,
prior_transform=self.prior_transform,
ndim=self.ndim,
**kwargs
)
self.result.samples = np.random.uniform(0, 1, (100, self.ndim))
self.result.log_evidence = np.nan
self.result.log_evidence_err = np.nan
self.calc_likelihood_count()
return self.result
self.kwargs["maxiter"] = 2
return self.run_sampler()
def write_current_state(self):
"""
......
......@@ -408,6 +408,10 @@ class Ptemcee(MCMCSampler):
"""Either initialize the sampler or read in the resume file"""
import ptemcee
if ptemcee.__version__ == "1.0.0":
# This is a very ugly hack to support numpy>=1.24
ptemcee.sampler.np.float = float
if os.path.isfile(self.resume_file) and self.resume is True:
import dill
......
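Both the nestle and ptemcee hunks patch the scalar aliases removed in numpy 1.24 back onto the dependency's stale `np` reference. The trick generalizes to a small shim (a workaround, not an endorsement); it is demonstrated here on a stand-in namespace rather than a real package:

```python
import types


def shim_removed_numpy_aliases(np_like):
    """Re-add np.float / np.int on a legacy package's numpy reference,
    as done above for nestle 0.2.0 and ptemcee 1.0.0."""
    for name, typ in (("float", float), ("int", int)):
        if not hasattr(np_like, name):
            setattr(np_like, name, typ)
    return np_like


# Stand-in for e.g. ptemcee.sampler.np under numpy>=1.24
legacy_np = shim_removed_numpy_aliases(types.SimpleNamespace())
```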
from distutils.version import StrictVersion
import numpy as np
from ...gw.likelihood import BasicGravitationalWaveTransient, GravitationalWaveTransient
......@@ -124,18 +122,23 @@ class Pymc(MCMCSampler):
@staticmethod
def _import_external_sampler():
import pymc
from pymc.aesaraf import floatX
from pymc import floatX
from pymc.step_methods import STEP_METHODS
return pymc, STEP_METHODS, floatX
@staticmethod
def _import_aesara():
import aesara # noqa
import aesara.tensor as tt
from aesara.compile.ops import as_op # noqa
return aesara, tt, as_op
def _import_tensor():
try:
import pytensor as tensor # noqa
import pytensor.tensor as tt
from pytensor.compile.ops import as_op # noqa
except ImportError:
import aesara as tensor # noqa
import aesara.tensor as tt
from aesara.compile.ops import as_op # noqa
return tensor, tt, as_op
def _verify_parameters(self):
"""
......@@ -251,8 +254,8 @@ class Pymc(MCMCSampler):
"""
# check prior is a Sine
pymc, STEP_METHODS, floatX = self._import_external_sampler()
aesara, tt, as_op = self._import_aesara()
pymc, _, floatX = self._import_external_sampler()
_, tt, _ = self._import_tensor()
if isinstance(self.priors[key], Sine):
class PymcSine(pymc.Continuous):
......@@ -296,8 +299,8 @@ class Pymc(MCMCSampler):
"""
# check prior is a Cosine
pymc, STEP_METHODS, floatX = self._import_external_sampler()
aesara, tt, as_op = self._import_aesara()
pymc, _, floatX = self._import_external_sampler()
_, tt, _ = self._import_tensor()
if isinstance(self.priors[key], Cosine):
class PymcCosine(pymc.Continuous):
......@@ -340,8 +343,8 @@ class Pymc(MCMCSampler):
"""
# check prior is a PowerLaw
pymc, STEP_METHODS, floatX = self._import_external_sampler()
aesara, tt, as_op = self._import_aesara()
pymc, _, floatX = self._import_external_sampler()
_, tt, _ = self._import_tensor()
if isinstance(self.priors[key], PowerLaw):
# check power law is set
......@@ -405,8 +408,7 @@ class Pymc(MCMCSampler):
"""
# check prior is a PowerLaw
pymc, STEP_METHODS, floatX = self._import_external_sampler()
aesara, tt, as_op = self._import_aesara()
pymc, _, _ = self._import_external_sampler()
if isinstance(self.priors[key], MultivariateGaussian):
# get names of multivariate Gaussian parameters
mvpars = self.priors[key].mvg.names
......@@ -627,29 +629,25 @@ class Pymc(MCMCSampler):
self.kwargs["step"] = pymc.__dict__[step_methods[curmethod]](
**args
)
else:
# re-add step_kwargs if no step methods are set
if len(step_kwargs) > 0 and StrictVersion(
pymc.__version__
) < StrictVersion("3.7"):
self.kwargs["step_kwargs"] = step_kwargs
# check whether only NUTS step method has been assigned
if np.all([sm.lower() == "nuts" for sm in methodslist]):
# in this case we can let PyMC autoinitialise NUTS, so remove the step methods and re-add nuts_kwargs
self.kwargs["step"] = None
if len(nuts_kwargs) > 0 and StrictVersion(pymc.__version__) < StrictVersion(
"3.7"
):
self.kwargs["nuts_kwargs"] = nuts_kwargs
elif len(nuts_kwargs) > 0:
if len(nuts_kwargs) > 0:
# add NUTS kwargs to standard kwargs
self.kwargs.update(nuts_kwargs)
with self.pymc_model:
# perform the sampling
trace = pymc.sample(**self.kwargs)
# perform the sampling and then convert to inference data
trace = pymc.sample(**self.kwargs, return_inferencedata=False)
ikwargs = dict(
model=self.pymc_model,
save_warmup=not self.kwargs["discard_tuned_samples"],
log_likelihood=True,
)
trace = pymc.to_inference_data(trace, **ikwargs)
posterior = trace.posterior.to_dataframe().reset_index()
self.result.samples = posterior[self.search_parameter_keys]
......@@ -697,10 +695,6 @@ class Pymc(MCMCSampler):
args = {}
return args, nuts_kwargs
def _pymc_version(self):
pymc, _, _ = self._import_external_sampler()
return pymc.__version__
def set_prior(self):
"""
Set the PyMC prior distributions.
......@@ -709,7 +703,7 @@ class Pymc(MCMCSampler):
self.setup_prior_mapping()
self.pymc_priors = dict()
pymc, STEP_METHODS, floatX = self._import_external_sampler()
pymc, _, _ = self._import_external_sampler()
# initialise a dictionary of multivariate Gaussian parameters
self.multivariate_normal_sets = {}
......@@ -804,9 +798,9 @@ class Pymc(MCMCSampler):
Convert any bilby likelihoods to PyMC distributions.
"""
# create aesara Op for the log likelihood if not using a predefined model
pymc, STEP_METHODS, floatX = self._import_external_sampler()
aesara, tt, as_op = self._import_aesara()
# create Op for the log likelihood if not using a predefined model
pymc, _, _ = self._import_external_sampler()
_, tt, _ = self._import_tensor()
class LogLike(tt.Op):
......@@ -838,7 +832,7 @@ class Pymc(MCMCSampler):
(theta,) = inputs
return [g[0] * self.logpgrad(theta)]
# create aesara Op for calculating the gradient of the log likelihood
# create Op for calculating the gradient of the log likelihood
class LogLikeGrad(tt.Op):
itypes = [tt.dvector]
......@@ -993,7 +987,7 @@ class Pymc(MCMCSampler):
f"Unknown key '{key}' when setting GravitationalWaveTransient likelihood"
)
# convert to aesara tensor variable
# convert to tensor variable
values = tt.as_tensor_variable(list(parameters.values()))
pymc.DensityDist(
......
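The `_import_tensor` change prefers `pytensor` (the PyMC 5 backend) and falls back to `aesara` via `try`/`except ImportError`. That "first importable module wins" pattern can be factored into a helper; the backend names in the docstring are from the diff above:

```python
import importlib


def import_first(*names):
    """Return the first module in `names` that imports cleanly, e.g.
    import_first("pytensor", "aesara") for the PyMC tensor backend."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {names!r} could be imported")
```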
......@@ -244,8 +244,19 @@ class UnsortedInterp2d(interp2d):
if isinstance(y, np.ndarray) and y.size == 1:
y = float(y)
if isinstance(x, np.ndarray) and isinstance(y, np.ndarray):
original_shapes = (x.shape, y.shape)
if x.shape != y.shape:
raise ValueError("UnsortedInterp2d received unequally shaped arrays")
while x.ndim > y.ndim:
y = np.expand_dims(y, -1)
while y.ndim > x.ndim:
x = np.expand_dims(x, -1)
try:
x = x * np.ones(y.shape)
y = y * np.ones(x.shape)
except ValueError:
raise ValueError(
f"UnsortedInterp2d received incompatibly shaped arrays: {original_shapes}"
)
elif isinstance(x, np.ndarray) and not isinstance(y, np.ndarray):
y = y * np.ones_like(x)
elif not isinstance(x, np.ndarray) and isinstance(y, np.ndarray):
......
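The `UnsortedInterp2d` change pads the lower-dimensional input with trailing axes and then broadcasts both arrays to a common shape via multiplication by `np.ones`, reporting the original shapes when broadcasting fails. The core of that dance, extracted as a sketch:

```python
import numpy as np


def broadcast_pair(x, y):
    """Expand trailing dims, then broadcast x and y to a common shape,
    mirroring the UnsortedInterp2d input handling above."""
    original_shapes = (x.shape, y.shape)
    while x.ndim > y.ndim:
        y = np.expand_dims(y, -1)
    while y.ndim > x.ndim:
        x = np.expand_dims(x, -1)
    try:
        # Multiplying by ones of the other shape broadcasts both arrays
        x = x * np.ones(y.shape)
        y = y * np.ones(x.shape)
    except ValueError:
        raise ValueError(
            f"received incompatibly shaped arrays: {original_shapes}"
        )
    return x, y
```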
......@@ -99,16 +99,21 @@ def env_package_list(as_dataframe=False):
# if a conda-meta directory exists, this is a conda environment, so
# use conda to print the package list
if (Path(prefix) / "conda-meta").is_dir():
pkgs = json.loads(subprocess.check_output([
"conda",
"list",
"--prefix", prefix,
"--json"
]))
conda_detected = (Path(prefix) / "conda-meta").is_dir()
if conda_detected:
try:
pkgs = json.loads(subprocess.check_output([
"conda",
"list",
"--prefix", prefix,
"--json"
]))
except (FileNotFoundError, subprocess.CalledProcessError):
# When a conda env is in use but conda is unavailable
conda_detected = False
# otherwise try and use Pip
else:
if not conda_detected:
try:
import pip # noqa: F401
except ModuleNotFoundError: # no pip?
......
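The `env_package_list` change makes the conda branch robust to environments where a `conda-meta` directory exists but the `conda` executable is missing or broken, falling through to the pip-based listing. The fallback shape (with an injectable executable name so the failure path can be exercised):

```python
import json
import subprocess


def conda_package_list(prefix, conda_exe="conda"):
    """Return the conda package list for `prefix`, or None when conda is
    unavailable or fails, mirroring the fallback in the diff above."""
    try:
        out = subprocess.check_output(
            [conda_exe, "list", "--prefix", prefix, "--json"]
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None  # caller falls through to the pip-based listing
    return json.loads(out)
```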
import functools
import os
from distutils.spawn import find_executable
import shutil
from .log import logger
......@@ -53,7 +53,7 @@ def latex_plot_format(func):
_old_tex = rcParams["text.usetex"]
_old_serif = rcParams["font.serif"]
_old_family = rcParams["font.family"]
if find_executable("latex"):
if shutil.which("latex"):
rcParams["text.usetex"] = True
else:
rcParams["text.usetex"] = False
......
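`distutils.spawn.find_executable` is deprecated along with the rest of `distutils`; `shutil.which` is the standard-library replacement and works directly as a truth test, as in the LaTeX check above:

```python
import shutil

# shutil.which returns the full path to the executable, or None if it
# is not on PATH, so the result can gate a feature flag directly
use_tex = shutil.which("latex") is not None
```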
......@@ -846,17 +846,17 @@ def _generate_all_cbc_parameters(sample, defaults, base_conversion,
)
if priors is not None:
misnamed_marginalizations = dict(
distance="luminosity_distance",
time="geocent_time",
calibration="recalib_index",
luminosity_distance="distance",
geocent_time="time",
recalib_index="calibration",
)
for par in marginalized_parameters:
name = misnamed_marginalizations.get(par, par)
if (
getattr(likelihood, f'{par}_marginalization', False)
and name in likelihood.priors
getattr(likelihood, f'{name}_marginalization', False)
and par in likelihood.priors
):
priors[name] = likelihood.priors[name]
priors[par] = likelihood.priors[par]
if (
not getattr(likelihood, "reference_frame", "sky") == "sky"
......@@ -1407,7 +1407,7 @@ def compute_snrs(sample, likelihood, npool=1):
if likelihood is not None:
if isinstance(sample, dict):
likelihood.parameters.update(sample)
signal_polarizations = likelihood.waveform_generator.frequency_domain_strain(sample)
signal_polarizations = likelihood.waveform_generator.frequency_domain_strain(likelihood.parameters.copy())
for ifo in likelihood.interferometers:
per_detector_snr = likelihood.calculate_snrs(signal_polarizations, ifo)
sample['{}_matched_filter_snr'.format(ifo.name)] =\
......@@ -1461,7 +1461,7 @@ def _compute_snrs(args):
sample = dict(sample).copy()
likelihood.parameters.update(sample)
signal_polarizations = likelihood.waveform_generator.frequency_domain_strain(
sample
likelihood.parameters.copy()
)
snrs = list()
for ifo in likelihood.interferometers:
......
""" Functions for adding calibration factors to waveform templates.
"""
import copy
import os
import numpy as np
import pandas as pd
from scipy.interpolate import interp1d
from ...core.utils.log import logger
from ...core.prior.dict import PriorDict
from ..prior import CalibrationPriorDict
def read_calibration_file(filename, frequency_array, number_of_response_curves, starting_index=0):
"""
......@@ -30,6 +37,7 @@ def read_calibration_file(filename, frequency_array, number_of_response_curves,
"""
import tables
logger.info(f"Reading calibration draws from {filename}")
calibration_file = tables.open_file(filename, 'r')
calibration_amplitude = \
calibration_file.root.deltaR.draws_amp_rel[starting_index:number_of_response_curves + starting_index]
......@@ -51,7 +59,12 @@ def read_calibration_file(filename, frequency_array, number_of_response_curves,
calibration_frequencies, calibration_draws, kind='cubic',
bounds_error=False, fill_value=1)(frequency_array)
return calibration_draws
try:
parameter_draws = pd.read_hdf(filename, key="CalParams")
except KeyError:
parameter_draws = None
return calibration_draws, parameter_draws
def write_calibration_file(filename, frequency_array, calibration_draws, calibration_parameter_draws=None):
......@@ -73,6 +86,7 @@ def write_calibration_file(filename, frequency_array, calibration_draws, calibra
"""
import tables
logger.info(f"Writing calibration draws to {filename}")
calibration_file = tables.open_file(filename, 'w')
deltaR_group = calibration_file.create_group(calibration_file.root, 'deltaR')
......@@ -210,3 +224,196 @@ class CubicSpline(Recalibrate):
calibration_factor = (1 + delta_amplitude) * (2 + 1j * delta_phase) / (2 - 1j * delta_phase)
return calibration_factor
class Precomputed(Recalibrate):
name = "precomputed"
def __init__(self, label, curves, frequency_array, parameters=None):
"""
A class for accessing an array of precomputed recalibration curves.
Parameters
==========
label: str
The label for the interferometer, e.g., H1. The corresponding
parameter is :code:`recalib_index_{label}`.
curves: array-like
Array with shape (n_curves, n_frequencies) with the recalibration
curves.
frequency_array: array-like
Array of frequencies at which the curves are evaluated.
"""
self.label = label
self.curves = curves
self.frequency_array = frequency_array
self.parameters = parameters
super(Precomputed, self).__init__(prefix=f"recalib_index_{self.label}")
def get_calibration_factor(self, frequency_array, **params):
    idx = params.get(self.prefix, None)
    if idx is None:
        raise KeyError(f"Calibration index for {self.label} not found.")
    if not np.array_equal(frequency_array, self.frequency_array):
        raise ValueError("Frequency grid passed to calibrator doesn't match.")
    return self.curves[int(idx)]
@classmethod
def constant_uncertainty_spline(
cls, amplitude_sigma, phase_sigma, frequency_array, n_nodes, label, n_curves
):
priors = CalibrationPriorDict.constant_uncertainty_spline(
amplitude_sigma=amplitude_sigma,
phase_sigma=phase_sigma,
minimum_frequency=frequency_array[0],
maximum_frequency=frequency_array[-1],
n_nodes=n_nodes,
label=label,
)
parameters = pd.DataFrame(priors.sample(n_curves))
curves = curves_from_spline_and_prior(
label=label,
frequency_array=frequency_array,
n_points=n_nodes,
parameters=parameters,
n_curves=n_curves
)
return cls(
label=label,
curves=np.array(curves),
frequency_array=frequency_array,
parameters=parameters,
)
@classmethod
def from_envelope_file(
cls, envelope, frequency_array, n_nodes, label, n_curves
):
priors = CalibrationPriorDict.from_envelope_file(
envelope_file=envelope,
minimum_frequency=frequency_array[0],
maximum_frequency=frequency_array[-1],
n_nodes=n_nodes,
label=label,
)
parameters = pd.DataFrame(priors.sample(n_curves))
curves = curves_from_spline_and_prior(
label=label,
frequency_array=frequency_array,
n_points=n_nodes,
parameters=parameters,
n_curves=n_curves,
)
return cls(
label=label,
curves=np.array(curves),
frequency_array=frequency_array,
parameters=parameters,
)
@classmethod
def from_calibration_file(cls, label, filename, frequency_array, n_curves, starting_index=0):
curves, parameters = read_calibration_file(
filename=filename,
frequency_array=frequency_array,
number_of_response_curves=n_curves,
starting_index=starting_index,
)
return cls(
label=label,
curves=np.array(curves),
frequency_array=frequency_array,
parameters=parameters,
)
def build_calibration_lookup(
interferometers,
lookup_files=None,
priors=None,
number_of_response_curves=1000,
starting_index=0,
):
if lookup_files is None and priors is None:
raise ValueError(
"One of calibration_lookup_table or priors must be specified for "
"building calibration marginalization lookup table."
)
elif lookup_files is None:
lookup_files = dict()
draws = dict()
parameters = dict()
for interferometer in interferometers:
name = interferometer.name
frequencies = interferometer.frequency_array
frequencies = frequencies[interferometer.frequency_mask]
filename = lookup_files.get(name, f"{name}_calibration_file.h5")
if os.path.exists(filename):
draws[name], parameters[name] = read_calibration_file(
filename,
frequencies,
number_of_response_curves,
starting_index,
)
elif isinstance(interferometer.calibration_model, Precomputed):
model = interferometer.calibration_model
idxs = np.arange(number_of_response_curves, dtype=int) + starting_index
draws[name] = model.curves[idxs]
parameters[name] = pd.DataFrame(model.parameters.iloc[idxs])
parameters[name][model.prefix] = idxs
else:
if priors is None:
raise ValueError(
"Priors must be passed to generate calibration response curves "
"for cubic spline."
)
draws[name], parameters[name] = _generate_calibration_draws(
interferometer=interferometer,
priors=priors,
n_curves=number_of_response_curves,
)
write_calibration_file(filename, frequencies, draws[name], parameters[name])
interferometer.calibration_model = Recalibrate()
return draws, parameters
def _generate_calibration_draws(interferometer, priors, n_curves):
name = interferometer.name
frequencies = interferometer.frequency_array
frequencies = frequencies[interferometer.frequency_mask]
calibration_priors = PriorDict()
for key in priors.keys():
if "recalib" in key and name in key:
calibration_priors[key] = copy.copy(priors[key])
parameters = pd.DataFrame(calibration_priors.sample(n_curves))
draws = np.array(curves_from_spline_and_prior(
parameters=parameters,
label=name,
n_points=interferometer.calibration_model.n_points,
frequency_array=frequencies,
n_curves=n_curves,
))
return draws, parameters
def curves_from_spline_and_prior(parameters, label, n_points, frequency_array, n_curves):
spline = CubicSpline(
prefix=f"recalib_{label}_",
minimum_frequency=frequency_array[0],
maximum_frequency=frequency_array[-1],
n_points=n_points,
)
curves = list()
for ii in range(n_curves):
curves.append(spline.get_calibration_factor(
frequency_array=frequency_array,
**parameters.iloc[ii]
))
return curves
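The `Precomputed` model reduces each calibration evaluation to an integer row lookup into a precomputed `(n_curves, n_frequencies)` array, after checking the frequency grid matches. A stripped-down sketch (class and method names illustrative, not bilby's API):

```python
import numpy as np


class PrecomputedLookup:
    """Toy stand-in for the Precomputed model above: store an
    (n_curves, n_frequencies) array and return rows by integer index."""

    def __init__(self, curves, frequency_array):
        self.curves = np.asarray(curves)
        self.frequency_array = np.asarray(frequency_array)

    def get(self, frequency_array, index):
        # The curves are only valid on the grid they were drawn for
        if not np.array_equal(frequency_array, self.frequency_array):
            raise ValueError("frequency grid mismatch")
        return self.curves[int(index)]
```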
......@@ -2,7 +2,7 @@
# LIGO-T980044-10
# https://dcc.ligo.org/LIGO-P1600143/public
name = 'ET'
power_spectral_density = PowerSpectralDensity(psd_file='ET_B_psd.txt')
power_spectral_density = PowerSpectralDensity(psd_file='ET_D_psd.txt')
length = 10
minimum_frequency = 10
maximum_frequency = 2048
......