Compare revisions
Commits on Source (46)
Showing 573 additions and 98 deletions
[run]
omit =
test/core/example_test.py
test/gw/example_test.py
test/gw/noise_realisation_test.py
test/gw/other_test.py
test/integration/example_test.py
test/integration/noise_realisation_test.py
test/integration/other_test.py
......@@ -91,8 +91,8 @@ python-3.7-samplers:
script:
- python -m pip install .
- pytest test/core/sampler/sampler_run_test.py --durations 10
- pytest test/gw/sample_from_the_prior_test.py
- pytest test/integration/sampler_run_test.py --durations 10
- pytest test/integration/sample_from_the_prior_test.py
# test samplers on python 3.6
python-3.6-samplers:
......@@ -101,7 +101,18 @@ python-3.6-samplers:
script:
- python -m pip install .
- pytest test/core/sampler/sampler_run_test.py
- pytest test/integration/sampler_run_test.py
# Test containers are up to date
containers:
stage: test
image: bilbydev/v2-dockerfile-test-suite-python37
script:
- cd containers
- python write_dockerfiles.py
# Fail if differences exist. If this fails, you may need to run
# write_dockerfiles.py and commit the changes.
- git diff --exit-code
# Tests run at a fixed schedule rather than on push
scheduled-python-3.7:
......@@ -113,9 +124,8 @@ scheduled-python-3.7:
- python -m pip install .
# Run tests which are only done on schedule
- pytest test/core/example_test.py
- pytest test/gw/example_test.py
- pytest test/gw/sample_from_the_prior_test.py
- pytest test/integration/example_test.py
- pytest test/integration/sample_from_the_prior_test.py
plotting:
stage: test
......
# All notable changes will be documented in this file
## [1.0.2] 2020-09-14
Version 1.0.2 release of bilby
### Added
- Template for the docker files (!783)
- New delta_phase parameter (!850)
- Normalization factor to time-domain waveform plot (!867)
- JSON encoding for int and float types (!866)
- Various minor formatting additions (!870)
### Changes
- Switched to the conda-forge version of multinest and ultranest (!783)
- Updated KAGRA (K1) interferometer information (!861)
- Restructured tests to be uniform across the project (!834)
- Fixed the distance and phase marginalization method (!875)
- Fixed roundoff of in-plane spin samples with vectorisation (!864)
- Fixed reference distance and interpolant behavior (!858)
- Fixed the constraint prior sampling method (!863)
- Cleaned up code (!854)
- Various minor bug, test and plotting fixes (!859, !874, !872, !865)
## [1.0.1] 2020-08-29
Version 1.0.1 release of bilby
......
......@@ -337,7 +337,8 @@ class SymmetricLogUniform(Prior):
-------
float: Prior probability of val
"""
return (np.nan_to_num(0.5 / np.abs(val) / np.log(self.maximum / self.minimum)) *
val = np.abs(val)
return (np.nan_to_num(0.5 / val / np.log(self.maximum / self.minimum)) *
self.is_in_prior_range(val))
def ln_prob(self, val):
......@@ -354,6 +355,18 @@ class SymmetricLogUniform(Prior):
"""
return np.nan_to_num(- np.log(2 * np.abs(val)) - np.log(np.log(self.maximum / self.minimum)))
def cdf(self, val):
val = np.atleast_1d(val)
norm = 0.5 / np.log(self.maximum / self.minimum)
cdf = np.zeros((len(val)))
lower_indices = np.where(np.logical_and(-self.maximum <= val, val <= -self.minimum))[0]
upper_indices = np.where(np.logical_and(self.minimum <= val, val <= self.maximum))[0]
cdf[lower_indices] = -norm * np.log(-val[lower_indices] / self.maximum)
cdf[np.where(np.logical_and(-self.minimum < val, val < self.minimum))] = 0.5
cdf[upper_indices] = 0.5 + norm * np.log(val[upper_indices] / self.minimum)
cdf[np.where(self.maximum < val)] = 1
return cdf
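A quick sanity check of the piecewise CDF above (a sketch, assuming a bilby install that includes this change; the prior class is bilby.core.prior.SymmetricLogUniform):

import numpy as np
from bilby.core.prior import SymmetricLogUniform

prior = SymmetricLogUniform(minimum=1, maximum=10)
# The CDF rises from 0 at -maximum to 0.5 at -minimum, stays flat across
# the excluded gap (-minimum, minimum), then rises to 1 at maximum.
print(prior.cdf(np.array([-10.0, -1.0, 0.0, 1.0, 10.0])))
# expected: [0.  0.5 0.5 0.5 1. ]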
class Cosine(Prior):
......
......@@ -444,9 +444,6 @@ class Constraint(Prior):
def prob(self, val):
return (val > self.minimum) & (val < self.maximum)
def ln_prob(self, val):
return np.log((val > self.minimum) & (val < self.maximum))
class PriorException(Exception):
""" General base class for all prior exceptions """
......@@ -365,23 +365,21 @@ class PriorDict(dict):
return sample
else:
needed = np.prod(size)
constraint_keys = list()
for ii, key in enumerate(keys[-1::-1]):
for key in keys.copy():
if isinstance(self[key], Constraint):
constraint_keys.append(-ii - 1)
for ii in constraint_keys[-1::-1]:
del keys[ii]
del keys[keys.index(key)]
all_samples = {key: np.array([]) for key in keys}
_first_key = list(all_samples.keys())[0]
while len(all_samples[_first_key]) < needed:
samples = self.sample_subset(keys=keys, size=needed)
keep = np.array(self.evaluate_constraints(samples), dtype=bool)
for key in samples:
all_samples[key] = np.hstack(
[all_samples[key], samples[key][keep].flatten()])
all_samples = {key: np.reshape(all_samples[key][:needed], size)
for key in all_samples
if not isinstance(self[key], Constraint)}
for key in keys:
all_samples[key] = np.hstack([
all_samples[key], samples[key][keep].flatten()
])
all_samples = {
key: np.reshape(all_samples[key][:needed], size) for key in keys
}
return all_samples
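A minimal usage sketch of the rejection-sampling loop above (the conversion function and parameter names here are illustrative, not part of the change):

import bilby

def convert_x_y_to_sum(parameters):
    parameters["x_plus_y"] = parameters["x"] + parameters["y"]
    return parameters

priors = bilby.core.prior.PriorDict(conversion_function=convert_x_y_to_sum)
priors["x"] = bilby.core.prior.Uniform(0, 1, name="x")
priors["y"] = bilby.core.prior.Uniform(0, 1, name="y")
priors["x_plus_y"] = bilby.core.prior.Constraint(minimum=0, maximum=1, name="x_plus_y")

# sample() redraws until `size` draws satisfy the constraint; the
# Constraint key itself is dropped from the returned dictionary.
samples = priors.sample(1000)
print(sorted(samples.keys()))  # ['x', 'y']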
def normalize_constraint_factor(self, keys):
......
......@@ -705,16 +705,20 @@ class Result(object):
"""
latex_labels = []
for k in keys:
if k in self.search_parameter_keys:
idx = self.search_parameter_keys.index(k)
latex_labels.append(self.parameter_labels_with_unit[idx])
elif k in self.parameter_labels:
latex_labels.append(k)
for key in keys:
if key in self.search_parameter_keys:
idx = self.search_parameter_keys.index(key)
label = self.parameter_labels_with_unit[idx]
elif key in self.parameter_labels:
label = key
else:
label = None
logger.debug(
'key {} not a parameter label or latex label'.format(k))
latex_labels.append(' '.join(k.split('_')))
'key {} not a parameter label or latex label'.format(key)
)
if label is None:
label = key.replace("_", " ")
latex_labels.append(label)
return latex_labels
@property
......@@ -1640,24 +1644,26 @@ class ResultList(list):
The result object with the combined evidences.
"""
self.check_nested_samples()
if result.use_ratio:
log_bayes_factors = np.array([res.log_bayes_factor for res in self])
result.log_bayes_factor = logsumexp(log_bayes_factors, b=1. / len(self))
result.log_evidence = result.log_bayes_factor + result.log_noise_evidence
result_weights = np.exp(log_bayes_factors - np.max(log_bayes_factors))
else:
log_evidences = np.array([res.log_evidence for res in self])
result.log_evidence = logsumexp(log_evidences, b=1. / len(self))
result_weights = np.exp(log_evidences - np.max(log_evidences))
# Combine evidences
log_evidences = np.array([res.log_evidence for res in self])
result.log_evidence = logsumexp(log_evidences, b=1. / len(self))
result.log_bayes_factor = result.log_evidence - result.log_noise_evidence
# Propagate uncertainty in the combined evidence
log_errs = [res.log_evidence_err for res in self if np.isfinite(res.log_evidence_err)]
if len(log_errs) > 0:
result.log_evidence_err = 0.5 * logsumexp(2 * np.array(log_errs), b=1. / len(self))
else:
result.log_evidence_err = np.nan
# Combine posteriors with a weighting
result_weights = np.exp(log_evidences - np.max(log_evidences))
posteriors = list()
for res, frac in zip(self, result_weights):
selected_samples = (np.random.uniform(size=len(res.posterior)) < frac)
posteriors.append(res.posterior[selected_samples])
# remove original nested_samples
result.nested_samples = None
result.sampler_kwargs = None
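The combination rule used here averages the per-run evidences; a small self-contained check (sketch):

import numpy as np
from scipy.special import logsumexp

log_evidences = np.array([-10.2, -9.8, -10.0])
# logsumexp(..., b=1/N) computes ln((1/N) * sum_i Z_i), the arithmetic
# mean of the evidences evaluated in log space.
combined = logsumexp(log_evidences, b=1.0 / len(log_evidences))
assert np.isclose(combined, np.log(np.mean(np.exp(log_evidences))))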
......
......@@ -3,7 +3,8 @@ import sys
import datetime
from collections import OrderedDict
from ..utils import command_line_args, logger
import bilby
from ..utils import command_line_args, logger, loaded_modules_dict
from ..prior import PriorDict, DeltaFunction
from .base_sampler import Sampler, SamplingMarginalisedParameterError
......@@ -20,6 +21,7 @@ from .pymc3 import Pymc3
from .pymultinest import Pymultinest
from .ultranest import Ultranest
from .fake_sampler import FakeSampler
from .dnest4 import DNest4
from . import proposal
IMPLEMENTED_SAMPLERS = {
......@@ -27,7 +29,7 @@ IMPLEMENTED_SAMPLERS = {
'emcee': Emcee, 'kombine': Kombine, 'nestle': Nestle, 'ptemcee': Ptemcee,
'ptmcmcsampler': PTMCMCSampler, 'pymc3': Pymc3, 'pymultinest': Pymultinest,
'pypolychord': PyPolyChord, 'ultranest': Ultranest,
'fake_sampler': FakeSampler}
'fake_sampler': FakeSampler, 'dnest4': DNest4}
if command_line_args.sampler_help:
sampler = command_line_args.sampler_help
......@@ -107,7 +109,7 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
Returns
-------
result
result: bilby.core.result.Result
An object containing the results
"""
......@@ -140,6 +142,7 @@ def run_sampler(likelihood, priors=None, label='label', outdir='outdir',
if meta_data is None:
meta_data = dict()
meta_data['likelihood'] = likelihood.meta_data
meta_data["loaded_modules"] = loaded_modules_dict()
if command_line_args.bilby_zero_likelihood_mode:
from bilby.core.likelihood import ZeroLikelihood
......
from __future__ import absolute_import
import os
import tempfile
import shutil
import distutils.dir_util
import signal
import time
import datetime
import numpy as np
import pandas as pd
from ..utils import check_directory_exists_and_if_not_mkdir, logger
from .base_sampler import NestedSampler
class _DNest4Model(object):
def __init__(self, log_likelihood_func, from_prior_func, widths, centers, highs, lows):
"""Initialize the DNest4 model.
Args:
log_likelihood_func (function): The loglikelihood function to use
during the Nested Sampling run.
from_prior_func (function): The function to use when randomly
selecting parameter vectors from the prior space.
widths (numpy.array): The approximate widths of the prior
distrbutions.
centers (numpy.array): The approximate center points of the prior
distributions.
"""
self._log_likelihood = log_likelihood_func
self._from_prior = from_prior_func
self._widths = widths
self._centers = centers
self._highs = highs
self._lows = lows
self._n_dim = len(widths)
return
def log_likelihood(self, coords):
"""The model's log_likelihood function"""
return self._log_likelihood(coords)
def from_prior(self):
"""The model's function to select random points from the prior space."""
return self._from_prior()
def perturb(self, coords):
"""The perturb function to perform Monte Carlo trial moves."""
idx = np.random.randint(self._n_dim)
coords[idx] += (self._widths[idx] * (np.random.uniform(size=1) - 0.5))
cw = self._widths[idx]
cc = self._centers[idx]
coords[idx] = self.wrap(coords[idx], (cc - 0.5 * cw), cc + 0.5 * cw)
return 0.0
def wrap(self, x, a, b):
assert b > a
return (x - a) % (b - a) + a
class DNest4(NestedSampler):
"""
Bilby wrapper of DNest4
Parameters
----------
TBD
Other Parameters
----------------
num_particles: int
The number of points to use in the Nested Sampling active population.
max_num_levels: int
The max number of diffusive likelihood levels that DNest4 should initialize
during the Diffusive Nested Sampling run.
backend: str
The python DNest4 backend for storing the output.
Options are: 'memory' and 'csv'. If 'memory', the
DNest4 outputs are stored in memory during the run. If 'csv', the
DNest4 outputs are written to files in CSV format during
the run.
num_steps: int
The number of MCMC iterations to run
new_level_interval: int
The number of moves to run before creating a new diffusive likelihood level
lam: float
Set the backtracking scale length
beta: float
Set the strength of the effect that forces the histogram to equal bin counts
seed: int
Set the seed for the C++ random number generator
verbose: bool
If True, prints information during run
TODO: add equivalent args for num_particles (nlive, etc.);
add sampling time functions.
"""
default_kwargs = dict(
max_num_levels=20,
num_steps=500, # Number of iterations
new_level_interval=10000,
num_per_step=10000,
thread_steps=1,
num_particles=1000,
lam=10.0,
beta=100,
seed=None,
verbose=True,
outputfiles_basename=None,
# backend_callback=None, # for checkpointing in dnest5
backend='memory',  # NB: the csv backend is currently buggy
# could change max_num_levels based on snr
)
def __init__(
self,
likelihood,
priors,
outdir="outdir",
label="label",
use_ratio=False,
plot=False,
exit_code=77,
skip_import_verification=False,
temporary_directory=True,
resume=True,
**kwargs
):
super(DNest4, self).__init__(
likelihood=likelihood,
priors=priors,
outdir=outdir,
label=label,
use_ratio=use_ratio,
plot=plot,
skip_import_verification=skip_import_verification,
exit_code=exit_code,
**kwargs
)
self.num_particles = self.kwargs["num_particles"]
self.max_num_levels = self.kwargs["max_num_levels"]
self._verbose = self.kwargs["verbose"]
self._backend = self.kwargs["backend"]
self.use_temporary_directory = temporary_directory
signal.signal(signal.SIGTERM, self.write_current_state_and_exit)
signal.signal(signal.SIGINT, self.write_current_state_and_exit)
signal.signal(signal.SIGALRM, self.write_current_state_and_exit)
# Get the estimates of the prior distributions' widths and centers.
widths = []
centers = []
highs = []
lows = []
samples = self.priors.sample(size=10000)
for key in self.search_parameter_keys:
pts = samples[key]
low = pts.min()
high = pts.max()
width = high - low
center = (high + low) / 2.0
widths.append(width)
centers.append(center)
highs.append(high)
lows.append(low)
self._widths = np.array(widths)
self._centers = np.array(centers)
self._highs = np.array(highs)
self._lows = np.array(lows)
self._from_prior = self.get_random_draw_from_prior
self._dnest4_model = _DNest4Model(self.log_likelihood, self._from_prior, self._widths,
self._centers, self._highs, self._lows)
def _set_backend(self):
import dnest4
if self._backend == 'csv':
# CSVBackend writes the output data to disk
backend = dnest4.backends.CSVBackend("{}/dnest4{}/".format(self.outdir, self.label), sep=" ")
else:
# MemoryBackend stores the output data in memory during the run
backend = dnest4.backends.MemoryBackend()
return backend
def _set_dnest4_kwargs(self):
dnest4_keys = ["num_steps", "new_level_interval", "lam", "beta", "seed"]
self.dnest4_kwargs = {key: self.kwargs[key] for key in dnest4_keys}
return self.dnest4_kwargs
def run_sampler(self):
import dnest4
self._set_dnest4_kwargs()
backend = self._set_backend()
self._verify_kwargs_against_default_kwargs()
self._setup_run_directory()
self._check_and_load_sampling_time_file()
self.start_time = time.time()
self.sampler = dnest4.DNest4Sampler(self._dnest4_model, backend=backend)
out = self.sampler.sample(self.max_num_levels,
num_particles=self.num_particles,
**self.dnest4_kwargs)
for i, sample in enumerate(out):
if self._verbose and ((i + 1) % 100 == 0):
stats = self.sampler.postprocess()
logger.info("Iteration: {0} log(Z): {1}".format(i + 1, stats['log_Z']))
self._calculate_and_save_sampling_time()
self._clean_up_run_directory()
stats = self.sampler.postprocess(resample=1)
self.result.log_evidence = stats['log_Z']
self._information = stats['H']
self.result.log_evidence_err = np.sqrt(self._information / self.num_particles)
if self._backend == 'memory':
self._last_live_sample_info = pd.DataFrame(self.sampler.backend.sample_info[-1])
self.result.log_likelihood_evaluations = self._last_live_sample_info['log_likelihood']
self.result.samples = np.array(self.sampler.backend.posterior_samples)
print("here")
print(self.sampler.backend.posterior_samples)
print(self.result.samples)
else:
sample_info_path = os.path.join(self.kwargs["outputfiles_basename"], 'sample_info.txt')
sample_info = np.genfromtxt(sample_info_path, comments='#', names=True)
self.result.log_likelihood_evaluations = sample_info['log_likelihood']
self.result.samples = np.array(self.sampler.backend.posterior_samples)
self.result.sampler_output = out
self.result.outputfiles_basename = self.outputfiles_basename
self.result.sampling_time = datetime.timedelta(seconds=self.total_sampling_time)
self.calc_likelihood_count()
return self.result
def _translate_kwargs(self, kwargs):
if 'num_steps' not in kwargs:
for equiv in self.walks_equiv_kwargs:
if equiv in kwargs:
kwargs['num_steps'] = kwargs.pop(equiv)
def _verify_kwargs_against_default_kwargs(self):
self.outputfiles_basename = self.kwargs.pop("outputfiles_basename", None)
# if self.kwargs['backend_callback'] is None:
# self.kwargs['backend_callback'] = self._backend_callback
NestedSampler._verify_kwargs_against_default_kwargs(self)
# def _backend_callback(self, *args, **kwargs):
# if self.use_temporary_directory:
# self._copy_temporary_directory_contents_to_proper_path()
# self._calculate_and_save_sampling_time()
def _setup_run_directory(self):
"""
If using a temporary directory, the output directory is moved to the
temporary directory.
"""
if self.use_temporary_directory:
temporary_outputfiles_basename = tempfile.TemporaryDirectory().name
self.temporary_outputfiles_basename = temporary_outputfiles_basename
if os.path.exists(self.outputfiles_basename):
distutils.dir_util.copy_tree(self.outputfiles_basename, self.temporary_outputfiles_basename)
check_directory_exists_and_if_not_mkdir(temporary_outputfiles_basename)
self.kwargs["outputfiles_basename"] = self.temporary_outputfiles_basename
logger.info("Using temporary file {}".format(temporary_outputfiles_basename))
else:
check_directory_exists_and_if_not_mkdir(self.outputfiles_basename)
self.kwargs["outputfiles_basename"] = self.outpuxtfiles_basename
logger.info("Using output file {}".format(self.outputfiles_basename))
def _check_and_load_sampling_time_file(self):
self.time_file_path = self.kwargs["outputfiles_basename"] + '/sampling_time.dat'
if os.path.exists(self.time_file_path):
with open(self.time_file_path, 'r') as time_file:
self.total_sampling_time = float(time_file.readline())
else:
self.total_sampling_time = 0
def _calculate_and_save_sampling_time(self):
current_time = time.time()
new_sampling_time = current_time - self.start_time
self.total_sampling_time += new_sampling_time
with open(self.time_file_path, 'w') as time_file:
time_file.write(str(self.total_sampling_time))
self.start_time = current_time
def _clean_up_run_directory(self):
if self.use_temporary_directory:
self._move_temporary_directory_to_proper_path()
self.kwargs["outputfiles_basename"] = self.outputfiles_basename
@property
def outputfiles_basename(self):
return self._outputfiles_basename
@outputfiles_basename.setter
def outputfiles_basename(self, outputfiles_basename):
if outputfiles_basename is None:
outputfiles_basename = "{}/dnest4{}/".format(self.outdir, self.label)
if not outputfiles_basename.endswith("/"):
outputfiles_basename += "/"
check_directory_exists_and_if_not_mkdir(self.outdir)
self._outputfiles_basename = outputfiles_basename
@property
def temporary_outputfiles_basename(self):
return self._temporary_outputfiles_basename
@temporary_outputfiles_basename.setter
def temporary_outputfiles_basename(self, temporary_outputfiles_basename):
if not temporary_outputfiles_basename.endswith("/"):
temporary_outputfiles_basename = "{}/".format(
temporary_outputfiles_basename
)
self._temporary_outputfiles_basename = temporary_outputfiles_basename
if os.path.exists(self.outputfiles_basename):
shutil.copytree(
self.outputfiles_basename, self.temporary_outputfiles_basename
)
def write_current_state_and_exit(self, signum=None, frame=None):
""" Write current state and exit on exit_code """
logger.info(
"Run interrupted by signal {}: checkpoint and exit on {}".format(
signum, self.exit_code
)
)
self._calculate_and_save_sampling_time()
if self.use_temporary_directory:
self._move_temporary_directory_to_proper_path()
os._exit(self.exit_code)
def _move_temporary_directory_to_proper_path(self):
"""
Move the temporary directory back to the proper path.
Anything in the proper path at this point is removed, including links
"""
self._copy_temporary_directory_contents_to_proper_path()
shutil.rmtree(self.temporary_outputfiles_basename)
def _copy_temporary_directory_contents_to_proper_path(self):
"""
Copy the temporary directory back to the proper path.
Do not delete the temporary directory.
"""
logger.info(
"Overwriting {} with {}".format(
self.outputfiles_basename, self.temporary_outputfiles_basename
)
)
if self.outputfiles_basename.endswith('/'):
outputfiles_basename_stripped = self.outputfiles_basename[:-1]
else:
outputfiles_basename_stripped = self.outputfiles_basename
distutils.dir_util.copy_tree(self.temporary_outputfiles_basename, outputfiles_basename_stripped)
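A hedged end-to-end sketch of calling this wrapper through run_sampler (assumes the dnest4 python package is installed; the linear model and the kwargs values are illustrative):

import numpy as np
import bilby

def model(x, m, c):
    return m * x + c

x = np.linspace(0, 1, 100)
y = model(x, 2.0, 1.0) + np.random.normal(0, 0.1, len(x))

likelihood = bilby.core.likelihood.GaussianLikelihood(x, y, model, sigma=0.1)
priors = dict(
    m=bilby.core.prior.Uniform(0, 5, name="m"),
    c=bilby.core.prior.Uniform(0, 5, name="c"),
)
# 'dnest4' is registered in IMPLEMENTED_SAMPLERS by this merge request
result = bilby.run_sampler(
    likelihood=likelihood, priors=priors, sampler="dnest4",
    num_steps=1000, num_particles=500, backend="memory",
    outdir="outdir", label="dnest4_demo",
)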
from __future__ import absolute_import
import datetime
import dill
import os
......@@ -326,10 +324,6 @@ class Dynesty(NestedSampler):
def run_sampler(self):
import dynesty
logger.info("Using dynesty version {}".format(dynesty.__version__))
if self.kwargs['live_points'] is None:
self.kwargs['live_points'] = (
self.get_initial_points_from_prior(
self.kwargs['nlive']))
if self.kwargs.get("sample", "rwalk") == "rwalk":
logger.info(
......@@ -351,10 +345,21 @@ class Dynesty(NestedSampler):
self._setup_pool()
self.sampler = dynesty.NestedSampler(
loglikelihood=_log_likelihood_wrapper,
prior_transform=_prior_transform_wrapper,
ndim=self.ndim, **self.sampler_init_kwargs)
if self.resume:
self.resume = self.read_saved_state(continuing=True)
if self.resume:
logger.info('Resume file successfully loaded.')
else:
if self.kwargs['live_points'] is None:
self.kwargs['live_points'] = (
self.get_initial_points_from_prior(self.kwargs['nlive'])
)
self.sampler = dynesty.NestedSampler(
loglikelihood=_log_likelihood_wrapper,
prior_transform=_prior_transform_wrapper,
ndim=self.ndim, **self.sampler_init_kwargs
)
if self.check_point:
out = self._run_external_sampler_with_checkpointing()
......@@ -424,10 +429,6 @@ class Dynesty(NestedSampler):
def _run_external_sampler_with_checkpointing(self):
logger.debug("Running sampler with checkpointing")
if self.resume:
resume_file_loaded = self.read_saved_state(continuing=True)
if resume_file_loaded:
logger.info('Resume file successfully loaded.')
old_ncall = self.sampler.ncall
sampler_kwargs = self.sampler_function_kwargs.copy()
......
......@@ -4,6 +4,7 @@ from distutils.spawn import find_executable
import logging
import os
import shutil
import sys
from math import fmod
import argparse
import inspect
......@@ -977,6 +978,10 @@ class BilbyJsonEncoder(json.JSONEncoder):
def default(self, obj):
from .prior import MultivariateGaussianDist, Prior, PriorDict
from ..gw.prior import HealPixMapPriorDist
if isinstance(obj, np.integer):
return int(obj)
if isinstance(obj, np.floating):
return float(obj)
if isinstance(obj, PriorDict):
return {'__prior_dict__': True, 'content': obj._get_json_dict()}
if isinstance(obj, (MultivariateGaussianDist, HealPixMapPriorDist, Prior)):
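A usage sketch of the new numpy handling (assumes BilbyJsonEncoder is importable from bilby.core.utils):

import json
import numpy as np
from bilby.core.utils import BilbyJsonEncoder

# numpy scalars now serialise like native int/float
print(json.dumps({"n": np.int64(3), "x": np.float64(1.5)}, cls=BilbyJsonEncoder))
# -> {"n": 3, "x": 1.5}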
......@@ -1265,6 +1270,15 @@ def get_function_path(func):
return func
def loaded_modules_dict():
module_names = sys.modules.keys()
vdict = {}
for key in module_names:
if "." not in key:
vdict[key] = str(getattr(sys.modules[key], "__version__", "N/A"))
return vdict
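For example (a sketch; exact version strings depend on the environment):

from bilby.core.utils import loaded_modules_dict

versions = loaded_modules_dict()
# Only top-level modules are recorded; modules without __version__ map to "N/A"
print(versions.get("numpy"))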
class IllegalDurationAndSamplingFrequencyException(Exception):
pass
......
......@@ -248,6 +248,14 @@ def convert_to_lal_binary_black_hole_parameters(parameters):
converted_parameters[angle] =\
np.arccos(converted_parameters[cos_angle])
if "delta_phase" in original_keys:
converted_parameters["phase"] = np.mod(
converted_parameters["delta_phase"]
- np.sign(np.cos(converted_parameters["theta_jn"]))
* converted_parameters["psi"],
2 * np.pi
)
added_keys = [key for key in converted_parameters.keys()
if key not in original_keys]
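A quick numeric check of the inversion above (a sketch; it assumes the convention delta_phase = phase + sign(cos(theta_jn)) * psi implied by this hunk):

import numpy as np

theta_jn, psi, delta_phase = 0.5, 1.2, 4.0
phase = np.mod(
    delta_phase - np.sign(np.cos(theta_jn)) * psi, 2 * np.pi)
# Re-applying the forward convention recovers delta_phase (mod 2*pi)
assert np.isclose(
    np.mod(phase + np.sign(np.cos(theta_jn)) * psi, 2 * np.pi),
    np.mod(delta_phase, 2 * np.pi))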
......@@ -1009,18 +1017,19 @@ def generate_component_spins(sample):
['theta_jn', 'phi_jl', 'tilt_1', 'tilt_2', 'phi_12', 'a_1', 'a_2',
'mass_1', 'mass_2', 'reference_frequency', 'phase']
if all(key in output_sample.keys() for key in spin_conversion_parameters):
output_sample['iota'], output_sample['spin_1x'],\
output_sample['spin_1y'], output_sample['spin_1z'], \
output_sample['spin_2x'], output_sample['spin_2y'],\
output_sample['spin_2z'] =\
transform_precessing_spins(
output_sample['theta_jn'], output_sample['phi_jl'],
output_sample['tilt_1'], output_sample['tilt_2'],
output_sample['phi_12'], output_sample['a_1'],
output_sample['a_2'],
output_sample['mass_1'] * solar_mass,
output_sample['mass_2'] * solar_mass,
output_sample['reference_frequency'], output_sample['phase'])
(
output_sample['iota'], output_sample['spin_1x'],
output_sample['spin_1y'], output_sample['spin_1z'],
output_sample['spin_2x'], output_sample['spin_2y'],
output_sample['spin_2z']
) = np.vectorize(bilby_to_lalsimulation_spins)(
output_sample['theta_jn'], output_sample['phi_jl'],
output_sample['tilt_1'], output_sample['tilt_2'],
output_sample['phi_12'], output_sample['a_1'], output_sample['a_2'],
output_sample['mass_1'] * solar_mass,
output_sample['mass_2'] * solar_mass,
output_sample['reference_frequency'], output_sample['phase']
)
output_sample['phi_1'] =\
np.fmod(2 * np.pi + np.arctan2(
......
# KAGRA at design sensitivity.
# https://arxiv.org/pdf/1102.5421.pdf
# WARNING: I think there's a typo in that reference for the orientation.
# https://dcc.ligo.org/LIGO-P1200087-v42/public
# KAGRA detector location: https://gwdoc.icrr.u-tokyo.ac.jp/cgi-bin/DocDB/ShowDocument?docid=3824
# The detector's physical location needs to be converted into detector constants according to the BILBY convention.
# Check against the LAL detector-constant values (careful about the convention): https://lscsoft.docs.ligo.org/lalsuite/lal/group___detector_constants.html
name = 'K1'
power_spectral_density = PowerSpectralDensity(psd_file='KAGRA_design_psd.txt')
length = 3
minimum_frequency = 20
maximum_frequency = 2048
latitude = 36 + 15. / 60 + 00. / 3600
longitude = 137 + 10. / 60 + 48. / 3600
elevation = 414.0
xarm_azimuth = 25.0
yarm_azimuth = 115.0
latitude = 36 + 24 / 60 + 42.69722 / 3600
longitude = 137 + 18 / 60 + 21.44171 / 3600
elevation = 414.181
xarm_azimuth = 90 - 60.39623489157727
yarm_azimuth = 90 + 29.60357629670688
......@@ -38,6 +38,9 @@ class Interferometer(object):
vertex = PropertyAccessor('geometry', 'vertex')
detector_tensor = PropertyAccessor('geometry', 'detector_tensor')
duration = PropertyAccessor('strain_data', 'duration')
sampling_frequency = PropertyAccessor('strain_data', 'sampling_frequency')
start_time = PropertyAccessor('strain_data', 'start_time')
frequency_array = PropertyAccessor('strain_data', 'frequency_array')
time_array = PropertyAccessor('strain_data', 'time_array')
minimum_frequency = PropertyAccessor('strain_data', 'minimum_frequency')
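With these accessors, strain-data properties can be read directly off the interferometer; a minimal sketch:

import bilby

ifo = bilby.gw.detector.get_empty_interferometer("H1")
ifo.set_strain_data_from_zero_noise(sampling_frequency=2048, duration=4)
# Each of these proxies through to ifo.strain_data
print(ifo.sampling_frequency, ifo.start_time, len(ifo.frequency_array))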
......
......@@ -398,7 +398,7 @@ def load_interferometer(filename):
with open(filename, 'r') as parameter_file:
lines = parameter_file.readlines()
for line in lines:
if line[0] == '#':
if line[0] == '#' or line[0] == '\n':
continue
split_line = line.split('=')
key = split_line[0].strip()
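For reference, a minimal file of the kind this parser reads, with the comment and blank lines the loop now skips (the name and values are illustrative, following the K1 file above):

# example interferometer file
name = 'X1'
length = 4

minimum_frequency = 20
maximum_frequency = 2048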
......
......@@ -29,7 +29,7 @@ path_to_eos_tables = os.path.join(os.path.dirname(__file__), 'eos_tables')
list_of_eos_tables = os.listdir(path_to_eos_tables)
valid_eos_files = [i for i in list_of_eos_tables if 'LAL' in i]
valid_eos_file_paths = [os.path.join(path_to_eos_tables, filename) for filename in valid_eos_files]
valid_eos_names = [i.split('_')[-1].strip('.dat') for i in valid_eos_files]
valid_eos_names = [i.split('_', maxsplit=1)[-1].strip('.dat') for i in valid_eos_files]
valid_eos_dict = dict(zip(valid_eos_names, valid_eos_file_paths))
......
......@@ -167,6 +167,9 @@ class GravitationalWaveTransient(Likelihood):
self.distance_prior_array = np.array(
[self.priors['luminosity_distance'].prob(distance)
for distance in self._distance_array])
if self.phase_marginalization:
max_bound = np.ceil(10 + np.log10(self._dist_multiplier))
self._setup_phase_marginalization(max_bound=max_bound)
self._setup_distance_marginalization(
distance_marginalization_lookup_table)
for key in ['redshift', 'comoving_distance']:
......@@ -597,8 +600,13 @@ class GravitationalWaveTransient(Likelihood):
@property
def _ref_dist(self):
""" Smallest distance contained in priors """
return self._distance_array[0]
""" Median distance in priors """
return self.priors['luminosity_distance'].rescale(0.5)
@property
def _dist_multiplier(self):
""" Maximum value of ref_dist/dist_array """
return self._ref_dist / self._distance_array[0]
@property
def _optimal_snr_squared_ref_array(self):
......@@ -632,7 +640,7 @@ class GravitationalWaveTransient(Likelihood):
self._create_lookup_table()
self._interp_dist_margd_loglikelihood = UnsortedInterp2d(
self._d_inner_h_ref_array, self._optimal_snr_squared_ref_array,
self._dist_margd_loglikelihood_array, kind='cubic')
self._dist_margd_loglikelihood_array, kind='cubic', fill_value=-np.inf)
@property
def cached_lookup_table_filename(self):
......@@ -714,10 +722,10 @@ class GravitationalWaveTransient(Likelihood):
self._dist_margd_loglikelihood_array -= log_norm
self.cache_lookup_table()
def _setup_phase_marginalization(self):
def _setup_phase_marginalization(self, min_bound=-5, max_bound=10):
self._bessel_function_interped = interp1d(
np.logspace(-5, 10, int(1e6)), np.logspace(-5, 10, int(1e6)) +
np.log([i0e(snr) for snr in np.logspace(-5, 10, int(1e6))]),
np.logspace(min_bound, max_bound, int(1e6)), np.logspace(min_bound, max_bound, int(1e6)) +
np.log([i0e(snr) for snr in np.logspace(min_bound, max_bound, int(1e6))]),
bounds_error=False, fill_value=(0, np.nan))
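The lookup interpolates ln I0 of the inner product via the exponentially scaled Bessel function for numerical stability; a short sketch of the identity being tabulated:

import numpy as np
from scipy.special import i0e

# i0e(x) = exp(-x) * I0(x), so ln I0(x) = x + ln i0e(x); I0 itself would
# overflow for the large arguments at the top of the logspace grid
x = np.logspace(-5, 10, 5)
ln_i0 = x + np.log(i0e(x))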
def _setup_time_marginalization(self):
......@@ -1363,7 +1371,7 @@ class ROQGravitationalWaveTransient(GravitationalWaveTransient):
def get_binary_black_hole_likelihood(interferometers):
""" A rapper to quickly set up a likelihood for BBH parameter estimation
""" A wrapper to quickly set up a likelihood for BBH parameter estimation
Parameters
----------
......
......@@ -436,7 +436,8 @@ class CompactBinaryCoalescenceResult(CoreResult):
go.Scatter(
x=plot_times,
y=infft(
interferometer.whitened_frequency_domain_strain,
interferometer.whitened_frequency_domain_strain *
np.sqrt(2. / interferometer.sampling_frequency),
sampling_frequency=interferometer.strain_data.sampling_frequency)[time_idxs],
fill=None,
mode='lines', line_color=DATA_COLOR,
......@@ -461,7 +462,8 @@ class CompactBinaryCoalescenceResult(CoreResult):
color=DATA_COLOR, label='ASD')
axs[1].plot(
plot_times, infft(
interferometer.whitened_frequency_domain_strain,
interferometer.whitened_frequency_domain_strain *
np.sqrt(2. / interferometer.sampling_frequency),
sampling_frequency=interferometer.strain_data.sampling_frequency)[time_idxs],
color=DATA_COLOR, alpha=0.3)
logger.debug('Plotted interferometer data.')
......@@ -474,7 +476,8 @@ class CompactBinaryCoalescenceResult(CoreResult):
fd_waveform = interferometer.get_detector_response(wf_pols, params)
fd_waveforms.append(fd_waveform[frequency_idxs])
td_waveform = infft(
fd_waveform / interferometer.amplitude_spectral_density_array,
fd_waveform * np.sqrt(2. / interferometer.sampling_frequency) /
interferometer.amplitude_spectral_density_array,
self.sampling_frequency)[time_idxs]
td_waveforms.append(td_waveform)
fd_waveforms = asd_from_freq_series(
......@@ -601,7 +604,7 @@ class CompactBinaryCoalescenceResult(CoreResult):
hf_inj_det = interferometer.get_detector_response(
hf_inj, self.injection_parameters)
ht_inj_det = infft(
hf_inj_det /
hf_inj_det * np.sqrt(2. / interferometer.sampling_frequency) /
interferometer.amplitude_spectral_density_array,
self.sampling_frequency)[time_idxs]
if format == "html":
......
......@@ -279,6 +279,16 @@ def optimal_snr_squared(signal, power_spectral_density, duration):
return noise_weighted_inner_product(signal, signal, power_spectral_density, duration)
def overlap(signal_a, signal_b, power_spectral_density=None, delta_frequency=None,
lower_cut_off=None, upper_cut_off=None, norm_a=None, norm_b=None):
low_index = int(lower_cut_off / delta_frequency)
up_index = int(upper_cut_off / delta_frequency)
integrand = np.conj(signal_a) * signal_b
integrand = integrand[low_index:up_index] / power_spectral_density[low_index:up_index]
integral = (4 * delta_frequency * integrand) / norm_a / norm_b
return sum(integral).real
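A quick self-test of this overlap (a sketch; it assumes the function lives in bilby.gw.utils alongside optimal_snr_squared):

import numpy as np
from bilby.gw.utils import overlap

df = 0.25
psd = np.ones(1024)
a = np.random.normal(size=1024) + 1j * np.random.normal(size=1024)
# Optimal-SNR normalisation: norm = sqrt(4 * df * sum(|a|^2 / psd))
norm = np.sqrt(np.real(4 * df * np.sum(np.conj(a) * a / psd)))
# A signal normalised by its own optimal SNR has unit overlap with itself
print(overlap(a, a, psd, df, lower_cut_off=0.0, upper_cut_off=256.0,
              norm_a=norm, norm_b=norm))  # ~= 1.0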
__cached_euler_matrix = None
__cached_delta_x = None
......
FROM bilbydev/bilby-test-suite-python37
LABEL name="bilby" \
maintainer="Gregory Ashton <gregory.ashton@ligo.org>" \
date="20190130"
RUN pip install bilby