Commit b76d5cde authored by Moritz Huebner

Merge branch 'master' into waveform_generator_starting_time

parents 11868f5d e1df746c
......@@ -29,11 +29,11 @@ exitcode-jessie:
# Run tests and collect coverage data
- coverage --version
- coverage erase
- coverage run --source=tupak/ -a test/conversion_tests.py
- coverage run --source=tupak/ -a test/detector_tests.py
- coverage run --source=tupak/ -a test/prior_tests.py
- coverage run --source=tupak/ -a test/sampler_tests.py
- coverage run --source=tupak/ -a test/waveform_generator_tests.py
- coverage run --source /usr/local/lib/python2.7/dist-packages/tupak/ -a test/conversion_tests.py
- coverage run --source /usr/local/lib/python2.7/dist-packages/tupak/ -a test/detector_tests.py
- coverage run --source /usr/local/lib/python2.7/dist-packages/tupak/ -a test/prior_tests.py
- coverage run --source /usr/local/lib/python2.7/dist-packages/tupak/ -a test/sampler_tests.py
- coverage run --source /usr/local/lib/python2.7/dist-packages/tupak/ -a test/waveform_generator_tests.py
- coverage html
- coverage-badge -o coverage_badge.svg -f
# Run all other tests (no coverage data collected)
......
# All notable changes will be documented in this file
## Unreleased
Changes currently on master, but not under a tag.
### Added
- Clean up of `detectors.py`: adds an `InterferometerStrainData` to handle strain data and `InterferometerSet` to handle multiple interferometers. All data setting should primarily be done through the `Interferometer.set_strain_data..` methods.
- Adds documentation on setting data (https://monash.docs.ligo.org/tupak/transient-gw-data.html)
- Checkpointing for `dynesty`: the sampling will be checkpointed every 10 minutes (approximately) and can be resumed.
- Add functionality to plot multiple results in a corner plot, see `tupak.core.result.plot_multiple()`.
- Likelihood evaluations are now saved along with the posteriors.
## [0.2.0] 2018-06-17
First `pip` installable version: https://pypi.org/project/TUPAK/.
### Added
- Reorganisation of the directory into `tupak.core` and `tupak.gw`.
- Reading of frame files.
- Major effort to update all docstrings and add some documentation.
- Marginalized likelihoods.
- Examples of searches for gravitational waves from a Supernova and using a sine-Gaussian.
- A `PriorSet` to handle sets of priors, with support for reading in from a standardised prior file (see https://monash.docs.ligo.org/tupak/prior.html).
- A standardised file for storing detector data.
### Removed
- The chainconsumer dependency, which was causing issues.
......@@ -3,7 +3,7 @@ Basics of parameter estimation
==============================
In this example, we'll go into some of the basics of parameter estimation and
how they are implemented in `tupak`.
how they are implemented in :code:`tupak`.
Firstly, consider a situation where you have discrete data :math:`\{y_0,
y_1,\ldots, y_n\}` taken at a set of times :math:`\{t_0, t_1, \ldots, t_n\}`.
......@@ -20,7 +20,7 @@ least squares estimator for example).
Here, we will describe a Bayesian approach using `nested sampling
<https://en.wikipedia.org/wiki/Nested_sampling_algorithm>`_ which might feel
like overkill for this problem. However, it is a useful way to introduce some
of the basic features of `tupak` before seeing them in more complicated
of the basic features of :code:`tupak` before seeing them in more complicated
settings.
The maths
......@@ -31,7 +31,7 @@ by
.. math::
P(m, c| \{y_i, t_i\}, H) \propto P(\{y_i, t_i\}| m, c, H) \times P(m, c| H)
P(m, c| \{y_i, t_i\}, H) \propto P(\{y_i, t_i\}| m, c, H) \times P(m, c| H)\,.
where the first term on the right-hand-side is the *likelihood* while the
second is the *prior*. In the model :math:`H`, the likelihood of the data point
......@@ -41,37 +41,37 @@ Gaussian distributed as such:
.. math::
P(y_i, t_i| m, c, H) = \frac{1}{\sqrt{2\pi\sigma^2}}
\mathrm{exp}\left(\frac{-(y_i - (t_i m + c))^2}{2\sigma^2}\right)
\mathrm{exp}\left(\frac{-(y_i - (t_i m + c))^2}{2\sigma^2}\right) \,.
Next, we assume that all data points are independent. As such,
.. math::
P(\{y_i, t_i\}| m, c, H) = \prod_{i=1}^n P(y_i, t_i| m, c, H)
P(\{y_i, t_i\}| m, c, H) = \prod_{i=1}^n P(y_i, t_i| m, c, H) \,.
When solving problems on a computer, it is often convenient to work with
the log-likelihood. Indeed, a `tupak` *likelihood* must have a `log_likelihood()`
the log-likelihood. Indeed, a :code:`tupak` *likelihood* must have a `log_likelihood()`
method. For the normal distribution, the log-likelihood for *n* data points is
.. math::
\log(P(\{y_i, t_i\}| m, c, H)) = -\frac{1}{2}\left[
\sum_{i=1}^n \left(\frac{(y_i - (t_i m + c))}{\sigma}\right)^2
+ n\log\left(2\pi\sigma^2\right)\right]
+ n\log\left(2\pi\sigma^2\right)\right] \,.
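To make this concrete, here is a minimal sketch of how such a likelihood
might be written (the argument names are illustrative; the full
:code:`GaussianLikelihood` is developed below)::

    import numpy as np
    import tupak


    class GaussianLikelihood(tupak.Likelihood):
        def __init__(self, time, data, sigma):
            # Store the data and the known noise standard deviation
            self.time = time
            self.data = data
            self.sigma = sigma
            # tupak samples over the keys of the `parameters` dictionary
            self.parameters = dict(m=None, c=None)

        def log_likelihood(self):
            # The normal log-likelihood written out above
            m = self.parameters['m']
            c = self.parameters['c']
            residual = self.data - (self.time * m + c)
            n = len(self.data)
            return -0.5 * (np.sum((residual / self.sigma)**2) +
                           n * np.log(2 * np.pi * self.sigma**2))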
Finally, we need to specify a *prior*. In this case we will use uncorrelated
uniform priors
.. math::
P(m, c| H) = P(m| H) \times P(c| H) = \textrm{Unif}(0, 5) \times \textrm{Unif}(-2, 2)
P(m, c| H) = P(m| H) \times P(c| H) = \textrm{Unif}(0, 5) \times \textrm{Unif}(-2, 2)\,.
the choice of prior in general should be guided by physical knowledge about the
The choice of prior in general should be guided by physical knowledge about the
system and not the data in question.
The key point to take away here is that the **likelihood** and **prior** are
the inputs to figuring out the **posterior**. There are many ways to go about
this, we will now show how to do so in `tupak`. In this case, we explicitly
this, we will now show how to do so in :code:`tupak`. In this case, we explicitly
show how to write the `GaussianLikelihood` so that one can see how the
maths above gets implemented. For the prior, this is done implicitly by the
naming of the priors.
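Matching the maths above, a sketch of such a prior dictionary (using the
:code:`tupak.core.prior.Uniform` class, with each prior named after its
parameter) is::

    priors = dict(m=tupak.core.prior.Uniform(0, 5, 'm'),
                  c=tupak.core.prior.Uniform(-2, 2, 'c'))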
......
......@@ -2,9 +2,10 @@
Compact binary coalescence parameter estimation
===============================================
In this example, we demonstrate how to generate simulated data for a binary
black hold coalescence observed by the two LGIO interferometers at Hanford,
Livingston and the Virgo detector.
In this example, which can be found `here
<https://git.ligo.org/Monash/tupak/blob/master/examples/injection_examples/basic_tutorial.py>`_,
we demonstrate how to generate simulated data for a binary black hole
coalescence observed by the two LIGO interferometers at Hanford and Livingston.
.. literalinclude:: /../examples/injection_examples/basic_tutorial.py
:language: python
......
......@@ -10,11 +10,12 @@ Welcome to tupak's documentation!
installation
basics-of-parameter-estimation
compact-binary-coalescence-parameter-estimation
prior
likelihood
samplers
tupak-output
compact-binary-coalescence-parameter-estimation
transient-gw-data
writing-documentation
hyperparameters
......
......@@ -29,7 +29,7 @@ Clone the repository, install the requirements, and then install the software:
$ pip install -r requirements.txt
$ python setup.py install
Once you have run these steps, you have `tupak` installed. You can now try to
Once you have run these steps, you have :code:`tupak` installed. You can now try to
run the examples.
.. note::
......
......@@ -7,9 +7,9 @@ Likelihood
`tupak` likelihood objects are used in calculating the likelihood of the data
for some specific set of parameters. In mathematical notation, the likelihood
can be generically written as :math:`\mathcal{L}(d| \theta)`. How this is
coded up will depend on the problem, but `tupak` expects all likelihood
coded up will depend on the problem, but :code:`tupak` expects all likelihood
objects to have a `parameters` attribute (a dictionary of key-value pairs) and
a `log_likelihood()` method. In thie page, we'll discuss how to write your own
a `log_likelihood()` method. On this page, we'll discuss how to write your own
Likelihood, and the standard likelihoods in :code:`tupak`.
The simplest likelihood
......@@ -80,9 +80,9 @@ General likelihood for fitting a function :math:`y(x)` to some data with known n
------------------------------------------------------------------------------------
The previous example was rather simplistic. Let's now consider that we have some
dependent data :math:`\vec{y}=y_1, y_2, \ldots y_N$` measured at
dependent data :math:`\vec{y}=y_1, y_2, \ldots y_N` measured at
:math:`\vec{x}=x_1, x_2, \ldots, x_N`. We believe that the data is generated
by additive Gaussian noise with a known variance :math:`sigma^2` and a function
by additive Gaussian noise with a known variance :math:`\sigma^2` and a function
:math:`y(x; \theta)` where :math:`\theta` are some unknown parameters; that is
.. math::
......@@ -90,7 +90,7 @@ by additive Gaussian noise with a known variance :math:`sigma^2` and a function
y_i = y(x_i; \theta) + n_i
where :math:`n_i` is drawn from a normal distribution with zero mean and
standard deviation :math:`sigma`. As such, :math:`y_i - y(x_i; \theta)`
standard deviation :math:`\sigma`. As such, :math:`y_i - y(x_i; \theta)`
itself will have a likelihood
.. math::
......@@ -100,7 +100,7 @@ itself will have a likelihood
\mathrm{exp}\left(\frac{-(y_i - y(x_i; \theta))^2}{2\sigma^2}\right)
As with the previous case, the likelihood for all the data is the produce over
As with the previous case, the likelihood for all the data is the product over
the likelihood for each data point.
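Explicitly, taking the logarithm of this product gives

.. math::

   \log \mathcal{L}(\vec{y}| \theta) = -\frac{1}{2}\left[
   \sum_{i=1}^N \left(\frac{y_i - y(x_i; \theta)}{\sigma}\right)^2
   + N\log\left(2\pi\sigma^2\right)\right] \,.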
In :code:`tupak`, we can code this up as a likelihood in the following way::
......@@ -143,7 +143,7 @@ In :code:`tupak`, we can code this up as a likelihood in the following way::
This likelihood can be given any Python function, the data (in the form of
:code:`x` and :code:`y`) and the standard deviation of the noise. The
parameters are inferred from the arguments to the :code:`function` argument,
for example if, when instatiating the likelihood you passed in a the following
for example if, when instantiating the likelihood you passed in the following
function::
def f(x, a, b):
......@@ -167,7 +167,7 @@ In the last example, we considered only cases with known noise (e.g., a
prespecified standard deviation). We now present a general function which can
handle unknown noise (in which case you need to specify a prior for
:math:`\sigma`), or known noise (in which case you pass the known noise in when
instatiating the likelihood::
instantiating the likelihood::
class GaussianLikelihood(tupak.Likelihood):
def __init__(self, x, y, function, sigma=None):
......@@ -181,7 +181,7 @@ instatiating the likelihood::
The data to analyse
function:
The python function to fit to the data. Note, this must take the
dependent variable as its first argument. The other arguments are
dependent variable as its first argument. The other arguments
will require a prior and will be sampled over (unless a fixed
value is given).
sigma: None, float, array_like
......
......@@ -12,7 +12,7 @@ Typically, these are passed into :ref:`run_sampler <run_sampler>` as a regular
`python dictionary
<https://docs.python.org/2/tutorial/datastructures.html#dictionaries>`_.
The keys of the priors objects should reference the model parameters (in
The keys of the priors objects should reference the model parameters, in
particular, the :code:`parameters` attribute of the :ref:`likelihood`. Each key
can be either
......@@ -20,7 +20,7 @@ can be either
this is a Delta-function prior,
- or a :code:`tupak.prior.Prior` instance.
If the later, it will be sampled during the parameter estimation. Here is a
If the latter, it will be sampled during the parameter estimation. Here is a
simple example that sets a uniform prior for :code:`a`, and a fixed value for
:code:`b`::
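    # A sketch, assuming the prior classes documented below; since `b` is
    # given as a float, it is treated as a fixed (delta-function) value
    priors = dict(a=tupak.core.prior.Uniform(minimum=0, maximum=1, name='a'),
                  b=1.5)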
......@@ -44,14 +44,23 @@ hole and defined in a file like this
You can define your own default prior and pass a string pointing to that file
to :ref:`run_sampler <run_sampler>`.
Complete list of available prior classes
Available prior classes
----------------------------------------
We have provided a number of standard priors. You can define your own by
subclassing the :code:`tupak.prior.Prior` class. Here is the complete list of
those implemented:
We have provided a number of standard priors. Here we document a few of them;
note that this list is incomplete.
.. automodule:: tupak.core.prior
.. autoclass:: tupak.core.prior.Uniform
:members:
:special-members:
.. autoclass:: tupak.core.prior.Gaussian
:members:
:special-members:
You can also define your own by
subclassing the :code:`tupak.prior.Prior` class.
.. autoclass:: tupak.core.prior.Prior
:members:
:special-members:
.. _transient_gw_data:
=====================================
Transient gravitational wave data I/O
=====================================
This document describes how :code:`tupak` handles interferometer data and how
you can load data.
What is used by the likelihood?
-------------------------------
First up, the :ref:`likelihood` used for transient gravitational wave searches
is either :code:`tupak.gw.likelihood.GravitationalWaveTransient` or
:code:`tupak.gw.likelihood.BasicGravitationalWaveTransient`. Both of these take
an argument `interferometers` which is a list of
`tupak.gw.detector.Interferometer` objects. These objects know about the
geometry of the detector, the noise properties of the detector, and the
segment of data which is to be analysed. In the following, we'll describe
different ways to set this data.
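A sketch of how these pieces fit together (assuming a
:code:`waveform_generator` built as in the compact binary coalescence
example, and interferometer objects such as the :code:`H1` created below)::

    likelihood = tupak.gw.likelihood.GravitationalWaveTransient(
        interferometers=[H1, L1], waveform_generator=waveform_generator)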
Making an Interferometer
------------------------
To start, you can easily get one of the known interferometers using this command::
>>> H1 = tupak.gw.detector.get_empty_interferometer('H1')
By default, these will have power spectral densities based on typical behaviour
of the detector. The strain data (i.e., the segment of interferometer data
which we want to analyse) is in an attribute :code:`H1.strain_data`. The
following is a list of ways to set this strain data.
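Once set, the frequency-domain data itself can be accessed directly, e.g.::

    >>> H1.strain_data.frequency_domain_strain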
Setting the strain data
-----------------------
Setting the strain data directly
================================
If you have an array of the frequency-domain strain data, you can set it
directly like this::
>>> H1.set_strain_data_from_frequency_domain_strain(frequency_domain_strain,
sampling_frequency=sampling_frequency,
duration=duration,
start_time=start_time)
Here, the given arguments are values you have already defined in your Python
script. If you'd prefer to give the :code:`frequency_array` to which the data
corresponds, instead of the :code:`sampling_frequency` and :code:`duration`,
this can also be done::
>>> H1.set_strain_data_from_frequency_domain_strain(frequency_domain_strain,
                                                    frequency_array=frequency_array,
                                                    start_time=start_time)
Here is the full API:
.. automethod:: tupak.gw.detector.Interferometer.set_strain_data_from_frequency_domain_strain
Setting the strain data from a frame file
=========================================
To set the data from a frame file, use this method
.. automethod:: tupak.gw.detector.Interferometer.set_strain_data_from_frame_file
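A hypothetical usage sketch (the frame file name is made up, and the
arguments are assumed to mirror the other data-setting methods)::

    >>> H1.set_strain_data_from_frame_file('H-H1_EXAMPLE.gwf',
                                           sampling_frequency=4096,
                                           duration=4)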
Setting the strain data to be Gaussian noise
============================================
Often, for testing, you may want to just generate a realization of coloured
Gaussian noise from the power spectral density. This can be done using this
method:
.. automethod:: tupak.gw.detector.Interferometer.set_strain_data_from_power_spectral_density
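For example, to generate a 4 s noise realisation sampled at 4096 Hz::

    >>> H1.set_strain_data_from_power_spectral_density(sampling_frequency=4096,
                                                       duration=4)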
Setting the strain data to be zero noise
========================================
.. automethod:: tupak.gw.detector.Interferometer.set_strain_data_from_zero_noise
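A sketch of usage, assuming the same arguments as the Gaussian-noise method
above::

    >>> H1.set_strain_data_from_zero_noise(sampling_frequency=4096, duration=4)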
Injecting a signal
------------------
If you wish to inject a signal into the data, you can use this function
.. automethod:: tupak.gw.detector.Interferometer.inject_signal
Helper functions
----------------
To help set things up, we provide a few convenience functions for typical
operations:
.. autofunction:: tupak.gw.detector.get_interferometer_with_fake_noise_and_injection
.. autofunction:: tupak.gw.detector.get_interferometer_with_open_data
.. autofunction:: tupak.gw.detector.get_event_data
......@@ -2,7 +2,7 @@
Tupak output
============
In this document, we will describe what :code::code:`tupak` outputs, where it is stored,
In this document, we will describe what :code:`tupak` outputs, where it is stored,
and how you can access it.
When you call :code:`run_sampler`, there are two arguments :code:`outdir` and :code:`label` which
......@@ -83,7 +83,7 @@ This shows the different data that is stored in the :code:`h5` file. You can thi
the file like a Python dictionary - it's a bag with lots of different kinds of
data which can be accessed via a :code:`key` (a string). We use `deepdish
<http://deepdish.io/>`_ to handle the saving of :code:`h5` files in :code:`tupak`. In Python,
can load any :code:`h5` file and access its contents like a dictionary::
you can load any :code:`h5` file and access its contents like a dictionary::
>>> import deepdish
>>> output = deepdish.io.load('outdir/label_result.h5')
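Since the loaded object behaves like a dictionary, you can inspect the
available data with, e.g.::

    >>> output.keys()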
......@@ -98,7 +98,7 @@ Reading in a result file
------------------------
Rather than reading in the raw :code:`h5` file, you may find it more convenient to
instead load a :code:`*result.h5` as a :code:`tupak` :code:`result` object (i.e., like the output
of :code:`run_sampler`. To do this::
of :code:`run_sampler`). To do this::
>>> import tupak
>>> result = tupak.result.read_in_result(outdir=outdir, label=label)
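The returned :code:`result` then behaves like the output of
:code:`run_sampler`; for example (assuming the standard result attributes),
the posterior samples can be accessed as::

    >>> result.posterior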
......
......@@ -59,7 +59,7 @@ all of the new/changed files::
$ git commit -m "Adding my documentation for the feature"
$ git push origin adding-my-new-documentation
Then, on the web interface create an merge request.
Then, on the web interface create a merge request.
Using reStructured text
-----------------------
......
......@@ -7,7 +7,6 @@ import tupak
import numpy as np
# First set up logging and some output directories and labels
tupak.core.utils.setup_logger()
outdir = 'outdir'
label = 'create_your_own_source_model'
sampling_frequency = 4096
......
......@@ -8,7 +8,6 @@ and then recovered.
import tupak
import numpy as np
tupak.core.utils.setup_logger()
# define the time-domain model
......
......@@ -8,7 +8,6 @@ import numpy as np
import tupak.gw.prior
tupak.core.utils.setup_logger()
time_duration = 4.
sampling_frequency = 2048.
......
......@@ -7,7 +7,6 @@ from __future__ import division, print_function
import tupak
import numpy as np
tupak.core.utils.setup_logger()
time_duration = 4.
sampling_frequency = 2048.
......
......@@ -36,8 +36,8 @@ prior = tupak.gw.prior.BBHPriorSet(filename='GW150914.prior')
# creates the frequency-domain strain. In this instance, we are using the
# `lal_binary_black_hole` source model. We also pass other parameters:
# the waveform approximant and reference frequency.
waveform_generator = tupak.WaveformGenerator(time_duration=interferometers[0].duration,
sampling_frequency=interferometers[0].sampling_frequency,
waveform_generator = tupak.WaveformGenerator(time_duration=interferometers.duration,
sampling_frequency=interferometers.sampling_frequency,
frequency_domain_source_model=tupak.gw.source.lal_binary_black_hole,
waveform_arguments={'waveform_approximant': 'IMRPhenomPv2',
'reference_frequency': 50})
......
......@@ -8,7 +8,6 @@ import tupak
import numpy as np
# A few simple setup steps
tupak.core.utils.setup_logger()
label = 'gaussian_example'
outdir = 'outdir'
......@@ -47,7 +46,6 @@ class SimpleGaussianLikelihood(tupak.Likelihood):
likelihood = SimpleGaussianLikelihood(data)
priors = dict(mu=tupak.core.prior.Uniform(0, 5, 'mu'),
sigma=tupak.core.prior.Uniform(0, 10, 'sigma'))
priors['mu'] = 1
# And run sampler
result = tupak.run_sampler(
......
......@@ -8,7 +8,6 @@ import numpy as np
import inspect
import matplotlib.pyplot as plt
tupak.core.utils.setup_logger()
outdir = 'outdir'
......
......@@ -12,9 +12,9 @@ import matplotlib.pyplot as plt
import inspect
# A few simple setup steps
tupak.core.utils.setup_logger()
label = 'linear_regression'
outdir = 'outdir'
tupak.utils.check_directory_exists_and_if_not_mkdir(outdir)
# First, we define our "signal model", in this case a simple linear function
......@@ -22,7 +22,7 @@ def model(time, m, c):
return time * m + c
# New we define the injection parameters which we make simulated data with
# Now we define the injection parameters which we make simulated data with
injection_parameters = dict(m=0.5, c=0.2)
# For this example, we'll use standard Gaussian noise
......
......@@ -11,9 +11,9 @@ import numpy as np
import matplotlib.pyplot as plt
# A few simple setup steps
tupak.core.utils.setup_logger()
label = 'linear_regression_unknown_noise'
outdir = 'outdir'
tupak.utils.check_directory_exists_and_if_not_mkdir(outdir)
# First, we define our "signal model", in this case a simple linear function
......@@ -21,7 +21,7 @@ def model(time, m, c):
return time * m + c
# New we define the injection parameters which we make simulated data with
# Now we define the injection parameters which we make simulated data with
injection_parameters = dict(m=0.5, c=0.2)
# For this example, we'll inject standard Gaussian noise
......
......@@ -29,8 +29,8 @@ np.random.seed(170801)
# read in a signal to inject from a txt file.
injection_parameters = dict(file_path='MuellerL15_example_inj.txt',
luminosity_distance=10.0, ra=1.375,
dec=-1.2108, geocent_time=1126259642.413,
luminosity_distance=7.0, ra=4.6499,
dec=-0.5063, geocent_time=1126259642.413,
psi=2.659)
# Create the waveform_generator using a supernova source function
......@@ -61,7 +61,7 @@ simulation_parameters = dict(
search_waveform_generator = tupak.gw.waveform_generator.WaveformGenerator(
time_duration=time_duration, sampling_frequency=sampling_frequency,
frequency_domain_source_model=tupak.gw.source.supernova_pca_model,
parameters=simulation_parameters)
waveform_arguments=simulation_parameters)
# Set up prior
priors = dict()
......@@ -74,8 +74,9 @@ priors['pc_coeff2'] = tupak.core.prior.Uniform(-1, 1, 'pc_coeff2')
priors['pc_coeff3'] = tupak.core.prior.Uniform(-1, 1, 'pc_coeff3')
priors['pc_coeff4'] = tupak.core.prior.Uniform(-1, 1, 'pc_coeff4')
priors['pc_coeff5'] = tupak.core.prior.Uniform(-1, 1, 'pc_coeff5')
priors['ra'] = tupak.core.prior.create_default_prior(name='ra')
priors['dec'] = tupak.core.prior.create_default_prior(name='dec')
priors['ra'] = tupak.core.prior.Uniform(minimum=0, maximum=2 * np.pi,
name='ra')
priors['dec'] = tupak.core.prior.Sine(name='dec')
priors['geocent_time'] = tupak.core.prior.Uniform(
injection_parameters['geocent_time'] - 1,
injection_parameters['geocent_time'] + 1,
......
......@@ -88,8 +88,8 @@ class TestConvertToLALBBHParams(unittest.TestCase):
self.search_keys.append('cos_tilt_1')
self.parameters['cos_tilt_1'] = 1
self.parameters, _ = tupak.gw.conversion.convert_to_lal_binary_black_hole_parameters(self.parameters, self.search_keys, remove=True)
self.assertDictEqual(self.parameters, dict(tilt_1 = 42))
self.assertDictEqual(self.parameters, dict(tilt_1=42, cos_tilt_1=1))
if __name__=='__main__':
if __name__ == '__main__':
unittest.main()
\ No newline at end of file
......@@ -43,7 +43,7 @@ IFO = tupak.gw.detector.get_interferometer_with_fake_noise_and_injection(name='H
time_duration=time_duration, plot=False,
sampling_frequency=sampling_frequency)
hf_signal_and_noise = IFO.data
hf_signal_and_noise = IFO.strain_data.frequency_domain_strain
frequencies = tupak.core.utils.create_frequency_series(
sampling_frequency=sampling_frequency, duration=time_duration)
......
......@@ -8,19 +8,21 @@ class TestNoiseRealisation(unittest.TestCase):
def test_averaged_noise(self):
time_duration = 1.
sampling_frequency = 4096.
factor = np.sqrt(2./time_duration)
factor = np.sqrt(2 / time_duration)
n_avg = 1000
psd_avg = 0
interferometer = tupak.gw.detector.get_empty_interferometer('H1')
for x in range(0, n_avg):
interferometer.set_data(sampling_frequency, time_duration, from_power_spectral_density=True)
psd_avg += abs(interferometer.data)**2
interferometer.set_strain_data_from_power_spectral_density(
sampling_frequency=sampling_frequency, duration=time_duration)
psd_avg += abs(interferometer.strain_data.frequency_domain_strain)**2
psd_avg = psd_avg/n_avg
asd_avg = np.sqrt(abs(psd_avg))
psd_avg = psd_avg / n_avg
asd_avg = np.sqrt(abs(psd_avg)) * interferometer.frequency_mask
a = interferometer.amplitude_spectral_density_array/factor
a = np.nan_to_num(interferometer.amplitude_spectral_density_array / factor * interferometer.frequency_mask)
b = asd_avg
print(a, b)
self.assertTrue(np.allclose(a, b, rtol=1e-1))
def test_noise_normalisation(self):
......@@ -35,8 +37,9 @@ class TestNoiseRealisation(unittest.TestCase):
muf, frequency_array = tupak.core.utils.nfft(mu, sampling_frequency)
for x in range(0, n_avg):
interferometer = tupak.gw.detector.get_empty_interferometer('H1')
interferometer.set_data(sampling_frequency, time_duration, from_power_spectral_density=True)
hf_tmp = interferometer.data
interferometer.set_strain_data_from_power_spectral_density(
sampling_frequency=sampling_frequency, duration=time_duration)
hf_tmp = interferometer.strain_data.frequency_domain_strain
psd = interferometer.power_spectral_density
snr[x] = tupak.gw.utils.inner_product(hf_tmp, muf, frequency_array, psd) \
/ np.sqrt(tupak.gw.utils.inner_product(muf, muf, frequency_array, psd))
......
......@@ -2,14 +2,16 @@
tupak
=====
Tupak is The User friendly Parameter estimAtion Kode
Tupak is The User friendly Parameter estimAtion Kode.
The aim of tupak is to provide a user friendly interface to perform parameter
estimation. It is primarily designed and built for inference of compact
binary coalescence events in interferometric data, but it can also be used for
more general problems.
For installation instructions see https://git.ligo.org/Monash/tupak
The code, and many examples are hosted at https://git.ligo.org/Monash/tupak.
For installation instructions see
https://monash.docs.ligo.org/tupak/installation.html.
"""
......
......@@ -61,7 +61,7 @@ class GaussianLikelihood(Likelihood):
The data to analyse
function:
The python function to fit to the data. Note, this must take the
dependent variable as its first argument. The other arguments are
dependent variable as its first argument. The other arguments
will require a prior and will be sampled over (unless a fixed
value is given).
sigma: None, float, array_like
......
......@@ -65,6 +65,20 @@ class PriorSet(dict):
prior[key] = eval(val)
self.update(prior)
def convert_floats_to_delta_functions(self):
""" Convert all float parameters to delta functions """
for key in self:
if isinstance(self[key], Prior):
continue
elif isinstance(self[key], float) or isinstance(self[key], int):
self[key] = DeltaFunction(self[key])
logging.debug(
"{} converted to delta function prior.".format(key))
else:
logging.debug(
"{} cannot be converted to delta function prior."
.format(key))
def fill_priors(self, likelihood, default_priors_file=None):
"""
Fill dictionary of priors based on required parameters of likelihood
......@@ -91,17 +105,7 @@ class PriorSet(dict):
"""
for key in self:
if isinstance(self[key], Prior):
continue
elif isinstance(self[key], float) or isinstance(self[key], int):
self[key] = DeltaFunction(self[key])
logging.debug(
"{} converted to delta function prior.".format(key))
else:
logging.debug(
"{} cannot be converted to delta function prior."
.format(key))
self.convert_floats_to_delta_functions()
missing_keys = set(likelihood.parameters) - set(self.keys())
......@@ -138,6 +142,7 @@ class PriorSet(dict):
-------
dict: Dictionary of the samples
"""
self.convert_floats_to_delta_functions()