Commit 93d0b52b authored by Daniel Williams

Merge branch 'review' into 'master'

O3a final / O3b Review

See merge request !5
parents e7e30cc7 cd59b2af
Pipeline #234104 passed with stages in 1 minute and 9 seconds
* @daniel-williams
[Reviewers]
* @patricia-schmidt @marialuisa.chiofalo
asimov/pipelines/rift.py @richard.udall
v0.3.0
======
First fully reviewed version.
Reviewed support for job creation for:
- bilby
- RIFT
- bayeswave
Prepared for the O3a and O3b parameter estimation projects.
include asimov/asimov.conf
include asimov/gitlabissue.md
include asimov/configs/*.ini
include asimov/priors/*.template
0.3.0
=====
(25.4.2021) We have performed line-by-line reviews of the core functions of asimov, the PE automation pipeline for O3a final. The review covered the generation of the workflow, DAGs and command lines for BayesWave, bilby (via bilby_pipe) and RIFT. The handling of LALInference was not reviewed here.
The reviewed code packages and files are listed in the table below. All requested fixes were implemented on the review branch and have been signed off, ready to be merged upstream into master (!5). Several new logger messages were added. All requests and fixes are documented in the relevant issues and MRs. The most significant changes affect:
- the BayesWave settings for PSD generation
- the PESummary generation of results pages
- passing the bilby prior file directly via the ini file instead of the command line
- the removal of the ROQ priors as the default
- passing the minimum template frequency to RIFT
A new set of integration tests was added, but we note that these do not allow for a complete end-to-end test of the asimov pipeline, since the interfacing with GitLab was removed (deliberately); hence the GitLab issue generation and GitLab interactions are not covered by these tests. They serve to demonstrate the correct generation of ini files and DAGs/command lines.
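A minimal sketch of the kind of check these tests perform, with illustrative names only (this is not the actual test suite):

import unittest
import configparser

class TestIniGeneration(unittest.TestCase):
    """Illustrative: verify a generated ini against the reviewed defaults,
    without any GitLab interaction."""

    def test_accounting_group(self):
        ini = configparser.ConfigParser()
        ini.read("ProdF6.ini")  # hypothetical ini produced by the pipeline
        self.assertEqual(ini["condor"]["accounting_group"],
                         "ligo.prod.o3.cbc.pe.lalinference")

if __name__ == "__main__":
    unittest.main()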
A complete end-to-end test, with all fixes implemented on the review branch, was performed on a test-trigger event (a copy of S190426l) with the BayesWave dependencies removed: pe/O3/o3b-pe-coordination#122. The generated ini files for ProdF{6,7,8} are available at https://git.ligo.org/daniel-williams/S190426l/-/tree/master/C01_offline
The correct end-to-end workflow was produced.
MR !5 will be merged upstream, and the reviewed version of asimov corresponds to version v0.3.0. This version of asimov should be used for all O3 production analyses.
In addition, we have also reviewed and signed off on the default inputs specific to the O3a final analysis. While many of these defaults also apply to the O3b analysis (e.g. the default calibration C01_offline, channel names, etc.), we recommend a thorough re-check of these settings specifically for O3b.
For the future, we recommend further improvements to the logging messages, the writing of documentation, and a workflow chart. We also note that the spin-magnitude priors are currently hard-coded for RIFT; we therefore recommend implementing the BBH magnitudes as the default option while allowing them to be populated via the prior metadata. Extensions to handle BNS and NSBH consistently, as well as to allow for other schedulers, will likely be required in the future.
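As a sketch of the recommended RIFT prior handling (the metadata keys, default values, and helper below are hypothetical, not existing asimov code):

DEFAULT_SPIN_MAGNITUDE = (0.0, 0.99)  # assumed BBH default bounds

def spin_magnitude_bounds(priors_meta):
    """Prefer bounds from the prior metadata, falling back to the BBH default."""
    spin = priors_meta.get("spin magnitude", {})  # hypothetical metadata key
    return (spin.get("minimum", DEFAULT_SPIN_MAGNITUDE[0]),
            spin.get("maximum", DEFAULT_SPIN_MAGNITUDE[1]))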
@@ -4,10 +4,11 @@ token = "fake"
[general]
calibration = C01
git_default = ""
git_default = .
rundir_default = working
calibration_directory=C01_offline
webroot =
webroot = pages/
state_vector = {'L1': 'L1:DCS-CALIB_STATE_VECTOR_C01', 'H1': 'H1:DCS-CALIB_STATE_VECTOR_C01', 'V1': 'V1:DQ_ANALYSIS_STATE_VECTOR'}
[olivaw]
tracking_repository = "fake"
@@ -24,13 +25,18 @@ results_store = results/
environment = /cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py38-20210107
accounting = ligo.prod.o3.cbc.pe.lalinference
[bayeswave]
niter = 4000000
memory = 8192
postmemory = 16384
accounting = ligo.prod.o3.cbc.pe.lalinference
[rift]
environment = /cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py38-20210107
[bilby]
environment = /cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py38-20210107
priors = /home/daniel.williams/events/O3/o3b-pe-coordination/prior-files
accounting = ligo.dev.o3.cbc.pe.lalinference
accounting = ligo.prod.o3.cbc.pe.lalinference
[mattermost]
webhook_url = https://chat.ligo.org/hooks/i5k56qcs1fnaiqa1i9qrsdexby
......
@@ -5,7 +5,7 @@ import json
from math import floor
import ligo.gracedb
import gwpy
import ast
import gwpy.timeseries
from gwdatafind import find_urls
from gwpy.segments import DataQualityFlag
@@ -103,8 +103,10 @@ def create(name, oldname=None, gid=None, superevent=None, repo=None):
    if config.get("ledger", "engine") == "gitlab":
        _, repository = connect_gitlab()
        from pkg_resources import resource_filename
        issue_template = resource_filename('asimov', 'gitlabissue.md')
        gitlab.EventIssue.create_issue(repository,
                                       event, issue_template="/home/daniel.williams/repositories/asimov/scripts/outline.md")
                                       event, issue_template=issue_template)
    elif config.get("ledger", "engine") == "yamlfile":
        ledger = Ledger(config.get("ledger", "location"))
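For reference, a short sketch of what the packaged-template lookup resolves to (assuming asimov is installed with the MANIFEST.in additions above):

from pkg_resources import resource_filename

# Resolves to the gitlabissue.md shipped inside the installed asimov
# package, rather than a path under one user's home directory.
issue_template = resource_filename('asimov', 'gitlabissue.md')
print(issue_template)  # e.g. <site-packages>/asimov/gitlabissue.md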
@@ -154,6 +156,7 @@ def configurator(event, json_data=None):
    new_data = {"quality": {}, "priors": {}}
    new_data["quality"]["sample-rate"] = int(data["srate"])
    new_data["quality"]["lower-frequency"] = {}
    # Factor 0.875 to account for PSD roll off
    new_data["quality"]["upper-frequency"] = int(0.875 * data["srate"]/2)
    new_data["quality"]["start-frequency"] = data['f_start']
    new_data["quality"]["segment-length"] = int(data['seglen'])
@@ -201,9 +204,10 @@ def checkifo(event):
            print(f"No {ifo} data found.")
            continue
        state_vector_channel = {"L1": "L1:DCS-CALIB_STATE_VECTOR_C01",
                                "H1": "H1:DCS-CALIB_STATE_VECTOR_C01",
                                "V1": "V1:DQ_ANALYSIS_STATE_VECTOR"}
        if "state vector" in event.meta:
            state_vector_channel = event.meta['state vector']
        else:
            state_vector_channel = ast.literal_eval(config.get("data", "state-vector"))
        state = gwpy.timeseries.StateVector.read(
            datacache, state_vector_channel[ifo], start=gpsstart, end=gpsend,
......
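The new fallback requires the state-vector entry in the configuration file to be a valid Python dict literal; a minimal sketch of the parsing, using the value from the asimov.conf snippet above:

import ast

raw = ("{'L1': 'L1:DCS-CALIB_STATE_VECTOR_C01', "
       "'H1': 'H1:DCS-CALIB_STATE_VECTOR_C01', "
       "'V1': 'V1:DQ_ANALYSIS_STATE_VECTOR'}")
# literal_eval safely parses the dict without executing arbitrary code
state_vector_channel = ast.literal_eval(raw)
assert state_vector_channel["V1"] == "V1:DQ_ANALYSIS_STATE_VECTOR"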
@@ -33,14 +33,14 @@ def build(event):
    ready_productions = event.event_object.get_all_latest()
    for production in ready_productions:
        click.echo(f"\tWorking on production {production.name}")
        if production.status in {"running", "stuck", "wait", "finished", "uploaded"}: continue
        if production.status in {"running", "stuck", "wait", "finished", "uploaded", "cancelled", "stopped"}: continue
        try:
            configuration = production.get_configuration()
        except ValueError:
            try:
                rundir = config.get("general", "rundir_default")
                templates = os.path.join(rundir, config.get("templating", "directory"))
                production.make_config(f"{production.name}.ini", template_directory=templates)
                production.make_config(f"{production.name}.ini")
                click.echo(f"Production config {production.name} created.")
                logger.info("Run configuration created.", production=production)
@@ -72,7 +72,7 @@ def submit(event, update):
    logger = logging.AsimovLogger(event=event.event_object)
    ready_productions = event.event_object.get_all_latest()
    for production in ready_productions:
        if production.status.lower() in {"running", "stuck", "wait", "processing", "uploaded", "finished", "manual"}: continue
        if production.status.lower() in {"running", "stuck", "wait", "processing", "uploaded", "finished", "manual", "cancelled", "stopped"}: continue
        if production.status.lower() == "restart":
            if production.pipeline.lower() in known_pipelines:
                pipe = known_pipelines[production.pipeline.lower()](production, "C01_offline")
......
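Condensed sketch of the status gating shared by build and submit, with the newly added states (set contents as in the hunks above; the helper name is illustrative):

SKIP_STATES = {"running", "stuck", "wait", "processing", "uploaded",
               "finished", "manual", "cancelled", "stopped"}

def should_submit(production):
    # Skip productions that are already in flight, finished, or halted.
    return production.status.lower() not in SKIP_STATES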
@@ -52,7 +52,9 @@ def create(event, pipeline, family, comment, needs, template, status, approximan
    family_entries = [int(name.split(family)[1]) for name in names if family in name]
    #
    if "bayeswave" in needs:
        bw_entries = [production.name for production in event_prods if "bayeswave" in production.pipeline.lower()]
        bw_entries = [production.name for production in event_prods
                      if ("bayeswave" in production.pipeline.lower())
                      and (production.review.status not in {"REJECTED", "DEPRECATED"})]
        needs = bw_entries
    #
    production = {"comment": comment, "pipeline": pipeline, "status": status}
......
@@ -124,7 +124,6 @@ def clone(location):
    if config.get("ledger", "engine") == "yamlfile":
        shutil.copyfile(os.path.join(location, config.get("ledger", "location")), "ledger.yml")
    elif config.get("ledger", "engine") == "gitlab":
        _, repository = connect_gitlab(config)
        events = gitlab.find_events(repository,
......
[input]
dataseed=1234
seglen={{ production.meta['quality']['segment-length'] }}
window={{ production.meta['quality']['window-length'] }}
flow={{ production.meta['quality']['lower-frequency'] }}
srate={{ production.meta['quality']['sample-rate'] }}
PSDlength={{ production.meta['quality']['psd-length'] }}
padding=0.0
ifo-list={{ production.meta['interferometers'] }}
segment-start={{ production.meta['quality']['segment start'] }}
[engine]
install_path={{ config["pipelines"]["environment"] }}/bin
bayeswave=%(install_path)s/BayesWave
bayeswave_post=%(install_path)s/BayesWavePost
megaplot=%(install_path)s/megaplot.py
megasky=%(install_path)s/megasky.py
[datafind]
# To come from lalinference
channel-list = {'H1': '{{ production.meta['data']['channels']['H1'] }}','L1': '{{ production.meta['data']['channels']['L1'] }}', 'V1': '{{ production.meta['data']['channels']['V1'] }}'}
frtype-list = {'H1': '{{ production.meta['data']['frame-types']['H1'] }}', 'L1': '{{ production.meta['data']['frame-types']['L1'] }}', 'V1': '{{ production.meta['data']['frame-types']['V1'] }}'}
url-type=file
veto-categories=[1]
[bayeswave_options]
; command line options for BayesWave. See BayesWave --help
Dmax=100
updatedGeocenterPSD=
Niter = {{ config["bayeswave"]["niter"] }}
cleanOnly=
bayesLine =
[bayeswave_post_options]
; command line options for BayesWavePost. See BayesWavePost --help
0noise=
lite =
bayesLine =
[condor]
; see e.g., https://ldas-gridmon.ligo.caltech.edu/ldg_accounting/user
universe=vanilla
checkpoint=
bayeswave-request-memory= {{ config["bayeswave"]["memory"] }}
bayeswave_post-request-memory= {{ config["bayeswave"]["postmemory"] }}
datafind=/usr/bin/gw_data_find
ligolw_print=/usr/bin/ligolw_print
segfind=/usr/bin/ligolw_segment_query_dqsegdb
accounting-group = {{ config["bayeswave"]["accounting"] }}
[segfind]
; See e.g., https://wiki.ligo.org/viewauth/DetChar/DataQuality/AligoFlags
segment-url=https://segments.ligo.org
[segments]
; See e.g., https://wiki.ligo.org/viewauth/DetChar/DataQuality/AligoFlags
; https://wiki.ligo.org/viewauth/LSC/JRPComm/ObsRun1#Resource_information_40Data_44_segments_44_etc._41
l1-analyze = L1:DMT-ANALYSIS_READY:1
h1-analyze = H1:DMT-ANALYSIS_READY:1
;v1-analyze = V1:ITF_SCIENCEMODE
## This file was written with bilby_pipe version 1.0.1: (CLEAN) c4c6d92 2020-08-26 02:52:43 -0500
{% if production.event.repository %}
{% assign repo_dir = production.event.repository.directory %}
{% else %}
{% assign repo_dir = "." %}
{% endif %}
################################################################################
## Calibration arguments
# Which calibration model and settings to use.
################################################################################
# Choice of calibration model, if None, no calibration is used
calibration-model=None
# Dictionary pointing to the spline calibration envelope files
spline-calibration-envelope-dict=None
# Number of calibration nodes
spline-calibration-nodes=5
# Dictionary of the amplitude uncertainties for the constant uncertainty model
calibration-model=CubicSpline
spline-calibration-envelope-dict={ {% if production.meta['interferometers'] contains "H1" %}H1:{{ repo_dir }}/{{ production.meta['calibration']['H1'] }},{% endif %}{% if production.meta['interferometers'] contains "L1" %}L1:{{ repo_dir }}/{{ production.meta['calibration']['L1'] }},{% endif %}{% if production.meta['interferometers'] contains "V1" %}V1:{{ repo_dir }}/{{ production.meta['calibration']['V1'] }}{% endif %} }
spline-calibration-nodes=10
spline-calibration-amplitude-uncertainty-dict=None
# Dictionary of the phase uncertainties for the constant uncertainty model
spline-calibration-phase-uncertainty-dict=None
################################################################################
## Data generation arguments
# How to generate the data, e.g., from a list of gps times or simulated Gaussian noise.
################################################################################
# Ignores the check to see if data queried from GWpy (i.e. not Gaussian noise) is obtained from a time when the IFOs are in science mode.
ignore-gwpy-data-quality-check=True
# Tuple of the (start, step, number) of GPS start times. For example, (10, 1, 3) produces the gps start times [10, 11, 12]. If given, gps-file is ignored.
gps-tuple=None
# File containing segment GPS start times. This can be a multi-column file if (a) it is comma-separated and (b) the zeroth column contains the gps-times to use
gps-file=None
# File containing detector timeslides. Requires a GPS time file to also be provided. One column for each detector. Order of detectors specified by `--detectors` argument. Number of timeslides must correspond to the number of GPS times provided.
timeslide-file=None
# Dictionary containing detector timeslides: applies a fixed offset per detector. E.g. to apply +1s in H1, {H1: 1}
timeslide-dict=None
# Either a GPS trigger time, or the event name (e.g. GW150914). For event names, the gwosc package is used to identify the trigger time
trigger-time=None
# If true, use simulated Gaussian noise
trigger-time={{ production.meta['event time'] }}
gaussian-noise=False
# Number of simulated segments to use with gaussian-noise Note, this must match the number of injections specified
n-simulation=0
# Dictionary of paths to gwf, or hdf5 data files
data-dict=None
# If given, the data format to pass to `gwpy.timeseries.TimeSeries.read(), see gwpy.github.io/docs/stable/timeseries/io.html
data-format=None
# Channel dictionary: keys relate to the detector with values the channel name, e.g. 'GDS-CALIB_STRAIN'. For GWOSC open data, set the channel-dict keys to 'GWOSC'. Note, the dictionary should follow basic python dict syntax.
channel-dict=None
channel-dict={ {% if production.meta['interferometers'] contains "H1" %}{{ production.meta['data']['channels']['H1'] }},{% endif %} {% if production.meta['interferometers'] contains "L1" %}{{ production.meta['data']['channels']['L1'] }},{% endif %}{% if production.meta['interferometers'] contains "V1" %}{{ production.meta['data']['channels']['V1'] }}{% endif %} }
################################################################################
## Detector arguments
# How to set up the interferometers and power spectral density.
################################################################################
# Run the analysis for all detectors together and for each detector separately
coherence-test=False
# The names of detectors to use. If given in the ini file, detectors are specified by `detectors=[H1, L1]`. If given at the command line, as `--detectors H1 --detectors L1`
detectors=None
# The duration of data around the event to use
duration=4
# Random seed used during data generation. If no generation seed provided, a random seed between 1 and 1e6 is selected. If a seed is provided, it is used as the base seed and all generation jobs will have their seeds set as {generation_seed = base_seed + job_idx}.
detectors={{ production.meta['interferometers'] }}
duration={{ production.meta['quality']['segment-length'] }}
generation-seed=None
# Dictionary of PSD files to use
psd-dict=None
# Fractional overlap of segments used in estimating the PSD
psd-dict={ {% if production.meta['interferometers'] contains "H1" %}H1:{{ production.psds['H1'] }},{% endif %} {% if production.meta['interferometers'] contains "L1" %}L1:{{ production.psds['L1'] }},{% endif %} {% if production.meta['interferometers'] contains "V1" %}V1:{{ production.psds['V1'] }}{% endif %} }
psd-fractional-overlap=0.5
# Time (in s) after the trigger_time to the end of the segment
post-trigger-duration=2.0
# None
sampling-frequency=4096
# Sets the psd duration (up to the psd-duration-maximum). PSD duration calculated by psd-length x duration [s]. Default is 32.
psd-length=32
# The maximum allowed PSD duration in seconds, default is 1024s.
sampling-frequency={{ production.meta['quality']['sample-rate'] | round }}
psd-length={{ production.meta['quality']['psd-length'] | round }}
psd-maximum-duration=1024
# PSD method see gwpy.timeseries.TimeSeries.psd for options
psd-method=median
# Start time of data (relative to the segment start) used to generate the PSD. Defaults to psd-duration before the segment start time
psd-start-time=None
# The maximum frequency, given either as a float for all detectors or as a dictionary (see minimum-frequency)
maximum-frequency=None
# The minimum frequency, given either as a float for all detectors or as a dictionary where all keys relate the detector with values of the minimum frequency, e.g. {H1: 10, L1: 20}. If the waveform generation should start the minimum frequency for any of the detectors, add another entry to the dictionary, e.g., {H1: 40, L1: 60, waveform: 20}.
minimum-frequency=20
# Use a zero noise realisation
minimum-frequency={ {% if production.meta['interferometers'] contains "H1" %}'H1': {{ production.quality['lower-frequency']['H1'] }},{% endif %} {% if production.meta['interferometers'] contains "L1" %}'L1': {{ production.quality['lower-frequency']['L1']}},{% endif %} {% if production.meta['interferometers'] contains "V1" %} 'V1': {{ production.quality['lower-frequency']['V1']}} {% endif %} }
maximum-frequency={ {% if production.meta['interferometers'] contains "H1" %}'H1': {{ production.meta['quality']['high-frequency'] }},{% endif %} {% if production.meta['interferometers'] contains "L1" %}'L1': {{ production.meta['quality']['high-frequency'] }},{% endif %} {% if production.meta['interferometers'] contains "V1" %} 'V1': {{ production.meta['quality']['high-frequency'] }} {% endif %} }
zero-noise=False
# Roll off duration of tukey window in seconds, default is 0.4s
tukey-roll-off=0.4
# Resampling method to use: lal matches the resampling used by lalinference/BayesWave
resampling-method=lal
################################################################################
## Injection arguments
# Whether to include software injections and how to generate them.
################################################################################
# Create data from an injection file
injection=False
# A single injection dictionary given in the ini file
injection-dict=None
# Injection file to use. See `bilby_pipe_create_injection_file --help` for supported formats
injection-file=None
# Specific injections rows to use from the injection_file, e.g. `injection_numbers=[0,3] selects the zeroth and third row
injection-numbers=None
# The name of the waveform approximant to use to create injections. If none is specified, then the `waveform-approximant` will be used as the `injection-waveform-approximant`.
injection-waveform-approximant=None
################################################################################
## Job submission arguments
# How the jobs should be formatted, e.g., which job scheduler to use.
################################################################################
# Accounting group to use (see, https://accounting.ligo.org/user)
accounting={{ production.submission['accounting_group'] }}
# Output label
label=label
# Run the job locally, i.e., not through a batch submission
accounting={{ config['pipelines']['accounting'] }}
label={{ production.name }}
local=False
# Run the data generation job locally. This may be useful for running on a cluster where the compute nodes do not have internet access. For HTCondor, this is done using the local universe, for slurm, the jobs will be run at run-time
local-generation=False
# Run the plot job locally
local-plot=False
# Output directory
outdir={{ production.meta['rundir'] }}
# Time after which the job will self-evict when scheduler=condor. After this, condor will restart the job. Default is 28800. This is used to decrease the chance of HTCondor hard evictions
outdir={{ production.rundir }}
periodic-restart-time=28800
# Memory allocation request (GB), defaults is 4GB
request-memory=4.0
# Memory allocation request (GB) for data generation step
request-memory=8.0
request-memory-generation=None
# Use multi-processing (for available samplers: dynesty, ptemcee, cpnest)
request-cpus=1
# Singularity image to use
request-cpus=16
singularity-image=None
# Format submission script for specified scheduler. Currently implemented: SLURM
scheduler=condor
# Space-separated #SBATCH command line args to pass to slurm (slurm only)
scheduler-args=None
# Space-separated list of modules to load at runtime (slurm only)
scheduler-module=None
# Python environment to activate (slurm only)
scheduler-env=None
#
scheduler-analysis-time=7-00:00:00
# Attempt to submit the job after the build
submit=False
# Job priorities allow a user to sort their HTCondor jobs to determine which are tried to be run first. A job priority can be any integer: larger values denote better priority. By default HTCondor job priority=0.
condor-job-priority=0
# If true, use HTCondor file transfer mechanism, default is True. For non-condor schedulers, this option is ignored
transfer-files=True
# If given, an alternative path for the log output
transfer-files=False
log-directory=None
# Flag for online PE settings
online-pe=False
# If true, format condor submission for running on OSG, default is False
osg=False
################################################################################
## Likelihood arguments
# Options for setting up the likelihood.
################################################################################
# Boolean. If true, use a distance-marginalized likelihood
distance-marginalization=False
# Path to the distance-marginalization lookup table
distance-marginalization-lookup-table=None
# Boolean. If true, use a phase-marginalized likelihood
distance-marginalization=True
distance-marginalization-lookup-table=TD.npz
phase-marginalization=False
# Boolean. If true, use a time-marginalized likelihood
time-marginalization=False
# Boolean. If true, and using a time-marginalized likelihood 'time jittering' will be performed
time-marginalization=True
jitter-time=True
# Reference frame for the sky parameterisation, either 'sky' (default) or, e.g., 'H1L1'
reference-frame=sky
# Time parameter to sample in, either 'geocent' (default) or, e.g., 'H1'
reference-frame={% if production.meta['interferometers'] contains "H1" %}H1{% endif %}{% if production.meta['interferometers'] contains "L1" %}L1{% endif %}{% if production.meta['interferometers'] contains "V1" %}V1{% endif %}
time-reference=geocent
# The likelihood. Can be one of [GravitationalWaveTransient, ROQGravitationalWaveTransient] or python path to a bilby likelihood class available in the users installation. Need to specify --roq-folder if ROQ likelihood used
likelihood-type=GravitationalWaveTransient
# The data for ROQ
roq-folder=None
# If given, the ROQ weights to use (rather than building them). This must be given along with the roq-folder for checking
roq-weights=None
# Rescaling factor for the ROQ, default is 1 (no rescaling)
roq-scale-factor=1
# Additional keyword arguments to pass to the likelihood. Any arguments which are named bilby_pipe arguments, e.g., distance_marginalization should NOT be included. This is only used if you are not using the GravitationalWaveTransient or ROQGravitationalWaveTransient likelihoods
extra-likelihood-kwargs=None
################################################################################
## Output arguments
# What kind of output/summary to generate.
################################################################################
# Create diagnostic and posterior plots
create-plots=False
# Create calibration posterior plot
create-plots=True
plot-calibration=False
# Create intrinsic and extrinsic posterior corner plots
plot-corner=False
# Create 1-d marginal posterior plots
plot-marginal=False
# Create posterior skymap
plot-skymap=False
# Create waveform posterior plot
plot-waveform=False
# Format for making bilby_pipe plots, can be [png, pdf, html]. If specified format is not supported, will default to png.
plot-format=png
# Create a PESummary page
create-summary=False
# Email for notifications
email=None
# Notification setting for HTCondor jobs. One of 'Always','Complete','Error','Never'. If defined by 'Always', the owner will be notified whenever the job produces a checkpoint, as well as when the job completes. If defined by 'Complete', the owner will be notified when the job terminates. If defined by 'Error', the owner will only be notified if the job terminates abnormally, or if the job is placed on hold because of a failure, and not by user request. If defined by 'Never' (the default), the owner will not receive e-mail, regardless to what happens to the job. Note, an `email` arg is also required for notifications to be emailed.
notification=Never
# If given, add results to a directory with an existing summary.html file
existing-dir=None
# Directory to store summary pages. If not given, defaults to outdir/results_page
webdir={{ production.meta['webdir'] }}
# Arguments (in the form of a dictionary) to pass to the summarypages executable
webdir={{ config['general']['webroot'] }}/{{ production.event.name }}/{{ production.name }}
summarypages-arguments=None
################################################################################
## Prior arguments
# Specify the prior settings.
################################################################################
# The name of the prior set to base the prior on. Can be one of [PriorDict, BBHPriorDict, BNSPriorDict, CalibrationPriorDict]
default-prior=BBHPriorDict
# The symmetric width (in s) around the trigger time to search over the coalescence time
deltaT=0.2
# The prior file
prior-file=None
# A dictionary of priors
prior-dict=None
# Convert a flat-in chirp mass and mass-ratio prior file to flat in component mass during the post-processing. Note, the prior must be uniform in Mc and q with constraints in m1 and m2 for this to work
prior-file=./{{production.name}}.prior
convert-to-flat-in-component-mass=False
################################################################################
## Post processing arguments
# What post-processing to perform.
################################################################################
# An executable name for postprocessing. A single postprocessing job is run as a child of all analysis jobs
postprocessing-executable=None
# Arguments to pass to the postprocessing executable
postprocessing-arguments=None
# An executable name for postprocessing. A single postprocessing job is run as a child of each analysis job; note the difference with respect to postprocessing-executable
single-postprocessing-executable=None
# Arguments to pass to the single postprocessing executable. The str '$RESULT' will be replaced by the path to the individual result file
single-postprocessing-arguments=None
################################################################################
## Sampler arguments
# None
################################################################################
# Sampler to use
sampler=dynesty
# Random sampling seed
sampling-seed=None
# Number of identical parallel jobs to run per event
n-parallel=1
# Dictionary of sampler-kwargs to pass in, e.g., {nlive: 1000} OR pass pre-defined set of sampler-kwargs {Default, FastTest}
sampler-kwargs=Default
n-parallel=4
sampler-kwargs={'nlive': 2000, 'sample': 'rwalk', 'walks': 100, 'nact': 50, 'check_point_delta_t':1800, 'check_point_plot':True}
################################################################################
## Waveform arguments
# Setting for the waveform generator
################################################################################
# The waveform generator class, should be a python path. This will not be able to use any arguments not passed to the default.
waveform-generator=bilby.gw.waveform_generator.WaveformGenerator
# The reference frequency
reference-frequency=20
# The name of the waveform approximant to use for PE.
reference-frequency={{ production.meta['quality']['reference-frequency'] }}
waveform-approximant={{ production.meta['approximant'] }}
# Turns on waveform error catching
catch-waveform-errors=False
# Post-newtonian order to use for the spin
catch-waveform-errors=True
pn-spin-order=-1
# Post-Newtonian order to use for tides
pn-tidal-order=-1
# post-Newtonian order to use for the phase
pn-phase-order=-1
# Post-Newtonian order to use for the amplitude. Also used to determine the waveform starting frequency.
pn-amplitude-order=0
# Array of modes to use for the waveform. Should be a list of lists, eg. [[2,2], [2,-2]]
mode-array=None
# Name of the frequency domain source model. Can be one of [lal_binary_black_hole, lal_binary_neutron_star, lal_eccentric_binary_black_hole_no_spins, sinegaussian, supernova, supernova_pca_model] or any python path to a bilby source function in the user's installation, e.g. examp.source.bbh
pn-amplitude-order={{ production.meta['priors']['amp order'] }}
frequency-domain-source-model=lal_binary_black_hole
{% if production.event.repository %}
{% assign repo_dir = production.event.repository.directory %}
{% else %}
{% assign repo_dir = "." %}
{% endif %}
[analysis]
ifos={{ production.meta['interferometers'] }}
engine={{ production.meta['engine'] }}
nparallel=4
roq = False
coherence-test=False
upload-to-gracedb=False
singularity=False
osg=False
[paths]
webdir={{ config['general']['webroot'] }}/{{ production.event.name }}/{{ production.name }}
[input]
max-psd-length=10000
padding=16
minimum_realizations_number=8
events=all
analyse-all-time=False
timeslides=False
ignore-gracedb-psd=True
threshold-snr=3
gps-time-file =
ignore-state-vector = True
[condor]
lalsuite-install={{ config["pipelines"]["environment"] }}
datafind=%(lalsuite-install)s/bin/gw_data_find
mergeNSscript=%(lalsuite-install)s/bin/lalinference_nest2pos
mergeMCMCscript=%(lalsuite-install)s/bin/cbcBayesMCMC2pos
combinePTMCMCh5script=%(lalsuite-install)s/bin/cbcBayesCombinePTMCMCh5s
resultspage=%(lalsuite-install)s/bin/cbcBayesPostProc
segfind=%(lalsuite-install)s/bin/ligolw_segment_query
ligolw_print=%(lalsuite-install)s/bin/ligolw_print
coherencetest=%(lalsuite-install)s/bin/lalinference_coherence_test
lalinferencenest=%(lalsuite-install)s/bin/lalinference_nest
lalinferencemcmc=%(lalsuite-install)s/bin/lalinference_mcmc
lalinferencebambi=%(lalsuite-install)s/bin/lalinference_bambi
lalinferencedatadump=%(lalsuite-install)s/bin/lalinference_datadump
ligo-skymap-from-samples=%(lalsuite-install)s/bin/ligo-skymap-from-samples
ligo-skymap-plot=%(lalsuite-install)s/bin/ligo-skymap-plot
processareas=%(lalsuite-install)s/bin/process_areas
computeroqweights=%(lalsuite-install)s/bin/lalinference_compute_roq_weights
mpiwrapper=%(lalsuite-install)s/bin/lalinference_mpi_wrapper
gracedb=%(lalsuite-install)s/bin/gracedb
ppanalysis=%(lalsuite-install)s/bin/cbcBayesPPAnalysis
pos_to_sim_inspiral=%(lalsuite-install)s/bin/cbcBayesPosToSimInspiral
mpirun = %(lalsuite-install)s/bin/mpirun
accounting_group={{ config["pipelines"]["accounting"]}}
accounting_group_user=daniel.williams
[datafind]
url-type=file
types = {'H1': '{{ production.meta['data']['frame-types']['H1'] }}', 'L1': '{{ production.meta['data']['frame-types']['L1'] }}', 'V1': '{{ production.meta['data']['frame-types']['V1'] }}'}
[data]