Compare revisions

Showing 1437 additions and 107 deletions
doc/source/_static/img/mr-respond.png

132 KiB

{%- set logo = "gstlal.png" %}
{% extends "!layout.html" %}
GstLAL API
============
.. toctree::
:maxdepth: 2
:glob:
gstlal/python-modules/*modules
gstlal-inspiral/python-modules/*modules
gstlal-burst/python-modules/*modules
gstlal-ugly/python-modules/*modules
@@ -21,10 +21,12 @@ import sys
sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../../gstlal/python'))
sys.path.insert(0, os.path.abspath('../../gstlal-inspiral/python'))
#sys.path.insert(0, os.path.abspath('../../gstlal-burst/python'))
sys.path.insert(0, os.path.abspath('../../gstlal-calibration/python'))
sys.path.insert(0, os.path.abspath('../../gstlal-burst/python'))
sys.path.insert(0, os.path.abspath('../../gstlal-ugly/python'))
# on_rtd is whether we are on readthedocs.org, this line of code grabbed
# from docs.readthedocs.org
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
# -- General configuration ------------------------------------------------
@@ -35,16 +37,25 @@ sys.path.insert(0, os.path.abspath('../../gstlal-ugly/python'))
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc',
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.imgmath',
# 'sphinx.ext.imgmath',
'sphinx.ext.ifconfig',
'sphinx.ext.viewcode',
'sphinx.ext.githubpages',
'sphinx.ext.graphviz']
'sphinx.ext.graphviz',
'sphinx.ext.mathjax',
'myst_parser',
]
myst_enable_extensions = [
"amsmath",
"dollarmath",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -52,8 +63,8 @@ templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
source_suffix = ['.rst', '.md']
# source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
@@ -61,7 +72,7 @@ master_doc = 'index'
# General information about the project.
# FIXME get from autotools
project = u'GstLAL'
copyright = u'2018, GstLAL developers'
copyright = u'2021, GstLAL developers'
author = u'GstLAL developers'
# The version info for the project you're documenting, acts as replacement for
@@ -69,10 +80,9 @@ author = u'GstLAL developers'
# built documents.
#
# The short X.Y version.
# FIXME get from autotools
version = u'1.x'
#version = u'1.x'
# The full version, including alpha/beta/rc tags.
release = u'1.x'
release = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -98,20 +108,32 @@ todo_include_todos = True
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'classic'#'alabaster'
html_logo = "gstlal.png"
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
#html_theme_options = {}
def setup(app):
app.add_stylesheet('css/my_theme.css')
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
if not on_rtd: # only import and set the theme if we're building docs locally
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Custom sidebar templates, maps document names to template names.
#html_sidebars = { '**': ['navigation.html', 'relations.html', 'searchbox.html'] }
#html_last_updated_fmt = None
# Add a favicon to doc pages
html_favicon = '_static/favicon.ico'
# -- Options for HTMLHelp output ------------------------------------------
......
# Container Development Environment
The container development workflow consists of a few key points:
- Build tools provided by and used within a writable gstlal container.
- Editor/git used in or outside of the container as desired.
- Applications are run in the development container.
The benefits of developing in a writable container:
- Your builds do not depend on the software installed on the system, so you don't have to worry about behavior changes due to system package updates.
- Your build environment is the same as that of everyone else using the same base container, which makes for easier collaboration.
- Others can run your containers and get the same results, so you don't have to worry about environment mismatches.
## Create a writable container
The base of a development environment is a gstlal container. It is typical to start with the
current master build. However, since the build tools let you overwrite the install in the container, the
choice of branch in your gstlal repository matters more than the container you start with. The job of
the container is to provide a well-defined set of dependencies.
```bash
singularity build --sandbox --fix-perms CONTAINER_NAME docker://containers.ligo.org/lscsoft/gstlal:master
```
This will create a directory named CONTAINER_NAME. That directory is a *singularity container*.
## Check out gstlal
In a directory of your choice, under your home directory, run:
```
git clone https://git.ligo.org/lscsoft/gstlal DIRNAME
```
This will create a git directory named DIRNAME which is referred to in the following as your "gstlal dir". The gstlal dir
contains several directories that contain components that can be built independently (e.g., `gstlal`, `gstlal-inspiral`, `gstlal-ugly`, ...).
A common practice is to run the clone command in the CONTAINER_NAME directory and use `src` as `DIRNAME`. In this case, when you run your
container, your source will be available in the directory `/src`.
## Develop
Edit and make changes under your gstlal dir using editors and git outside of the container (or inside if you prefer).
## Build a component
To build a component:
1. cd to your gstlal directory
2. Run your container:
```
singularity run --writable -B $TMPDIR CONTAINER_NAME /bin/bash
```
3. cd to the component directory under your gstlal dir.
4. Initialize the build system for your component. You only need to do this once per container per component directory:
```
./00init.sh
./configure --prefix=/usr --libdir=/usr/lib64
```
The arguments to configure are required so that you overwrite the build of gstlal in your container.
Some components have dependencies on others. You should build GstLAL components in the following order:
1. `gstlal`
2. `gstlal-ugly`
3. `gstlal-inspiral`, `gstlal-burst`, `gstlal-calibration` (in any order)
For example, if you want to build `gstlal-ugly`, you should build `gstlal` first.
5. Run make and make install
```
make
make install
```
Note that the container is writable, so your installs will persist after you exit the container and run it again.
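The dependency ordering above can be sketched in Python. This is only illustrative: the dependency map below is inferred from the stated build order (gstlal first, then gstlal-ugly, then the rest), not taken from the build system itself.

```python
# Illustrative sketch: derive a valid build order from the component
# dependencies described above. The map is an assumption based on the
# stated ordering, not on the actual configure scripts.
deps = {
    "gstlal": [],
    "gstlal-ugly": ["gstlal"],
    "gstlal-inspiral": ["gstlal", "gstlal-ugly"],
    "gstlal-burst": ["gstlal", "gstlal-ugly"],
    "gstlal-calibration": ["gstlal", "gstlal-ugly"],
}

def build_order(deps):
    """Return components in an order where dependencies always come first."""
    order = []
    def visit(name):
        if name in order:
            return
        for dep in deps[name]:
            visit(dep)
        order.append(name)
    for name in sorted(deps):
        visit(name)
    return order

print(build_order(deps))  # gstlal comes first, gstlal-ugly second
```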
## Run your code
You can run your code in the following ways:
1. Run your container using singularity and issue commands interactively "inside the container":
```
singularity run --writable -B $TMPDIR PATH_TO_CONTAINER /bin/bash
/bin/gstlal_reference_psd --channel-name=H1=foo --data-source=white --write-psd=out.psd.xml --gps-start-time=1185493488 --gps-end-time=1185493788
```
2. Use `singularity exec` and give your command on the singularity command line:
```
singularity exec --writable -B $TMPDIR PATH_TO_CONTAINER /bin/gstlal_reference_psd --channel-name=H1=foo --data-source=white --write-psd=out.psd.xml --gps-start-time=1185493488 --gps-end-time=1185493788
```
3. Use your container in a new or existing [container-based gstlal workflow](/gstlal/cbc_analysis.html) on a cluster with a shared filesystem where your container resides (e.g., the CIT or PSU clusters, but not via the OSG). In order to run your code on the OSG, you would have to arrange to have your container published to CVMFS.
# Contributing Workflow
## Git Branching
The `gstlal` team uses the standard git-branch-and-merge workflow, which is briefly described
at [GitLab](https://docs.gitlab.com/ee/gitlab-basics/feature_branch_workflow.html) and fully described
at [BitBucket](https://www.atlassian.com/git/tutorials/comparing-workflows/feature-branch-workflow). As depicted below,
the workflow involves creating a new branch for your changes, reviewing that branch through the Merge Request
process, and then merging the new changes into the main branch.
![git-flow](_static/img/git-flow.png)
### Git Workflow
In general the steps for working with feature branches are:
1. Create a new branch from master: `git checkout -b feature-short-desc`
1. Edit code (and tests)
1. Commit changes: `git commit . -m "comment"`
1. Push branch: `git push origin feature-short-desc`
1. Create merge request on GitLab
## Merge Requests
### Creating a Merge Request
Once you push a feature branch, GitLab will prompt you on the gstlal repo [home page](). Click “Create Merge Request”, or
go to the branches page (Repository > Branches) and select “Merge Request” next to your branch.
![mr-create](_static/img/mr-create.png)
When creating a merge request:
1. Add short, descriptive title
1. Add description
- (Uses markdown .md-file style)
- Summary of additions / changes
- Describe any tests run (other than CI)
1. Click “Create Merge Request”
![mr-create](_static/img/mr-create-steps.png)
### Collaborating on merge requests
The Overview page gives a general summary of the merge request, including:
1. A link to a page for viewing the changes in detail (see below)
1. Code Review Request
1. Test Suite Status
1. Discussion History
1. Commenting
![mr-overview](_static/img/mr-overview.png)
#### Leaving a Review
The View Changes page gives a detailed look at the changes made on the feature branch, including:
1. List of files changed
1. Changes
- Red = removed
- Green = added
1. Click to leave comment on line
1. Choose “Start a review”
![mr-changes](_static/img/mr-changes.png)
After a review is started:
1. Comments are marked as pending
1. Submit the review to publish them
![mr-changes](_static/img/mr-change-submit.png)
#### Responding to Reviews
Reply to code review comments as needed. Use “Start a review” to submit all replies at once.
![mr-changes](_static/img/mr-respond.png)
Resolve threads when discussion on a particular piece of code is complete
![mr-changes](_static/img/mr-resolve.png)
### Merging the Merge Request
Merging:
1. Check all tests passed
1. Check all review comments resolved
1. Check at least one review approval
1. Before clicking “Merge”
- Check “Delete source branch”
- Check “Squash commits” if branch history not tidy
1. Click “Merge”
1. Celebrate
![mr-merge](_static/img/mr-merge.png)
# Contributing Documentation
This guide assumes the reader has read the [Contribution workflow](contributing.md) for details about making changes to
code within the gstlal repo, since documentation files are updated by a similar workflow.
## Writing Documentation
In general, the gstlal documentation uses [RestructuredText (rst)](https://docutils.sourceforge.io/rst.html) files
ending in `.rst` or [Markdown](https://www.markdownguide.org/basic-syntax/) files ending in `.md`.
The documentation files for gstlal are located under `gstlal/doc/source`. If you add a new page (doc file), make sure to
reference it from the main index page.
Useful Links:
- [MyST Directive Syntax](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#syntax-directives)
Executables
===============
.. toctree::
:maxdepth: 2
gstlal/bin/bin
gstlal-inspiral/bin/bin
gstlal-burst/bin/bin
gstlal-ugly/bin/bin
.. _extrinsic-parameters-generation:
Generating Extrinsic Parameter Distributions
============================================
This tutorial will show you how to regenerate the extrinsic parameter
distributions used to determine the likelihood ratio term that accounts for the
relative times-of-arrival, phases, and amplitudes of a CBC signal at each of
the LVK detectors.
There are two parts described below that represent different terms. Full
documentation can be found here:
https://lscsoft.docs.ligo.org/gstlal/gstlal-inspiral/python-modules/stats.inspiral_extrinsics.html
Setting up the dt, dphi, dsnr dag
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Set up a work area and obtain the necessary input files
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
You will need to create a directory on a cluster running HTCondor, e.g.,
.. code:: bash
$ mkdir dt_dphi_dsnr
$ cd dt_dphi_dsnr
This workflow requires estimates of the power spectral densities for LIGO,
Virgo, and KAGRA. For this tutorial we use projected O4 sensitivities from the
LIGO DCC. Feel free to substitute these to suit your needs.
We will use the following files found at: https://dcc.ligo.org/LIGO-T2000012 ::
aligo_O4high.txt
avirgo_O4high_NEW.txt
kagra_3Mpc.txt
Download the above files and place them in the dt_dphi_dsnr directory that you are currently in.
2. Execute commands to generate the HTCondor DAG
"""""""""""""""""""""""""""""""""""""""""""""""""
For this tutorial, we assume that you have a singularity container with the
gstlal software. More details can be found here:
https://lscsoft.docs.ligo.org/gstlal/installation.html
The following Makefile illustrates the sequence of commands required to generate an HTCondor workflow. You can copy this into a file called ``Makefile`` and modify it as you wish.
.. code:: make
SINGULARITY_IMAGE=/ligo/home/ligo.org/chad.hanna/development/gstlal-dev/
sexec=singularity exec $(SINGULARITY_IMAGE)
all: dt_dphi.dag
# 417.6 Mpc Horizon
H1_aligo_O4high_psd.xml.gz: aligo_O4high.txt
$(sexec) gstlal_psd_xml_from_asd_txt --instrument=H1 --output $@ $<
# 417.6 Mpc Horizon
L1_aligo_O4high_psd.xml.gz: aligo_O4high.txt
$(sexec) gstlal_psd_xml_from_asd_txt --instrument=L1 --output $@ $<
# 265.8 Mpc Horizon
V1_avirgo_O4high_NEW_psd.xml.gz: avirgo_O4high_NEW.txt
$(sexec) gstlal_psd_xml_from_asd_txt --instrument=V1 --output $@ $<
# 6.16 Mpc Horizon
K1_kagra_3Mpc_psd.xml.gz: kagra_3Mpc.txt
$(sexec) gstlal_psd_xml_from_asd_txt --instrument=K1 --output $@ $<
O4_projected_psds.xml.gz: H1_aligo_O4high_psd.xml.gz L1_aligo_O4high_psd.xml.gz V1_avirgo_O4high_NEW_psd.xml.gz K1_kagra_3Mpc_psd.xml.gz
$(sexec) ligolw_add --output $@ $^
# SNR ratios according to horizon ratios
dt_dphi.dag: O4_projected_psds.xml.gz
$(sexec) gstlal_inspiral_create_dt_dphi_snr_ratio_pdfs_dag \
--psd-xml $< \
--H-snr 8.00 \
--L-snr 8.00 \
--V-snr 5.09 \
--K-snr 0.12 \
--m1 1.4 \
--m2 1.4 \
--s1 0.0 \
--s2 0.0 \
--flow 15.0 \
--fhigh 1024.0 \
--NSIDE 16 \
--n-inc-angle 33 \
--n-pol-angle 33 \
--singularity-image $(SINGULARITY_IMAGE)
clean:
rm -rf H1_aligo_O4high_psd.xml.gz L1_aligo_O4high_psd.xml.gz V1_avirgo_O4high_NEW_psd.xml.gz logs dt_dphi.dag gstlal_inspiral_compute_dtdphideff_cov_matrix.sub gstlal_inspiral_create_dt_dphi_snr_ratio_pdfs.sub gstlal_inspiral_add_dt_dphi_snr_ratio_pdfs.sub dt_dphi.sh
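The ``--V-snr`` and ``--K-snr`` values in the Makefile follow from scaling the fiducial LIGO SNR of 8 by the ratio of each detector's horizon distance to the 417.6 Mpc LIGO horizon, as the "SNR ratios according to horizon ratios" comment indicates. A quick Python check, using only the horizon values from the Makefile comments:

```python
# Scale a fiducial SNR of 8 (LIGO) by each detector's horizon distance
# relative to the 417.6 Mpc aligo_O4high horizon, per the Makefile comments.
ligo_horizon = 417.6  # Mpc
horizons = {"V1": 265.8, "K1": 6.16}  # Mpc, from the Makefile comments

snrs = {ifo: round(8.0 * d / ligo_horizon, 2) for ifo, d in horizons.items()}
print(snrs)  # {'V1': 5.09, 'K1': 0.12}, matching --V-snr and --K-snr
```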
3. Submit the HTCondor DAG and monitor the output
""""""""""""""""""""""""""""""""""""""""""""""""""
Next, run make to generate the HTCondor DAG:
.. code:: bash
$ make
Then submit the DAG
.. code:: bash
$ condor_submit_dag dt_dphi.dag
You can check the DAG's progress by running:
.. code:: bash
$ tail -f dt_dphi.dag.dagman.out
4. Test the output
"""""""""""""""""""
When the DAG completes successfully, you should have a file called ``inspiral_dtdphi_pdf.h5``. You can verify that this file works with a python terminal, e.g.,
.. code:: bash
$ singularity exec /ligo/home/ligo.org/chad.hanna/development/gstlal-dev/ python3
Python 3.6.8 (default, Nov 10 2020, 07:30:01)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from gstlal.stats.inspiral_extrinsics import InspiralExtrinsics
>>> IE = InspiralExtrinsics(filename='inspiral_dtdphi_pdf.h5')
>>>
Setting up probability of instrument combinations dag
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Set up a work area and obtain the necessary input files
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
You will need to create a directory on a cluster running HTCondor, e.g.,
.. code:: bash
$ mkdir p_of_instruments
$ cd p_of_instruments
2. Execute commands to generate the HTCondor DAG
"""""""""""""""""""""""""""""""""""""""""""""""""
Below is a sample Makefile that will work if you are using singularity:
.. code:: make
SINGULARITY_IMAGE=/ligo/home/ligo.org/chad.hanna/development/gstlal-dev/
sexec=singularity exec $(SINGULARITY_IMAGE)
all:
$(sexec) gstlal_inspiral_create_p_of_ifos_given_horizon_dag --instrument=H1 --instrument=L1 --instrument=V1 --instrument=K1 --singularity-image $(SINGULARITY_IMAGE)
clean:
rm -rf gstlal_inspiral_add_p_of_ifos_given_horizon.sub gstlal_inspiral_create_p_of_ifos_given_horizon.sub logs p_of_I_H1K1L1V1.dag p_of_I_H1K1L1V1.sh
3. Submit the HTCondor DAG
"""""""""""""""""""""""""""
.. code:: bash
$ condor_submit_dag p_of_I_H1K1L1V1.dag
See Also
^^^^^^^^
* https://arxiv.org/abs/1901.02227
* https://lscsoft.docs.ligo.org/gstlal/gstlal-inspiral/python-modules/stats.inspiral_extrinsics.html
* https://lscsoft.docs.ligo.org/gstlal/gstlal-inspiral/bin/gstlal_inspiral_create_dt_dphi_snr_ratio_pdfs.html
* https://lscsoft.docs.ligo.org/gstlal/gstlal-inspiral/bin/gstlal_inspiral_create_dt_dphi_snr_ratio_pdfs_dag.html
* https://lscsoft.docs.ligo.org/gstlal/gstlal-inspiral/bin/gstlal_inspiral_compute_dtdphideff_cov_matrix.html
.. _feature_extraction:
Feature Extraction
====================================================================================================
SNAX (Signal-based Noise Acquisition and eXtraction) comprises the `snax` module and related SNAX executables,
which provide the libraries needed to identify glitches in low latency using auxiliary channel data.
SNAX functions as a modeled search for data quality, applying matched filtering
to auxiliary channel timeseries using waveforms that model a large number of glitch classes. Its primary
purpose is to whiten incoming auxiliary channels and extract relevant features in low latency.
.. _feature_extraction-intro:
Introduction
------------
There are two different modes of feature generation:
1. **Timeseries:**
Production of regularly-spaced feature rows, containing the SNR, waveform parameters,
and the time of the loudest event in a sampling time interval.
2. **ETG:**
This produces output that resembles that of a traditional event trigger generator (ETG), in
which only feature rows above an SNR threshold will be produced.
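The difference between the two modes can be sketched as follows. This is a simplified illustration, assuming feature rows are bare (time, SNR) pairs; the actual SNAX rows also carry waveform parameters:

```python
# Simplified sketch of the two feature-generation modes.
# Rows are (time, snr) pairs; real SNAX rows include waveform parameters too.
rows = [(0.2, 4.0), (0.7, 9.5), (0.9, 6.1), (1.3, 3.1), (1.8, 12.2), (2.4, 4.5)]

def timeseries_mode(rows, interval=1.0):
    """One row per sampling interval: the loudest (max-SNR) event in it."""
    loudest = {}
    for t, snr in rows:
        bucket = int(t // interval)
        if bucket not in loudest or snr > loudest[bucket][1]:
            loudest[bucket] = (t, snr)
    return [loudest[b] for b in sorted(loudest)]

def etg_mode(rows, threshold=5.0):
    """Only rows whose SNR exceeds a threshold, like a traditional ETG."""
    return [(t, snr) for t, snr in rows if snr > threshold]

print(timeseries_mode(rows))  # one loudest row per 1 s interval
print(etg_mode(rows))         # only rows above SNR 5
```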
One useful aspect of using a matched filter approach to detect glitches is the ability to switch between
different glitch templates or to generate a heterogeneous bank of templates. Currently, Sine-Gaussian,
half-Sine-Gaussian, and tapered Sine-Gaussian waveforms are implemented for detecting glitches, but the feature
extractor is designed to be fairly modular, so it isn't difficult to design and add new waveforms.
Because SNAX uses time-domain convolution to matched filter auxiliary channel timeseries
against glitch waveforms, latencies can be much lower than in traditional ETGs. The latency for writing
features to disk is O(5 s) in the current layout when using waveforms whose peak occurs at the edge of the
template (zero-latency templates). Otherwise, extra latency is incurred due to the non-causal nature of
the waveform itself.
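The zero-latency idea can be illustrated with a toy half-Sine-Gaussian whose envelope peaks at the template's final sample, so the filter output peaks as soon as the glitch has arrived. This is a minimal sketch for intuition only, not the SNAX waveform code:

```python
import math

def half_sine_gaussian(f0, q, rate=2048):
    """Toy half-Sine-Gaussian: a cosine at f0 under a one-sided Gaussian
    envelope that rises to its maximum at the template's last sample."""
    tau = q / (2.0 * math.pi * f0)   # decay time set by the quality factor
    n = max(2, int(3 * tau * rate))  # keep ~3 decay times of samples
    # Sample times run from -(n-1)/rate up to 0: the peak sits at the very
    # edge of the template, which is what makes it "zero latency".
    ts = [(i - (n - 1)) / rate for i in range(n)]
    return [math.exp(-(t / tau) ** 2) * math.cos(2 * math.pi * f0 * t) for t in ts]

h = half_sine_gaussian(f0=128.0, q=10.0)
# The loudest sample is the final one; a non-causal waveform (peak in the
# interior of the template) would instead add latency equal to the time
# between its peak and its end.
```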
.. graphviz::
digraph llpipe {
labeljust = "r";
label="gstlal_snax_extract"
rankdir=LR;
graph [fontname="Roman", fontsize=24];
edge [ fontname="Roman", fontsize=10 ];
node [fontname="Roman", shape=box, fontsize=11];
subgraph clusterNodeN {
style=rounded;
label="gstreamer pipeline";
labeljust = "r";
fontsize = 14;
H1L1src [label="H1(L1) data source:\n mkbasicmultisrc()", color=red4];
Aux1 [label="Auxiliary channel 1", color=red4];
Aux2 [label="Auxiliary channel 2", color=green4];
AuxN [label="Auxiliary channel N", color=magenta4];
Multirate1 [label="Auxiliary channel 1\nWhiten/Downsample", color=red4];
Multirate2 [label="Auxiliary channel 2\nWhiten/Downsample", color=green4];
MultirateN [label="Auxiliary channel N\nWhiten/Downsample", color=magenta4];
FilterBankAux1Rate1 [label="Auxiliary Channel 1:\nGlitch Filter Bank", color=red4];
FilterBankAux1Rate2 [label="Auxiliary Channel 1:\nGlitch Filter Bank", color=red4];
FilterBankAux1RateN [label="Auxiliary Channel 1:\nGlitch Filter Bank", color=red4];
FilterBankAux2Rate1 [label="Auxiliary Channel 2:\nGlitch Filter Bank", color=green4];
FilterBankAux2Rate2 [label="Auxiliary Channel 2:\nGlitch Filter Bank", color=green4];
FilterBankAux2RateN [label="Auxiliary Channel 2:\nGlitch Filter Bank", color=green4];
FilterBankAuxNRate1 [label="Auxiliary Channel N:\nGlitch Filter Bank", color=magenta4];
FilterBankAuxNRate2 [label="Auxiliary Channel N:\nGlitch Filter Bank", color=magenta4];
FilterBankAuxNRateN [label="Auxiliary Channel N:\nGlitch Filter Bank", color=magenta4];
TriggerAux1Rate1 [label="Auxiliary Channel 1:\nMax SNR Feature (N Hz)", color=red4];
TriggerAux1Rate2 [label="Auxiliary Channel 1:\nMax SNR Feature (N Hz)", color=red4];
TriggerAux1RateN [label="Auxiliary Channel 1:\nMax SNR Feature (N Hz)", color=red4];
TriggerAux2Rate1 [label="Auxiliary Channel 2:\nMax SNR Feature (N Hz)", color=green4];
TriggerAux2Rate2 [label="Auxiliary Channel 2:\nMax SNR Feature (N Hz)", color=green4];
TriggerAux2RateN [label="Auxiliary Channel 2:\nMax SNR Feature (N Hz)", color=green4];
TriggerAuxNRate1 [label="Auxiliary Channel N:\nMax SNR Feature (N Hz)", color=magenta4];
TriggerAuxNRate2 [label="Auxiliary Channel N:\nMax SNR Feature (N Hz)", color=magenta4];
TriggerAuxNRateN [label="Auxiliary Channel N:\nMax SNR Feature (N Hz)", color=magenta4];
H1L1src -> Aux1;
H1L1src -> Aux2;
H1L1src -> AuxN;
Aux1 -> Multirate1;
Aux2 -> Multirate2;
AuxN -> MultirateN;
Multirate1 -> FilterBankAux1Rate1 [label="4096Hz"];
Multirate2 -> FilterBankAux2Rate1 [label="4096Hz"];
MultirateN -> FilterBankAuxNRate1 [label="4096Hz"];
Multirate1 -> FilterBankAux1Rate2 [label="2048Hz"];
Multirate2 -> FilterBankAux2Rate2 [label="2048Hz"];
MultirateN -> FilterBankAuxNRate2 [label="2048Hz"];
Multirate1 -> FilterBankAux1RateN [label="Nth-pow-of-2 Hz"];
Multirate2 -> FilterBankAux2RateN [label="Nth-pow-of-2 Hz"];
MultirateN -> FilterBankAuxNRateN [label="Nth-pow-of-2 Hz"];
FilterBankAux1Rate1 -> TriggerAux1Rate1;
FilterBankAux1Rate2 -> TriggerAux1Rate2;
FilterBankAux1RateN -> TriggerAux1RateN;
FilterBankAux2Rate1 -> TriggerAux2Rate1;
FilterBankAux2Rate2 -> TriggerAux2Rate2;
FilterBankAux2RateN -> TriggerAux2RateN;
FilterBankAuxNRate1 -> TriggerAuxNRate1;
FilterBankAuxNRate2 -> TriggerAuxNRate2;
FilterBankAuxNRateN -> TriggerAuxNRateN;
}
Synchronize [label="Synchronize buffers by timestamp"];
Extract [label="Extract features from buffer"];
Save [label="Save triggers to disk"];
Kafka [label="Push features to queue"];
TriggerAux1Rate1 -> Synchronize;
TriggerAux1Rate2 -> Synchronize;
TriggerAux1RateN -> Synchronize;
TriggerAux2Rate1 -> Synchronize;
TriggerAux2Rate2 -> Synchronize;
TriggerAux2RateN -> Synchronize;
TriggerAuxNRate1 -> Synchronize;
TriggerAuxNRate2 -> Synchronize;
TriggerAuxNRateN -> Synchronize;
Synchronize -> Extract;
Extract -> Save [label="Option 1"];
Extract -> Kafka [label="Option 2"];
}
.. _feature_extraction-highlights:
Highlights
----------
* Launch SNAX jobs in online or offline mode:
* Online: Using /shm or framexmit protocol
* Offline: Read frames off disk
* Online/Offline DAGs available for launching jobs.
* Offline DAG parallelizes by time, channels are processed sequentially by subsets to reduce I/O concurrency issues.
* On-the-fly PSD generation (or take in a prespecified PSD)
* Auxiliary channels to be processed can be specified in two ways:
* Channel list .INI file, provided by DetChar. This provides ways to filter channels by safety and subsystem.
* Channel list .txt file, one line per channel in the form H1:CHANNEL_NAME:2048.
* Configurable min/max frequency bands for aux channel processing in powers of two. The default here is 32 - 2048 Hz.
* Verbose latency output at various stages of the pipeline. If regular verbosity is specified, latencies are given only when files are written to disk.
* Various file transfer/saving options:
* Disk: HDF5
* Transfer: Kafka (used for low-latency implementation)
* Various waveform configuration options:
* Waveform type (currently Sine-Gaussian and half-Sine-Gaussian only)
* Specify parameter ranges (frequency, Q for Sine-Gaussian based)
* Min mismatch between templates
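The power-of-two band structure mentioned above can be sketched by enumerating the rates between the configured min and max frequencies (32-2048 Hz by default). This is illustrative only, not the SNAX implementation:

```python
def power_of_two_rates(f_min=32, f_max=2048):
    """Enumerate the power-of-two rates between f_min and f_max, inclusive,
    matching the default 32 - 2048 Hz processing bands described above."""
    rates = []
    rate = f_min
    while rate <= f_max:
        rates.append(rate)
        rate *= 2
    return rates

print(power_of_two_rates())  # [32, 64, 128, 256, 512, 1024, 2048]
```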
.. _feature_extraction-online:
Online Operation
----------------
An online DAG is provided in /gstlal-burst/share/snax/Makefile.gstlal_feature_extractor_online
as a convenient way to launch online feature extraction jobs, as well as auxiliary jobs as
needed (synchronizer/hdf5 file sinks). A condensed list of instructions for use is also provided within the Makefile itself.
There are four separate modes that can be used to launch online jobs:
1. Auxiliary channel ingestion:
a. Reading from framexmit protocol (DATA_SOURCE=framexmit).
This mode is recommended when reading in live data from LHO/LLO.
b. Reading from shared memory (DATA_SOURCE=lvshm).
This mode is recommended for reading in data for O2 replay (e.g. UWM).
2. Data transfer of features:
a. Saving features directly to disk, e.g. no data transfer.
This will save features to disk directly from the feature extractor,
and saves features periodically via hdf5.
b. Transfer of features via Kafka topics.
This requires a Kafka/Zookeeper service to be running (either an existing LDG
service or your own). Features are transferred via Kafka from the feature extractor,
parallel instances of the extractor are synchronized, and the features are then sent
downstream where they can be read by other processes (e.g. iDQ). In addition, a streaming
hdf5 file sink is launched, which periodically dumps features to disk.
In order to start up online runs, you'll need an installation of gstlal. An installation Makefile that
includes Kafka dependencies is located at: gstlal/gstlal-burst/share/feature_extractor/Makefile.gstlal_idq_icc
To run, making sure that the correct environment is sourced:
.. code:: bash
$ make -f Makefile.gstlal_feature_extractor_online
Then launch the DAG with:
.. code:: bash
$ condor_submit_dag feature_extractor_pipe.dag
.. _feature_extraction-offline:
Offline Operation
-----------------
An offline DAG is provided in /gstlal-burst/share/snax/Makefile.gstlal_feature_extractor_offline
in order to provide a convenient way to launch offline feature extraction jobs. A condensed list of
instructions for use is also provided within the Makefile itself.
For general use cases, the only configuration options that need to be changed are:
* User/Accounting tags: GROUP_USER, ACCOUNTING_TAG
* Analysis times: START, STOP
* Data ingestion: IFO, CHANNEL_LIST
* Waveform parameters: WAVEFORM, MISMATCH, QHIGH
In order to start up offline runs, you'll need an installation of gstlal. An installation Makefile that
includes Kafka dependencies is located at: gstlal/gstlal-burst/share/feature_extractor/Makefile.gstlal_idq_icc
To generate a DAG, making sure that the correct environment is sourced:
.. code:: bash
$ make -f Makefile.gstlal_feature_extractor_offline
Then launch the DAG with:
.. code:: bash
$ condor_submit_dag feature_extractor_pipe.dag
Getting started
===============
You can get a development copy of the gstlal software suite from git. Doing this will, at minimum, also require a development copy of lalsuite.
* https://git.ligo.org/lscsoft/gstlal
* https://git.ligo.org/lscsoft/lalsuite
Source tarballs are available here: http://software.ligo.org/lscsoft/source/
Limited binary packages are available here:
Building and installing from source follows the normal GNU build procedures
involving:
1. ./00init.sh
2. ./configure
3. make
4. make install.
You should build the packages in the order gstlal, gstlal-ugly,
gstlal-calibration, gstlal-inspiral. If you are building to a non-FHS location
(e.g., your home directory) you will need to ensure some environment variables
are set so that your installation will function. The following five variables
must be set. As **just an example**::
GI_TYPELIB_PATH="/path/to/your/installation/lib/girepository-1.0:${GI_TYPELIB_PATH}"
GST_PLUGIN_PATH="/path/to/your/installation/lib/gstreamer-0.10:${GST_PLUGIN_PATH}"
PATH="/path/to/your/installation/bin:${PATH}"
# Debian systems need lib, RH systems need lib64, including both doesn't hurt
PKG_CONFIG_PATH="/path/to/your/installation/lib/pkgconfig:/path/to/your/installation/lib64/pkgconfig:${PKG_CONFIG_PATH}"
# Debian systems need lib, RH systems need lib and lib64
PYTHONPATH="/path/to/your/installation/lib64/python2.7/site-packages:/path/to/your/installation/lib/python2.7/site-packages:$PYTHONPATH"
GstLAL burst code
=================
.. toctree::
:maxdepth: 2
.. This code should be uncommented once the burst package is in good shape
..
bin/bin
python-modules/modules
GstLAL burst project
====================
.. toctree::
:maxdepth: 2
overview
code
Overview
========
FIXME
GstLAL calibration code
=======================
.. toctree::
:maxdepth: 2
bin/bin
python-modules/modules
GstLAL calibration project
==========================
.. toctree::
:maxdepth: 2
overview
code
Overview
========
FIXME
GstLAL inspiral code
====================
.. toctree::
:maxdepth: 2
bin/bin
python-modules/modules
GstLAL inspiral project
=======================
.. toctree::
:maxdepth: 2
overview
code