Running an online compact binary coalescence analysis
========================================================================
Prerequisites
-------------
- Fully functional gstlal, gstlal-ugly, and gstlal-inspiral installation
- Condor-managed computing resource using the LIGO Data Grid configuration,
  with dedicated nodes to support online jobs (which run indefinitely)
- Network streaming gravitational-wave data
- Optional, but recommended: accounts for LVAlert and a robot certificate to
  authenticate uploads to the Gravitational-wave Candidate Event Database
  (GraceDB)

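Before going further, it is worth a quick check that the prerequisites are
actually in place. The following is a minimal sanity check, assuming the
gstlal Python packages and the standard LIGO Data Grid tools are on your
path::

   # confirm the gstlal stack is importable
   python3 -c "import gstlal"
   # confirm the Condor pool is reachable
   condor_status -total
   # confirm a valid (robot) certificate, if you plan to upload to GraceDB
   grid-proxy-info
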
Introduction
------------
This tutorial will help you set up a real-time gravitational-wave search
for merging neutron stars and black holes.

The online analysis has a somewhat involved setup procedure. This
documentation covers all of it. The steps are:
1. Generate template banks for the target search area
2. Decompose the template waveforms using the SVD in chirp mass and chi bins
3. Set up and run the actual online analysis

You can expect the setup (steps 1 and 2) to take several days. Furthermore,
the analysis (step 3) requires at minimum several days of burn-in time to
learn the noise statistics of the data before it should be allowed to submit
candidate gravitational-wave events. Plan accordingly.

**Note: this tutorial assumes a specific directory structure and certain
configuration files. These can and should be changed by the user. This
tutorial should be considered a guide, not cut-and-paste instructions.**

Generate template banks for the target search area
--------------------------------------------------
This tutorial will describe the steps relative to the root directory on the
CIT cluster::

   /home/gstlalcbc/observing/3/online/sept_opa

While not necessary, it is best to organize the analysis into distinct
sub-directories. We will do that for this tutorial::

   mkdir -p sept_opa/banks/bns sept_opa/banks/nsbh sept_opa/banks/bbh sept_opa/banks/imbh

Making the BNS bank
^^^^^^^^^^^^^^^^^^^
Go into the bns directory and get the example configuration file from GitLab::

   cd sept_opa/banks/bns
   wget https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/sbank_bns.ini

You will also need an **appropriate** PSD for the data you intend to analyze.
Here is an example file, but it is important that you use an appropriate one::

   wget https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/H1L1V1-REFERENCE_PSD-1186624818-687900.xml.gz

**NOTE: you will need to modify the content for your code installation and
desired parameter space - this is simply an example file. You can see
lalapps_cbc_sbank --help for more information.**

Next, generate the condor dag by running lalapps_cbc_sbank_pipe::

   lalapps_cbc_sbank_pipe --config-file sbank_bns.ini --user-tag GSTLAL_BNS

Submit it to condor::

   condor_submit_dag GSTLAL_BNS.dag

You can monitor the progress by doing::

   tail -f GSTLAL_BNS.dag.dagman.out

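Besides tailing the dagman log, you can watch the queue itself with the
standard HTCondor tools (assuming a typical LIGO Data Grid configuration)::

   # one summary line per DAG
   condor_q -dag
   # all of your jobs
   condor_q $USER
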
You need to wait for the BNS bank to finish before moving on to the SVD
decomposition step for the BNS bank; however, the other banks (NSBH, BBH,
IMBH) can be generated simultaneously.

Making the NSBH, BBH, and IMBH banks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can repeat the above procedure for generating the NSBH, BBH and IMBH banks.
You will need to change the sbank configuration file (.ini). Examples can be
found here:
- https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/sbank_nsbh.ini
- https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/sbank_bbh.ini
- https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/sbank_imbh.ini
You can generate all of these banks in parallel.
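Since the banks are independent, the fetch-and-submit cycle can be scripted.
Here is a sketch that reuses the commands shown above for the BNS bank; the
paths and user tags are illustrative::

   for bank in nsbh bbh imbh; do
       (
       cd /home/gstlalcbc/observing/3/online/sept_opa/banks/$bank
       wget https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/sbank_${bank}.ini
       tag=GSTLAL_$(echo $bank | tr '[:lower:]' '[:upper:]')
       lalapps_cbc_sbank_pipe --config-file sbank_${bank}.ini --user-tag $tag
       condor_submit_dag ${tag}.dag
       )
   done
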
Decompose the template waveforms using the SVD in chirp mass and chi bins
--------------------------------------------------------------------------
In order to remain organized, we will make new directories for the
SVD-decomposed template banks. First go to the project's root directory,
e.g.::

   cd /home/gstlalcbc/observing/3/online/

Then make new directories for the banks::

   mkdir -p sept_opa/svd/bns sept_opa/svd/nsbh sept_opa/svd/bbh sept_opa/svd/imbh

Decomposing the BNS bank
^^^^^^^^^^^^^^^^^^^^^^^^
Go into the bns svd sub-directory::

   cd sept_opa/svd/bns

Get the example config file::

   wget https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/Makefile.bns_svd

**NOTE: this file is provided as an example. You will, in general, have to
adapt it to the specifics of your environment and the search you plan to
conduct.**

Then run make to generate an SVD dag::

   make -f Makefile.bns_svd

Submit it::

   condor_submit_dag bank.dag

You have to wait for this dag to finish before starting the actual analysis.
Decomposing the NSBH, BBH and IMBH banks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can repeat the above procedure for the NSBH, BBH, and IMBH banks; a loop
over all three is sketched below. As before, modify these example Makefiles
to suit your needs:

- https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/Makefile.nsbh_svd
- https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/Makefile.bbh_svd
- https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/Makefile.imbh_svd
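As with the template banks, the three remaining decompositions are
independent and can be driven by a small loop. This sketch assumes each
Makefile produces a bank.dag as in the BNS case, and that you have already
adapted each Makefile to your environment::

   for bank in nsbh bbh imbh; do
       (
       cd /home/gstlalcbc/observing/3/online/sept_opa/svd/$bank
       wget https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/Makefile.${bank}_svd
       make -f Makefile.${bank}_svd
       condor_submit_dag bank.dag
       )
   done
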
Combining the SVD bank caches into a single cache
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to move to the next step, one must combine the cache files after all
of the SVD jobs have finished::

   cd /home/gstlalcbc/observing/3/online/sept_opa/svd

Then combine the cache files with::

   cat bns/H1_bank.cache bbh/H1_bank.cache nsbh/H1_bank.cache imbh/H1_bank.cache > H1_bank.cache
   cat bns/L1_bank.cache bbh/L1_bank.cache nsbh/L1_bank.cache imbh/L1_bank.cache > L1_bank.cache
   cat bns/V1_bank.cache bbh/V1_bank.cache nsbh/V1_bank.cache imbh/V1_bank.cache > V1_bank.cache

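A quick check that the concatenation picked up every sub-bank; the line
counts depend on your bank sizes, so treat them as a consistency check rather
than fixed numbers::

   wc -l H1_bank.cache L1_bank.cache V1_bank.cache
   # spot-check that the entries are valid LAL cache lines
   head -n 1 H1_bank.cache
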
Set up and run the actual online analysis
------------------------------------------

You need to make a directory for the analysis results, e.g.::

   cd /home/gstlalcbc/observing/3/online/
   mkdir trigs
   cd trigs

Then get an example Makefile::

   wget https://git.ligo.org/lscsoft/gstlal/raw/master/gstlal-inspiral/share/O3/sept_opa/Makefile.online_analysis

Modify the example Makefile to your needs. **NOTE: when starting an analysis
from scratch, it is important to set --gracedb-far-threshold = 1.**

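If the Makefile exposes the threshold as a variable (the variable name below
is hypothetical; check your copy of the Makefile), it can also be overridden
at make time rather than edited in place::

   make -f Makefile.online_analysis GRACEDB_FAR_THRESHOLD=1
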
Run make::

   make -f Makefile.online_analysis

And submit the condor dag::

   condor_submit_dag trigger_pipe.dag

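As with the bank dags, you can follow its progress through the dagman log
(the file name assumes the dag shown above)::

   tail -f trigger_pipe.dag.dagman.out
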
Basic LIGO/ALIGO colored Gaussian noise on the command line
-----------------------------------------------------------
@@ -5,3 +5,5 @@ Tutorials
   :maxdepth: 1

   gstlal_fake_data_overview
   online_analysis
   offline_analysis
.. _workflow-config:
Workflow Configuration
=======================
WRITEME
#!/usr/bin/python
#!/usr/bin/env python3
# Copyright 2018 Chad Hanna
#
import sys
@@ -6,14 +6,15 @@ import os
import subprocess
def process_source(prog, outfile):
    # old version: file object left for the garbage collector
    for line in open(prog):
        if not line.startswith("###"):
            continue
        outfile.write(line.replace("### ", "").replace("###",""))
    # new version: context manager closes the file deterministically
    with open(prog, 'r') as fid:
        for line in fid.readlines():
            if not line.startswith("###"):
                continue
            outfile.write(line.replace("### ", "").replace("###",""))
if len(sys.argv) == 1:
    print "USAGE: sphinx-bindoc <output directory> <input directory> [patterns to exclude]"
    print("USAGE: sphinx-bindoc <output directory> <input directory> [patterns to exclude]")
    sys.exit()

assert(len(sys.argv) >= 3)
@@ -23,7 +24,7 @@ outdir = sys.argv[1]
tocf = open(os.path.join(outdir, "bin.rst"), "w")
tocf.write("""bin
===
=====================
.. toctree::
   :maxdepth: 1
@@ -45,25 +46,27 @@ for prog in sorted(os.listdir(indir)):
tocf.write("\n %s" % os.path.split(fname)[-1].replace(".rst",""))
if os.path.exists(fname):
print >> sys.stderr, "File %s already exists, skipping." % fname
print("File %s already exists, skipping." % fname)
continue
else:
print >> sys.stderr, "Creating file ", fname
f = open(fname, "w", 0)
# parse the bin program itself for additional documentation
f.write("%s\n%s\n\n" % (prog, "".join(["="] * len(prog))))
process_source(path_to_prog, f)
print("Creating file ", fname)
# write the output of --help
f.write("%s\n%s\n\n" % ("Command line options", "".join(["-"] * len("Command line options"))))
f.write("\n\n.. code-block:: none\n\n")
proc = subprocess.Popen([path_to_prog, "--help"], stdout = subprocess.PIPE)
helpmessage = proc.communicate()[0]
helpmessage = "\n".join([" %s" % l for l in helpmessage.split("\n")])
f.write(helpmessage)
with open(fname, "w") as f:
# parse the bin program itself for additional documentation
f.write("%s\n%s\n\n" % (prog, "".join(["="] * len(prog))))
process_source(path_to_prog, f)
# close the file
f.close()
# write the output of --help
f.write("%s\n%s\n\n" % ("Command line options", "".join(["-"] * len("Command line options"))))
f.write("\n\n.. code-block:: none\n\n")
try:
proc = subprocess.Popen([path_to_prog, "--help"], stdout = subprocess.PIPE)
helpmessage = proc.stdout.read()
if isinstance(helpmessage, bytes):
helpmessage = helpmessage.decode('utf-8')
helpmessage = "\n".join([" %s" % l for l in helpmessage.split('\n')])
f.write(helpmessage)
except OSError:
pass
tocf.close()
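For reference, the script is invoked with an output directory and the
directory of programs to document, per the USAGE string above; a sketch with
illustrative paths::

   sphinx-bindoc doc/source/bin gstlal-inspiral/bin
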
gstlal-burst.spec
lib/gstlal-burst.pc
lib/gstlal-burst/gstlal-burst.pc
ACLOCAL_AMFLAGS = -I gnuscripts
EXTRA_DIST = gstlal-burst.spec
SUBDIRS = debian lib python bin gst
SUBDIRS = debian lib gst python bin share
# check that the most recent changelog entry's version matches the package
# version
......
dist_bin_SCRIPTS = \
gstlal_excesspower \
gstlal_excesspower_trigvis \
gstlal_feature_extractor \
gstlal_feature_extractor_pipe \
gstlal_feature_extractor_pipe_online \
gstlal_feature_extractor_whitener_check \
gstlal_feature_extractor_template_overlap \
gstlal_feature_hdf5_sink \
gstlal_feature_synchronizer
gstlal_cherenkov_burst \
gstlal_cherenkov_calc_likelihood \
gstlal_cherenkov_calc_rank_pdfs \
gstlal_cherenkov_inj \
gstlal_cherenkov_plot_rankingstat \
gstlal_cherenkov_plot_summary \
gstlal_cherenkov_zl_rank_pdfs \
gstlal_cs_triggergen \
gstlal_impulse_inj
#!/usr/bin/env python3
#
# Copyright (C) 2010--2015 Kipp Cannon, Chad Hanna
# Copyright (C) 2021 Soichiro Kuwawhara
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
### A program to compute the likelihood ratios of Cherenkov burst candidates
#
# =============================================================================
#
# Preamble
#
# =============================================================================
#
from optparse import OptionParser
import sys
from tqdm import tqdm
from ligo.lw import ligolw
from ligo.lw import lsctables
from ligo.lw import utils as ligolw_utils
from ligo.lw.utils import process as ligolw_process
from lal.utils import CacheEntry
from gstlal import cherenkov
from gstlal.cherenkov import rankingstat as cherenkov_rankingstat
__author__ = "Soichiro Kuwara <soichiro.kuwahara@ligo.org>"
#
# =============================================================================
#
# Command Line
#
# =============================================================================
#
def parse_command_line():
    parser = OptionParser(
        description = "Rankingstat calculation program for Cherenkov burst search.",
        usage = "%prog [options] [candidatesxml ...]"
    )
    parser.add_option("--candidates-cache", metavar = "filename", help = "Also load the candidates from files listed in this LAL cache. See lalapps_path2cache for information on how to produce a LAL cache file.")
    parser.add_option("--ranking-stat-cache", metavar = "filename", help = "Also load the ranking statistic likelihood ratio data files listed in this LAL cache. See lalapps_path2cache for information on how to produce a LAL cache file.")
    parser.add_option("--ranking-stat", metavar = "filename", action = "append", help = "Load ranking statistic data from this file. Can be given multiple times.")
    parser.add_option("-v", "--verbose", action = "store_true", help = "Be verbose.")
    options, urls = parser.parse_args()

    paramdict = options.__dict__.copy()

    if options.candidates_cache is not None:
        urls += [CacheEntry(line).url for line in open(options.candidates_cache)]
    if not urls:
        raise ValueError("must provide at least one candidate file")

    if options.ranking_stat is None:
        options.ranking_stat = []
    if options.ranking_stat_cache is not None:
        options.ranking_stat += [CacheEntry(line).url for line in open(options.ranking_stat_cache)]
    if not options.ranking_stat:
        raise ValueError("must provide at least one ranking statistic file")

    return options, urls, paramdict
#
# =============================================================================
#
# Main
#
# =============================================================================
#
#
# command line
#
options, urls, paramdict = parse_command_line()
#
# load parameter distribution data
#
rankingstat = cherenkov_rankingstat.marginalize_rankingstat_urls(options.ranking_stat, verbose = options.verbose)
#
# invoke .finish() to apply density estimation kernels and correct the
# normalization.
#
rankingstat.finish()
#
# load zero-lag candidates and histogram their ranking statistics
#
for n, url in enumerate(urls, 1):
    if options.verbose:
        print("%d/%d: " % (n, len(urls)), end = "", file = sys.stderr)
    xmldoc = ligolw_utils.load_url(url, contenthandler = cherenkov_rankingstat.LIGOLWContentHandler, verbose = options.verbose)
    process = ligolw_process.register_to_xmldoc(xmldoc, "gstlal_cherenkov_calc_likelihood", paramdict = paramdict)
    time_slide_index = lsctables.TimeSlideTable.get_table(xmldoc).as_dict()
    sngl_burst_index = dict((row.event_id, row) for row in lsctables.SnglBurstTable.get_table(xmldoc))
    coinc_index = {}
    for row in lsctables.CoincMapTable.get_table(xmldoc):
        if row.table_name == "sngl_burst":
            if row.coinc_event_id not in coinc_index:
                coinc_index[row.coinc_event_id] = []
            coinc_index[row.coinc_event_id].append(sngl_burst_index[row.event_id])
    coinc_def_id = lsctables.CoincDefTable.get_table(xmldoc).get_coinc_def_id(search = cherenkov.CherenkovBBCoincDef.search, search_coinc_type = cherenkov.CherenkovBBCoincDef.search_coinc_type, create_new = False)
    for coinc in tqdm(lsctables.CoincTable.get_table(xmldoc), desc = "calculating LR", disable = not options.verbose):
        if coinc.coinc_def_id == coinc_def_id:
            coinc.likelihood = rankingstat.ln_lr_from_triggers(coinc_index[coinc.coinc_event_id], time_slide_index[coinc.time_slide_id])
    process.set_end_time_now()
    ligolw_utils.write_url(xmldoc, url, verbose = options.verbose)
    xmldoc.unlink()
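A typical invocation, built from the options defined above (the file names
are illustrative), would recompute the likelihood of each candidate in
place::

   gstlal_cherenkov_calc_likelihood --ranking-stat-cache rankingstat.cache --candidates-cache candidates.cache --verbose
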
# Copyright (C) 2015 Madeline Wade
#!/usr/bin/env python3
#
# Copyright (C) 2010--2015 Kipp Cannon, Chad Hanna
# Copyright (C) 2021 Soichiro Kuwawhara
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
@@ -14,6 +17,8 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
### A program to compute the noise probability distributions of likelihood ratios for Cherenkov burst triggers
#
# =============================================================================
#
@@ -22,130 +27,109 @@
# =============================================================================
#
"""
Arbitrary function generator based on sink pad timestamps and duration.
Accepts any Python expression. The "numpy" module is available as if you
typed "from numpy import *". The local variable "t" provides the stream
time in seconds.
"""
__author__ = "Madeline Wade <madeline.wade@ligo.org>"
import numpy
import gst
import sys
import gobject
from optparse import OptionParser
from ligo.lw import ligolw
from ligo.lw import utils as ligolw_utils
from ligo.lw.utils import process as ligolw_process
from lal.utils import CacheEntry
from gstlal.cherenkov import rankingstat as cherenkov_rankingstat
__author__ = "Soichiro Kuwara <soichiro.kuwahara@ligo.org>"
from gstlal import pipeio
from gstlal.pipeutil import *
#
# =============================================================================
#
# Functions
# Command Line
#
# =============================================================================
#
def create_expression(inbuf, outbuf, caps, expression):
    rate = caps[0]["rate"]
    dt = 1.0/float(rate)
    t_start = float(inbuf.timestamp) / float(gst.SECOND)
    dur = float(inbuf.duration) / float(gst.SECOND)
    t_end = t_start + dur
    t = numpy.arange(t_start, t_end, dt)
    y = eval(expression, numpy.__dict__, {'t': t})
    unitsize = pipeio.get_unit_size(caps)
    bufsize = unitsize * len(t)
    outbuf[0:bufsize] = y.flatten().astype(pipeio.numpy_dtype_from_caps(caps)).data
def parse_command_line():
    parser = OptionParser(
        description = "Rankingstat calculation program for Cherenkov burst search.",
        usage = "%prog [options] [rankingstatxml ...]"
    )
    parser.add_option("--output", metavar = "filename", help = "Write ranking statistic PDFs to this LIGO Light-Weight XML file.")
    parser.add_option("--ranking-stat-cache", metavar = "filename", help = "Also load the ranking statistic likelihood ratio data files listed in this LAL cache. See lalapps_path2cache for information on how to produce a LAL cache file.")
    parser.add_option("--ranking-stat-samples", metavar = "N", default = 2**24, type = "int", help = "Construct ranking statistic histograms by drawing this many samples from the ranking statistic generator (default = 2^24).")
    parser.add_option("-v", "--verbose", action = "store_true", help = "Be verbose.")
    options, urls = parser.parse_args()

    paramdict = options.__dict__.copy()

    if options.ranking_stat_cache is not None:
        urls += [CacheEntry(line).url for line in open(options.ranking_stat_cache)]
    if not urls:
        raise ValueError("must provide some ranking statistic files")
    if options.output is None:
        raise ValueError("must set --output")

    return options, urls, paramdict
#
# =============================================================================
#
# Element
# Main
#
# =============================================================================
#
class lal_numpy_fx_transform(gst.BaseTransform):
    __gstdetails__ = (
        "Arbitrary function generator from sink timestamps and duration",
        "Filter/Audio",
        __doc__,
        __author__
    )
    __gproperties__ = {
        'expression': (
            gobject.TYPE_STRING,
            'Expression',
            'any Python expression, to be evaluated under "from numpy import *"',
            '0 * t',
            gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT
        )
    }
    __gsttemplates__ = (
        gst.PadTemplate("sink",
            gst.PAD_SINK,
            gst.PAD_ALWAYS,
            gst.caps_from_string(
                "audio/x-raw-float, " +
                "rate = (int) [1, MAX], " +
                "channels = (int) 1, " +
                "endianness = (int) BYTE_ORDER, " +
                "width = (int) 64"
            )
        ),
        gst.PadTemplate("src",
            gst.PAD_SRC,
            gst.PAD_ALWAYS,
            gst.caps_from_string(
                "audio/x-raw-float, " +
                "rate = (int) [1, MAX], " +
                "channels = (int) 1, " +
                "endianness = (int) BYTE_ORDER, " +
                "width = (int) 64"
            )
        )
    )
#
# command line
#
options, urls, paramdict = parse_command_line()
#
# start the output document and record our start time
#
xmldoc = ligolw.Document()
xmldoc.appendChild(ligolw.LIGO_LW())
#
# load parameter distribution data
#
rankingstat = cherenkov_rankingstat.marginalize_rankingstat_urls(urls, verbose = options.verbose)
process = ligolw_process.register_to_xmldoc(xmldoc, "gstlal_cherenkov_calc_rank_pdfs", paramdict = paramdict, instruments = rankingstat.instruments)
#
# invoke .finish() to apply density estimation kernels and correct the
# normalization.
#
rankingstat.finish()
#
# generate likelihood ratio histograms
#
rankingstatpdf = cherenkov_rankingstat.RankingStatPDF(rankingstat, nsamples = options.ranking_stat_samples, verbose = options.verbose)
#
# Write the ranking statistic distribution data to a file
#
    def __init__(self):
        super(lal_numpy_fx_transform, self).__init__()
        self.set_gap_aware(True)

    def do_set_property(self, prop, val):
        if prop.name == "expression":
            self.__compiled_expression = compile(val, "<compiled Python expression>", "eval")
            self.__expression = val

    def do_get_property(self, prop):
        if prop.name == "expression":
            return self.__expression

    def do_transform(self, inbuf, outbuf):
        pad = self.src_pads().next()
        caps = pad.get_caps()
        # FIXME: I'm not sure this is the right fix for heartbeat buffers, so I need to check this!
        if len(inbuf) == 0:
            gst.Buffer.flag_set(inbuf, gst.BUFFER_FLAG_GAP)
        # Process buffer
        if not gst.Buffer.flag_is_set(inbuf, gst.BUFFER_FLAG_GAP):
            # Input is not 0s
            create_expression(inbuf, outbuf, caps, self.__expression)
        else:
            # Input is 0s
            gst.Buffer.flag_set(outbuf, gst.BUFFER_FLAG_GAP)
        return gst.FLOW_OK
gobject.type_register(lal_numpy_fx_transform)

__gstelementfactory__ = (
    lal_numpy_fx_transform.__name__,
    gst.RANK_NONE,
    lal_numpy_fx_transform
)
xmldoc.childNodes[-1].appendChild(rankingstatpdf.to_xml("gstlal_cherenkov_rankingstat_pdf"))
process.set_end_time_now()
ligolw_utils.write_filename(xmldoc, options.output, verbose = options.verbose)
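A typical invocation, built from the options defined above (file names are
illustrative)::

   gstlal_cherenkov_calc_rank_pdfs --ranking-stat-cache rankingstat.cache --output rankingstat_pdf.xml.gz --verbose
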
#!/usr/bin/env python3
#
# Copyright (C) 2010--2015 Kipp Cannon, Chad Hanna
# Copyright (C) 2021 Soichiro Kuwawhara
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
### A program to compute the zero-lag distribution of likelihood ratios for Cherenkov burst candidates
#
# =============================================================================
#
# Preamble
#
# =============================================================================
#
from optparse import OptionParser
import sys
from ligo.lw import ligolw
from ligo.lw import lsctables
from ligo.lw import utils as ligolw_utils
from ligo.lw.utils import process as ligolw_process
from lal.utils import CacheEntry
from gstlal import cherenkov
from gstlal.cherenkov import rankingstat as cherenkov_rankingstat
__author__ = "Soichiro Kuwara <soichiro.kuwahara@ligo.org>"
#
# =============================================================================
#
# Command Line
#
# =============================================================================
#
def parse_command_line():
    parser = OptionParser(
        description = "Rankingstat calculation program for Cherenkov burst search.",
        usage = "%prog [options] [candidatesxml ...]"
    )
    parser.add_option("--candidates-cache", metavar = "filename", help = "Also load the candidates from files listed in this LAL cache. See lalapps_path2cache for information on how to produce a LAL cache file.")
    parser.add_option("-v", "--verbose", action = "store_true", help = "Be verbose.")
    parser.add_option("--ranking-stat-pdf", metavar = "filename", help = "Write zero-lag ranking statistic PDF to this file. Must contain exactly one Cherenkov burst ranking statistic PDF object.")
    parser.add_option("--is-timeshifted", action = "store_true", help = "Analyze the closed (time-shifted) box rather than the open (zero-lag) box.")
    options, urls = parser.parse_args()

    paramdict = options.__dict__.copy()

    if options.candidates_cache is not None:
        urls += [CacheEntry(line).url for line in open(options.candidates_cache)]
    if not urls:
        raise ValueError("must provide some candidate files")
    if options.ranking_stat_pdf is None:
        raise ValueError("must set --ranking-stat-pdf")

    return options, urls, paramdict
#
# =============================================================================
#
# Main
#
# =============================================================================
#
#
# command line
#
options, urls, paramdict = parse_command_line()
#
# load ranking statistic PDF
#
# FIXME: it would be better to preserve all the contents of the original file instead of just the PDF object
rankingstatpdf = cherenkov_rankingstat.RankingStatPDF.from_xml(ligolw_utils.load_filename(options.ranking_stat_pdf, contenthandler = cherenkov_rankingstat.LIGOLWContentHandler, verbose = options.verbose), "gstlal_cherenkov_rankingstat_pdf")
#
# zero the zero-lag ranking statistic PDF
#
rankingstatpdf.zl_lr_lnpdf.array[:] = 0.
#
# load zero-lag candidates and histogram their ranking statistics
#
for n, url in enumerate(urls, 1):
    if options.verbose:
        print("%d/%d: " % (n, len(urls)), end = "", file = sys.stderr)
    xmldoc = ligolw_utils.load_url(url, contenthandler = cherenkov_rankingstat.LIGOLWContentHandler, verbose = options.verbose)
    coinc_def_id = lsctables.CoincDefTable.get_table(xmldoc).get_coinc_def_id(search = cherenkov.CherenkovBBCoincDef.search, search_coinc_type = cherenkov.CherenkovBBCoincDef.search_coinc_type, create_new = False)
    if options.is_timeshifted:
        zl_time_slide_ids = frozenset(time_slide_id for time_slide_id, offsetvector in lsctables.TimeSlideTable.get_table(xmldoc).as_dict().items() if any(offsetvector.values()))
    else:
        zl_time_slide_ids = frozenset(time_slide_id for time_slide_id, offsetvector in lsctables.TimeSlideTable.get_table(xmldoc).as_dict().items() if not any(offsetvector.values()))
    for coinc in lsctables.CoincTable.get_table(xmldoc):
        if coinc.coinc_def_id == coinc_def_id and coinc.time_slide_id in zl_time_slide_ids:
            rankingstatpdf.zl_lr_lnpdf.count[coinc.likelihood,] += 1
#
# apply density estimation kernel to zero-lag counts
#
rankingstatpdf.density_estimate(rankingstatpdf.zl_lr_lnpdf, "zero-lag")
rankingstatpdf.zl_lr_lnpdf.normalize()
#
# Write the parameter and ranking statistic distribution data to a file
#
xmldoc = ligolw.Document()
xmldoc.appendChild(ligolw.LIGO_LW())
process = ligolw_process.register_to_xmldoc(xmldoc, "gstlal_cherenkov_calc_rank_pdfs", paramdict = paramdict)
xmldoc.childNodes[-1].appendChild(rankingstatpdf.to_xml("gstlal_cherenkov_rankingstat_pdf"))
process.set_end_time_now()
ligolw_utils.write_filename(xmldoc, options.ranking_stat_pdf, verbose = options.verbose)
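A typical invocation, built from the options defined above (file names are
illustrative); note that the program rewrites the --ranking-stat-pdf file in
place::

   gstlal_cherenkov_zl_rank_pdfs --candidates-cache candidates.cache --ranking-stat-pdf rankingstat_pdf.xml.gz --verbose
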
@@ -65,8 +65,8 @@ from gstlal.excesspower import parts
from gstlal.excesspower.scan import EPScan
from glue import gpstime
from glue.lal import LIGOTimeGPS
from glue.segments import segment
from lal import LIGOTimeGPS
from ligo.segments import segment
__author__ = "Chris Pankow <chris.pankow@ligo.org>"
__version__ = "Defiant" # until we get proper versioning tags
@@ -84,7 +84,7 @@ ep.append_options(parser)
datasource.append_options(parser)
(options, args) = parser.parse_args()
gw_data_source_opts = datasource.GWDataSourceInfo(options)
gw_data_source_opts = datasource.DataSourceInfo.from_optparse(options)
# Verbosity and diagnostics
verbose = options.verbose
@@ -119,7 +119,7 @@ if verbose:
    print "Assembling pipeline... this is gstlal_excesspower, version code name %s\n" % __version__,
# Construct the data acquisition and conditioning part of the pipeline
head, _, _ = datasource.mkbasicsrc(pipeline, gw_data_source_opts, handler.inst, verbose)
head, _, _, _ = datasource.mkbasicsrc(pipeline, gw_data_source_opts, handler.inst, verbose)
# If we're running online, we need to set up a few things
if gw_data_source_opts.data_source in ("lvshm", "framexmit"):
......