# locklost: aLIGO IFO lock loss tracking and analysis
This package provides a set of tools for analyzing LIGO detector "lock losses". It consists of four main components:

- **search** for detector lock losses in past data based on guardian state transitions.
- **analyze** individual lock losses to generate plots and look for identifying features.
- **online** search to monitor for lock losses and automatically run follow-up analyses.
- **web** interface to view lock loss event pages.

A command-line interface is provided to launch the online analysis and condor jobs to search for and analyze events.
## Usage
To start/stop/restart the online analysis, use the `online` command:

```shell
$ locklost online start
```

This launches a condor job that runs the online analysis.
To launch a condor job to search for lock losses within some time window:

```shell
$ locklost search --condor START END
```

This will find lock losses within the specified time range, but will not run the follow-up analyses. This is primarily needed to backfill times when the online analysis was not running (see below).
Any time argument (`START`, `END`, `TIME`, etc.) can be either a GPS time or a full (even relative) date/time string, e.g. "1 week ago".
To run a full analysis on a specific lock loss time found from the search above:

```shell
$ locklost analyze TIME
```
To launch a condor job to analyze all un-analyzed events within a time range:

```shell
$ locklost analyze --condor START END
```
To re-analyze events, add the `--rerun` flag, e.g.:

```shell
$ locklost analyze TIME --rerun
```

or

```shell
$ locklost analyze --condor START END --rerun
```
Analysis jobs are occasionally killed improperly by condor, without being given a chance to clean up their run locks. The site locklost deployments include a command-line utility to find and remove any old, stale analysis locks:

```shell
$ find-locks -r
```
## Analysis plugins
Lock loss event analysis consists of a set of follow-up "plugin" analyses, located in the `locklost.plugins` sub-package. Each follow-up module is registered in `locklost/plugins/__init__.py`. Some of the currently enabled follow-ups are:
- `discover.discover_data`: wait for data to be available
- `refine.refine_event`: refine the event time
- `saturations.find_saturations`: find saturating channels before the event
- `lpy.find_lpy`: find length/pitch/yaw oscillations in suspensions
- `glitch.analyze_glitches`: look for glitches around the event
- `overflows.find_overflows`: look for ADC overflows
- `state_start.find_lock_start`: find the start of the lock leading to the current lock loss
Each plugin does its own analysis, although some depend on the output of other analyses. The output from any analysis (e.g. plots or data) should be written into the event directory.
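As a rough illustration of this pattern, here is a minimal sketch of what a follow-up plugin might look like. The function name `find_example_feature`, the `PLUGINS` registry, and the dict-based event object are all hypothetical, invented for this example; the real registration mechanism lives in `locklost/plugins/__init__.py` and the real event interface may differ.

```python
import os
import tempfile

def find_example_feature(event):
    """Toy follow-up analysis (hypothetical): inspect the event and
    write its output into the event directory, as real plugins do
    with plots and data."""
    out_path = os.path.join(event['path'], 'example_feature.txt')
    with open(out_path, 'w') as f:
        f.write("analysis for event at GPS {}\n".format(event['gps']))
    return out_path

# Hypothetical registry mapping plugin names to callables, mirroring
# the idea of follow-up modules being registered in the sub-package.
PLUGINS = {
    'example_feature': find_example_feature,
}

if __name__ == '__main__':
    # Run all registered plugins against a fake event in a temp directory.
    with tempfile.TemporaryDirectory() as event_dir:
        event = {'gps': 1186741861, 'path': event_dir}
        for name, func in PLUGINS.items():
            print(name, '->', func(event))
```

A registry like this lets the analysis driver iterate over the enabled plugins in order, while each plugin stays self-contained and writes only into the per-event directory.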
## Site deployments
Each site (LHO and LLO) has a dedicated "lockloss" account on its local LDAS cluster where `locklost` is running. These accounts have deployments of the latest `locklost` package release, run the production online and analysis condor jobs, and host the web pages.
### Deploying new versions
When a new version is ready for release, create an annotated tag for the release and push it to the main repo (https://git.ligo.org/jameson.rollins/locklost):

```shell
$ git tag -m release 0.16
$ git push --tags
```
In the LDAS lockloss account, pull the new tag and run the test/deploy script:

```shell
$ ssh lockloss@detchar.ligo-la.caltech.edu
$ cd ~/src/locklost
$ git pull
$ locklost-deploy
```
If there are changes to the online search, restart the online condor process; otherwise, analysis changes should get picked up automatically in new condor executions:

```shell
$ locklost online restart
```
## Developing and contributing
See the CONTRIBUTING file for instructions on how to contribute to `locklost`.