Commit 77a7c709 authored by Jameson Graef Rollins: update README
# locklost: aLIGO IFO lock loss tracking and analysis tool
This package provides a set of tools for analyzing LIGO detector "lock
losses". It consists of four main components:
* `search` for detector lock losses in past data based on guardian
state transitions.
* `analyze` individual lock losses to generate plots and look for
identifying features.
* `online` search to monitor for lock losses and automatically run
follow-up analyses.
* `web` interface to view lock loss event pages.
## deploying new versions
When a new version is ready for release, create an annotated tag for
the release and push it to the repo:
```shell
$ git tag -m "release" 0.16
$ git push --tags
```
In the lockloss account, pull the new tag and run
the test/deploy script:
```shell
$ ssh lockloss@detchar.ligo-la.caltech.edu
$ cd ~/src/locklost
$ git pull
$ locklost-deploy
```
If there are changes to the online search, restart the online condor
process (analysis-only changes are otherwise picked up automatically
by new condor executions):
```shell
$ locklost online restart
```
To launch a condor job to search for lock losses within some time
window:
```shell
$ locklost search --condor GPS_START GPS_END
```
This will find lock losses within the specified time range, but will
not run the follow-up analyses. This is only needed to cover times
when the online analysis was not running.
To run a full analysis on a specific lock loss time found from the
search above:
```shell
$ locklost analyze GPS_TIME
```
To launch a condor job to analyze all un-analyzed events within a time range:
```shell
$ locklost analyze --condor GPS_START GPS_END
```
# Analysis plugins
Lock loss event analysis consists of a set of follow-up "plugin"
analyses, located in the `locklost.plugins` sub-package:
* [`locklost/plugins/`](/locklost/plugins/)
Each follow-up module is registered in
[`locklost/plugins/__init__.py`](/locklost/plugins/__init__.py). Some
of the currently enabled follow-up plugins are:
* `discover.discover_data` wait for data to be available
* `refine.refine_event` refine event time
* `saturations.find_saturations` find saturating channels before event
* `lpy.find_lpy` find length/pitch/yaw oscillations in suspensions
* `glitch.analyze_glitches` look for glitches around event
* `overflows.find_overflows` look for ADC overflows
* `state_start.find_lock_start` find the start of the lock leading to
  the current lock loss
Each plugin does its own analysis, although some depend on the output
of other analyses. The output from any analysis (e.g. plots or data)
should be written into the event directory.
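The registration scheme can be pictured as an ordered list of
follow-up callables run in sequence for each event. The sketch below
is purely illustrative (the names `PLUGINS`, `register`, and
`run_plugins` are assumptions, not the actual locklost API):

```python
# Hypothetical sketch of a follow-up plugin registry; names are
# illustrative, not the actual locklost API.

PLUGINS = []

def register(func):
    """Add a follow-up analysis to the ordered plugin list."""
    PLUGINS.append(func)
    return func

@register
def discover_data(event):
    # wait for data covering the event to be available
    return f"data ready for {event}"

@register
def refine_event(event):
    # refine the lock loss event time
    return f"refined {event}"

def run_plugins(event):
    """Run every registered follow-up analysis on an event, in order."""
    return [plugin(event) for plugin in PLUGINS]
```

Registration order matters in a scheme like this, since later plugins
(e.g. time refinement) may rely on earlier ones having completed.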
See [CONTRIBUTING](CONTRIBUTING.md) for instructions on how to
contribute to `locklost`.
# Re-analysis
Occasionally the online search will fail or go offline, or the
analysis will fail or not launch. This is usually due to cluster
problems, e.g. Tuesday maintenance, but can also be caused by code
bugs. It is therefore necessary to occasionally run an offline
search/analysis to find missing events or redo their analyses.
Condor sometimes kills analysis jobs without giving them a chance to
clean up their run locks. Before re-running analyses, remove any old,
stale analysis locks:
```shell
$ find-locks -r
```
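The effect of clearing stale locks can be pictured roughly as
follows. This is a hypothetical sketch only: the lock file name
(`lock`) and the event directory layout are assumptions, not the real
locklost conventions:

```python
import time
from pathlib import Path

def remove_stale_locks(events_dir, max_age_days=7):
    """Remove analysis lock files older than max_age_days.

    Hypothetical sketch: the lock file name and event directory
    layout are assumptions, not the real locklost layout.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for lock in Path(events_dir).rglob("lock"):
        if lock.is_file() and lock.stat().st_mtime < cutoff:
            lock.unlink()
            removed.append(str(lock))
    return removed
```

The age cutoff is what distinguishes a stale lock from one held by a
still-running analysis job.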
Launch an offline search for any missed lock losses:
```shell
$ locklost search --condor `gpstime -g 1 month ago` `gpstime -g`
```
This will find lock losses within the specified time range, but will
not run the follow-up analyses. This is only needed to cover times
when the online analysis was not running. The start and stop time
arguments here use the gpstime utility as a convenience to convert
human-readable times into GPS times (FIXME: this should be folded
into the locklost command line parser directly).
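For reference, the conversion gpstime performs counts seconds from
the GPS epoch (1980-01-06 00:00:00 UTC); unlike UTC, GPS time does
not insert leap seconds. A minimal sketch of the conversion, assuming
the current 18 s GPS-UTC offset (which must be bumped if further leap
seconds are ever added):

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_OFFSET = 18  # leap seconds accumulated since the GPS epoch, as of 2017

def utc_to_gps(dt):
    """Convert an aware UTC datetime to an integer GPS time (seconds)."""
    return int((dt - GPS_EPOCH).total_seconds()) + GPS_UTC_OFFSET

# e.g. utc_to_gps(datetime(2023, 1, 1, tzinfo=timezone.utc)) -> 1356566418
```

In practice the gpstime utility also handles time zones and
human-readable phrases like "1 month ago"; this sketch only shows the
epoch arithmetic underneath.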
To run a full analysis on a specific lock loss time found from the
search above, just call `locklost analyze` with the time:
```shell
$ locklost analyze GPS_TIME
```
To launch a condor job to analyze all un-analyzed events within the
same specified time range:
```shell
$ locklost analyze --condor `gpstime -g 1 month ago` `gpstime -g`
```