Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Commits on Source (202)
Showing with 1055 additions and 380 deletions
*.pyc
*~
locklost/version.py
# https://computing.docs.ligo.org/gitlab-ci-templates
include:
  - project: computing/gitlab-ci-templates
    file:
      - conda.yml
      - python.yml

stages:
  - lint

# https://computing.docs.ligo.org/gitlab-ci-templates/python/#.python:flake8
lint:
  extends:
    - .python:flake8
  stage: lint
  needs: []
@@ -14,8 +14,8 @@ cluster.
make sure all code is python3 compatible.
The `locklost` project uses [git](https://git-scm.com/) and is hosted
at [git.ligo.org](https://git.ligo.org/jameson.rollins/locklost).
Issues are tracked via the [GitLab issue
tracker](https://git.ligo.org/jameson.rollins/locklost/issues).
......
@@ -12,68 +12,28 @@ losses".  It consists of four main components:
  follow-up analyses.
* `web` interface to view lock loss event pages.
The `locklost` command line interface provides access to all of the
above functionality.

The LIGO sites have `locklost` deployments that automatically find and
analyze lock loss events.  The site lock loss web pages are available
at the following URLs:

* [H1: https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/](https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/)
* [L1: https://ldas-jobs.ligo-la.caltech.edu/~lockloss/](https://ldas-jobs.ligo-la.caltech.edu/~lockloss/)

If you notice any issues please [file a bug
report](https://git.ligo.org/jameson.rollins/locklost/-/issues), or if
you have any questions please contact the ["lock-loss" group on
chat.ligo.org](https://chat.ligo.org/ligo/channels/lock-loss).
# Analysis plugins
Lock loss event analysis is handled by a set of [analysis
"plugins"](/locklost/plugins/).  Each plugin is registered in
[`locklost/plugins/__init__.py`](/locklost/plugins/__init__.py).  Some
of the currently enabled plugins are:
* `discover.discover_data` wait for data to be available
* `refine.refine_event` refine event time
@@ -85,43 +45,214 @@ of the currently enabled follow-up are:
  current lock loss
Each plugin does its own analysis, although some depend on the output
of other plugins.  The output from any plugin (e.g. plots or data)
should be written into the event directory.
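For orientation, a minimal plugin might look something like the sketch
below.  This is only an illustration, not code from the repository: the
plugin name, the `example.txt` output file, and the import of `logger`
from the package root are assumptions; the real plugins in
[`locklost/plugins/`](/locklost/plugins/) are authoritative.

```python
# hypothetical example plugin -- illustration only, not part of the repo
from .. import logger


def example_check(event):
    """Sketch of a follow-up analysis for a lock loss event."""
    logger.info("running example plugin for event {}".format(event.id))
    # any products (plots, data files, etc.) go into the event directory
    with open(event.path('example.txt'), 'w') as f:
        f.write("example analysis output\n")
```

A plugin along these lines would then be imported and added to the
plugin registry in
[`locklost/plugins/__init__.py`](/locklost/plugins/__init__.py) so that
the `analyze` step picks it up.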
# Site deployments
Each site (LHO and LLO) has a dedicated "lockloss" account on their
local LDAS cluster where `locklost` is deployed.  The `locklost`
command line interface is available when logging into these accounts.
NOTE: these accounts are for the managed `locklost` deployments
**ONLY**. These are specially configured. Please don't change the
configuration or run other jobs in these accounts unless you know what
you're doing.
The online searches and other periodic maintenance tasks are handled
via `systemd --user` on the dedicated "detchar.ligo-?a.caltech.edu"
nodes. `systemd --user` is a user-specific instance of the
[systemd](https://systemd.io/) process supervision daemon. The [arch
linux wiki](https://wiki.archlinux.org/title/Systemd) has a nice intro
to systemd.
## online systemd service
The online search service is called `locklost-online.service`.  To
control and view the status of the process use the `systemctl`
command, e.g.:
```shell
$ systemctl --user status locklost-online.service
$ systemctl --user start locklost-online.service
$ systemctl --user restart locklost-online.service
$ systemctl --user stop locklost-online.service
```
To view the logs from the online search use `journalctl`:
```shell
$ journalctl --user-unit locklost-online.service -f
```
The `-f` option tells journalctl to "follow" the logs and print new
log lines as they come in.
The "service unit" files for the online search service (and the timer
services described below) that describe how the units should be
managed by systemd are maintained in the [support](/support/systemd)
directory in the git source.
## systemd timers
Instead of cron, the site deployments use `systemd timers` to handle
periodic background jobs. The enabled timers and their schedules can
be viewed with:
```shell
$ systemctl --user list-timers
```
Timers behave much like other services: they can be started/stopped,
enabled/disabled, etc.  Unless there is some issue they should usually
just be left alone.
## deploying new versions
When a new version is ready for release, create an annotated tag for
the release and push it to the [main
repo](https://git.ligo.org/jameson.rollins/locklost):
```shell
$ git tag -m release 0.22.1
$ git push --tags
```
In the "lockloss" account on the LDAS "detchar" machines, pull the new
release and run the `deploy` script, which should automatically run the
tests before installing the new version:
```shell
$ ssh lockloss@detchar.ligo-la.caltech.edu
$ cd src/locklost
$ git pull
$ ./deploy
```
The deployment should automatically install any updates to the online
search, and restart the search service.
## common issues
There are a couple of issues that crop up periodically, usually due to
problems with the site LDAS clusters where the jobs run.
### online analysis restarting due to NDS problems
One of the most common problems is that the online analysis is falling
over because it can't connect to the site NDS server, for example:
```
2021-05-12_17:18:31 2021-05-12 17:18:31,317 NDS connect: 10.21.2.4:31200
2021-05-12_17:18:31 Error in write(): Connection refused
2021-05-12_17:18:31 Error in write(): Connection refused
2021-05-12_17:18:31 Traceback (most recent call last):
2021-05-12_17:18:31 File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
2021-05-12_17:18:31 "__main__", mod_spec)
2021-05-12_17:18:31 File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
2021-05-12_17:18:31 exec(code, run_globals)
2021-05-12_17:18:31 File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.3-py3.6.egg/locklost/online.py", line 109, in <module>
2021-05-12_17:18:31 stat_file=stat_file,
2021-05-12_17:18:31 File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.3-py3.6.egg/locklost/search.py", line 72, in search_iterate
2021-05-12_17:18:31 for bufs in data.nds_iterate([channel], start_end=segment):
2021-05-12_17:18:31 File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.3-py3.6.egg/locklost/data.py", line 48, in nds_iterate
2021-05-12_17:18:31 with closing(nds_connection()) as conn:
2021-05-12_17:18:31 File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.3-py3.6.egg/locklost/data.py", line 28, in nds_connection
2021-05-12_17:18:31 conn = nds2.connection(HOST, PORT)
2021-05-12_17:18:31 File "/usr/lib64/python3.6/site-packages/nds2.py", line 3172, in __init__
2021-05-12_17:18:31 _nds2.connection_swiginit(self, _nds2.new_connection(*args))
2021-05-12_17:18:31 RuntimeError: Failed to establish a connection[INFO: Error occurred trying to write to socket]
```
This is usually because the NDS server itself has died and needs to be
restarted/reset (frequently due to Tuesday maintenance).
Unfortunately the site admins aren't necessarily aware of this issue
and need to be poked about it.  Once the NDS server is back the job
should just pick up on its own.
### analyze jobs failing because of cluster data problems
Another common failure mode is follow-up analysis jobs failing due to
data access problems in the cluster.  These often occur during Tuesday
maintenance, but sometimes mysteriously at other times as well.  These
kinds of failures are indicated by the following exceptions in the
event analyze log:
```
2021-05-07 16:02:51,722 [analyze.analyze_event] exception in discover_data:
Traceback (most recent call last):
File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.0-py3.6.egg/locklost/analyze.py", line 56, in analyze_event
func(event)
File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.0-py3.6.egg/locklost/plugins/discover.py", line 67, in discover_data
raise RuntimeError("data discovery timeout reached, data not found")
RuntimeError: data discovery timeout reached, data not found
```
or:
```
2021-05-07 16:02:51,750 [analyze.analyze_event] exception in find_previous_state:
Traceback (most recent call last):
File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.0-py3.6.egg/locklost/analyze.py", line 56, in analyze_event
func(event)
File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.0-py3.6.egg/locklost/plugins/history.py", line 26, in find_previous_state
gbuf = data.fetch(channels, segment)[0]
File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.0-py3.6.egg/locklost/data.py", line 172, in fetch
bufs = func(channels, start, stop)
File "/home/lockloss/.local/lib/python3.6/site-packages/locklost-0.21.0-py3.6.egg/locklost/data.py", line 150, in frame_fetch_gwpy
data = gwpy.timeseries.TimeSeriesDict.find(channels, start, stop, frametype=config.IFO+'_R')
File "/usr/lib/python3.6/site-packages/gwpy/timeseries/core.py", line 1291, in find
on_gaps="error" if pad is None else "warn",
File "/usr/lib/python3.6/site-packages/gwpy/io/datafind.py", line 335, in wrapped
return func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/gwpy/io/datafind.py", line 642, in find_urls
on_gaps=on_gaps)
File "/usr/lib/python3.6/site-packages/gwdatafind/http.py", line 433, in find_urls
raise RuntimeError(msg)
RuntimeError: Missing segments:
[1304463181 ... 1304463182)
```
This problem sometimes just corrects itself, but often needs admin
poking as well.
## back-filling events
After things have recovered from any of the issues mentioned above,
you'll probably want to back-fill any missed events.  The best way to
do that is to run a condor `search` for missed events (e.g. from "4
weeks ago" until "now"):
```shell
$ locklost search --condor '4 weeks ago' now
```
The search will find lock loss events, but it will not run the
follow-up analyses. To run the full analysis on any un-analyzed
events, run the condor `analyze` command over the same time range,
e.g.:
```shell
$ locklost analyze --condor '4 weeks ago' now
```
The `analyze` command will analyze both missed/new events and
previously "failed" events.
You can also re-analyze a specific event with the `--rerun` flag:
```shell
$ locklost analyze TIME --rerun
```
or
```shell
$ locklost analyze --condor START END --rerun
```
Occasionally analysis jobs are improperly killed by condor, not
giving them a chance to clean up their run locks.  To find and remove
any old, stale analysis locks:
```shell
$ locklost find-locks -r
```
We also have timer services available to handle the backfilling after
Tuesday maintenance if need be:
```shell
$ systemctl --user enable --now locklost-search-backfill.timer
$ systemctl --user enable --now locklost-analyze-backfill.timer
```
......
#!/bin/bash -ex

if [[ $(hostname -f) =~ detchar.ligo-[lw]+a.caltech.edu ]] ; then
    hostname -f
else
    echo "This deploy script is intended to be run only on the detchar.ligo-{l,w}.caltech.edu hosts."
    exit 1
fi

CONDA_ROOT=/cvmfs/software.igwn.org/conda
CONDA_ENV=igwn-py39
VENV=~/opt/locklost-${CONDA_ENV}

if [[ "$1" == '--setup' ]] ; then
    shift
    source ${CONDA_ROOT}/etc/profile.d/conda.sh
    conda activate ${CONDA_ENV}
    python3 -m venv --system-site-packages ${VENV}
    ln -sfn ${VENV} ~/opt/locklost
fi

source ${VENV}/bin/activate

git pull --tags

if [[ "$1" == '--skip-test' ]] ; then
    echo "WARNING: SKIPPING TESTS!"
else
    ./test/run
fi

pip install .

# install -t ~/bin/ support/bin/*
install -m 644 -t ~/.config/systemd/user support/systemd/*

systemctl --user daemon-reload
systemctl --user restart locklost-online.service
systemctl --user status locklost-online.service
import os
import signal
import logging

import matplotlib
matplotlib.use('Agg')

@@ -10,7 +12,9 @@ try:
except ImportError:
    __version__ = '?.?.?'

from . import config

logger = logging.getLogger('LOCKLOST')


# signal handler kill/stop signals
def signal_handler(signum, frame):
@@ -18,10 +22,25 @@ def signal_handler(signum, frame):
        signame = signal.Signal(signum).name
    except AttributeError:
        signame = signum
    logger.error("Signal received: {} ({})".format(signame, signum))
    raise SystemExit(1)


def set_signal_handlers():
    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)
    signal.signal(eval('signal.'+config.CONDOR_KILL_SIGNAL), signal_handler)


def config_logger(level=None):
    if not level:
        level = os.getenv('LOG_LEVEL', 'INFO')
    handler = logging.StreamHandler()
    if os.getenv('LOG_FMT_NOTIME'):
        fmt = config.LOG_FMT_NOTIME
    else:
        fmt = config.LOG_FMT
    formatter = logging.Formatter(fmt)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(level)
import os
import subprocess
import argparse
import logging

from . import __version__
from . import set_signal_handlers
from . import config_logger
from . import config
from . import search
from . import analyze
@@ -15,6 +16,7 @@ from . import plots
##########

parser = argparse.ArgumentParser(
    prog='locklost',
)
@@ -22,10 +24,10 @@ subparser = parser.add_subparsers(
    title='Commands',
    metavar='<command>',
    dest='cmd',
)
subparser.required = True
def gen_subparser(cmd, func): def gen_subparser(cmd, func):
assert func.__doc__, "empty docstring: {}".format(func) assert func.__doc__, "empty docstring: {}".format(func)
help = func.__doc__.split('\n')[0].lower().strip('.') help = func.__doc__.split('\n')[0].lower().strip('.')
...@@ -41,6 +43,7 @@ def gen_subparser(cmd, func): ...@@ -41,6 +43,7 @@ def gen_subparser(cmd, func):
########## ##########
parser.add_argument( parser.add_argument(
'--version', action='version', version='%(prog)s {}'.format(__version__), '--version', action='version', version='%(prog)s {}'.format(__version__),
help="print version and exit") help="print version and exit")
...@@ -77,6 +80,7 @@ def list_events(args): ...@@ -77,6 +80,7 @@ def list_events(args):
e.path(), e.path(),
)) ))
p = gen_subparser('list', list_events) p = gen_subparser('list', list_events)
...@@ -99,6 +103,7 @@ def show_event(args): ...@@ -99,6 +103,7 @@ def show_event(args):
# print("files:") # print("files:")
# subprocess.call(['find', e.path()]) # subprocess.call(['find', e.path()])
p = gen_subparser('show', show_event) p = gen_subparser('show', show_event)
p.add_argument('--log', '-l', action='store_true', p.add_argument('--log', '-l', action='store_true',
help="show event log") help="show event log")
...@@ -126,6 +131,7 @@ def tag_events(args): ...@@ -126,6 +131,7 @@ def tag_events(args):
else: else:
e.add_tag(tag) e.add_tag(tag)
p = gen_subparser('tag', tag_events) p = gen_subparser('tag', tag_events)
p.add_argument('event', p.add_argument('event',
help="event ID") help="event ID")
...@@ -139,6 +145,7 @@ def compress_segments(args): ...@@ -139,6 +145,7 @@ def compress_segments(args):
""" """
segments.compress_segdir(config.SEG_DIR) segments.compress_segdir(config.SEG_DIR)
p = gen_subparser('compress', compress_segments) p = gen_subparser('compress', compress_segments)
...@@ -156,6 +163,7 @@ def mask_segment(args): ...@@ -156,6 +163,7 @@ def mask_segment(args):
[Segment(args.start, args.end)], [Segment(args.start, args.end)],
) )
p = gen_subparser('mask', mask_segment) p = gen_subparser('mask', mask_segment)
p.add_argument('start', type=int, p.add_argument('start', type=int,
help="segment start") help="segment start")
...@@ -169,20 +177,50 @@ def plot_history(args): ...@@ -169,20 +177,50 @@ def plot_history(args):
""" """
plots.plot_history(args.path, lookback=args.lookback) plots.plot_history(args.path, lookback=args.lookback)
p = gen_subparser('plot-history', plot_history) p = gen_subparser('plot-history', plot_history)
p.add_argument('path', p.add_argument('path',
help="output file") help="output file")
p.add_argument('-l', '--lookback', default='7 days ago', p.add_argument('-l', '--lookback', default='7 days ago',
help="history look back ['7 days ago']") help="history look back ['7 days ago']")
def find_locks(args):
"""Find/clear event locks
These are sometimes left by failed condor jobs. This should not
normally need to be run, and caution should be taken to not remove
active locks and events currently being analyzed.
"""
root = config.EVENT_ROOT
for d in os.listdir(root):
try:
int(d)
except ValueError:
continue
for dd in os.listdir(os.path.join(root, d)):
try:
int(d+dd)
except ValueError:
continue
lock_path = os.path.join(root, d, dd, 'lock')
if os.path.exists(lock_path):
print(lock_path)
if args.remove:
os.remove(lock_path)
p = gen_subparser('find-locks', find_locks)
p.add_argument('-r', '--remove', action='store_true',
help="remove any found event locks")
##########


def main():
    set_signal_handlers()
    config_logger()
    args = parser.parse_args()
    if not config.EVENT_ROOT:
        raise SystemExit("Must specify LOCKLOST_EVENT_ROOT env var.")
......
import os
import sys
import argparse
import logging

from matplotlib import pyplot as plt

from . import __version__
from . import set_signal_handlers
from . import config_logger
from . import logger
from . import config
from .plugins import PLUGINS
from .event import LocklossEvent, find_events
from . import condor
from . import search
################################################## ##################################################
def analyze_event(event, plugins=None): def analyze_event(event, plugins=None):
"""Execute all follow-up plugins for event """Execute all follow-up plugins for event
...@@ -21,19 +26,20 @@ def analyze_event(event, plugins=None): ...@@ -21,19 +26,20 @@ def analyze_event(event, plugins=None):
# log analysis to event log # log analysis to event log
# always append log # always append log
log_mode = 'a' log_mode = 'a'
logger = logging.getLogger()
handler = logging.FileHandler(event.path('log'), mode=log_mode) handler = logging.FileHandler(event.path('log'), mode=log_mode)
handler.setFormatter(logging.Formatter(config.LOG_FMT)) handler.setFormatter(logging.Formatter(config.LOG_FMT))
logger.addHandler(handler) logger.addHandler(handler)
# log exceptions # log exceptions
def exception_logger(exc_type, exc_value, traceback): def exception_logger(exc_type, exc_value, traceback):
logging.error("Uncaught exception:", exc_info=(exc_type, exc_value, traceback)) logger.error("Uncaught exception:", exc_info=(exc_type, exc_value, traceback))
sys.excepthook = exception_logger sys.excepthook = exception_logger
try: try:
event.lock() event.lock()
except OSError as e: except OSError as e:
logging.error(e) logger.error(e)
raise raise
# clean up previous run # clean up previous run
...@@ -41,37 +47,37 @@ def analyze_event(event, plugins=None): ...@@ -41,37 +47,37 @@ def analyze_event(event, plugins=None):
event._scrub(archive=False) event._scrub(archive=False)
event._set_version(__version__) event._set_version(__version__)
logging.info("analysis version: {}".format(event.analysis_version)) logger.info("analysis version: {}".format(event.analysis_version))
logging.info("event id: {}".format(event.id)) logger.info("event id: {}".format(event.id))
logging.info("event path: {}".format(event.path())) logger.info("event path: {}".format(event.path()))
logging.info("event url: {}".format(event.url())) logger.info("event url: {}".format(event.url()))
complete = True complete = True
for name, func in PLUGINS.items(): for name, func in PLUGINS.items():
if plugins and name not in plugins: if plugins and name not in plugins:
continue continue
logging.info("executing plugin: {}({})".format( logger.info("executing plugin: {}({})".format(
name, event.id)) name, event.id))
try: try:
func(event) func(event)
except SystemExit: except SystemExit:
complete = False complete = False
logging.error("EXIT signal, cleaning up...") logger.error("EXIT signal, cleaning up...")
break break
except AssertionError: except AssertionError:
complete = False complete = False
logging.exception("FATAL exception in {}:".format(name)) logger.exception("FATAL exception in {}:".format(name))
break break
except: except Exception:
complete = False complete = False
logging.exception("exception in {}:".format(name)) logger.exception("exception in {}:".format(name))
plt.close('all') plt.close('all')
if complete: if complete:
logging.info("analysis complete") logger.info("analysis complete")
event._set_status(0) event._set_status(0)
else: else:
logging.warning("INCOMPLETE ANALYSIS") logger.warning("INCOMPLETE ANALYSIS")
event._set_status(1) event._set_status(1)
logger.removeHandler(handler) logger.removeHandler(handler)
...@@ -82,38 +88,39 @@ def analyze_event(event, plugins=None): ...@@ -82,38 +88,39 @@ def analyze_event(event, plugins=None):
def analyze_condor(event): def analyze_condor(event):
if event.analyzing: if event.analyzing:
logging.warning("event analysis already in progress, aborting") logger.warning("event analysis already in progress, aborting")
return return
condor_dir = event.path('condor') condor_dir = event.path('condor')
try: try:
os.makedirs(condor_dir) os.makedirs(condor_dir)
except: except Exception:
pass pass
sub = condor.CondorSubmit( sub = condor.CondorSubmit(
condor_dir, condor_dir,
'analyze', 'analyze',
[str(event.id)], [str(event.id)],
local=False, local=False,
notify_user=os.getenv('CONDOR_NOTIFY_USER'),
) )
sub.write() sub.write()
sub.submit() sub.submit()
################################################## ##################################################
def _parser_add_arguments(parser): def _parser_add_arguments(parser):
from .util import GPSTimeParseAction from .util import GPSTimeParseAction
egroup = parser.add_mutually_exclusive_group(required=True) egroup = parser.add_mutually_exclusive_group(required=True)
egroup.add_argument('event', action=GPSTimeParseAction, nargs='?', type=int, egroup.add_argument('event', nargs='?', type=int,
help="event ID / GPS second") help="event ID / GPS second")
egroup.add_argument('--condor', action=GPSTimeParseAction, nargs=2, metavar='GPS', egroup.add_argument('--condor', action=GPSTimeParseAction, nargs=2, metavar='GPS',
help="condor analyze all events within GPS range") help="condor analyze all events within GPS range. If the second time is '0' (zero), the first time will be interpreted as an exact event time and just that specific event will be analyzed via condor submit.")
parser.add_argument('--rerun', action='store_true', parser.add_argument('--rerun', action='store_true',
help="condor re-analyze events") help="condor re-analyze events")
parser.add_argument('--plugin', '-p', action='append', parser.add_argument('--plugin', '-p', action='append',
help="execute only specified plugin (multiple ok, not available for condor runs)") help="execute only specified plugin (multiple ok, not available for condor runs)")
def main(args=None): def main(args=None):
"""Analyze event(s) """Analyze event(s)
...@@ -128,20 +135,23 @@ def main(args=None): ...@@ -128,20 +135,23 @@ def main(args=None):
args = parser.parse_args() args = parser.parse_args()
if args.condor: if args.condor:
logging.info("finding events...") t0, t1 = tuple(int(t) for t in args.condor)
after, before = tuple(int(t) for t in args.condor) if t1 == 0:
events = [e for e in find_events(after=after, before=before)] events = [LocklossEvent(t0)]
else:
logger.info("finding events...")
events = [e for e in find_events(after=t0, before=t1)]
to_analyze = [] to_analyze = []
for e in events: for e in events:
if e.analyzing: if e.analyzing:
logging.debug(" {} analyzing".format(e)) logger.debug(" {} analyzing".format(e))
continue continue
if e.analysis_succeeded and not args.rerun: if e.analysis_succeeded and not args.rerun:
logging.debug(" {} suceeded".format(e)) logger.debug(" {} suceeded".format(e))
continue continue
logging.debug(" adding {}".format(e)) logger.debug(" adding {}".format(e))
to_analyze.append(e) to_analyze.append(e)
logging.info("found {} events, {} to analyze.".format(len(events), len(to_analyze))) logger.info("found {} events, {} to analyze.".format(len(events), len(to_analyze)))
if not to_analyze: if not to_analyze:
return return
...@@ -165,13 +175,13 @@ def main(args=None): ...@@ -165,13 +175,13 @@ def main(args=None):
except OSError: except OSError:
pass pass
if not event: if not event:
logging.info("no event matching GPS {}. searching...".format(int(args.event))) logger.info("no event matching GPS {}. searching...".format(int(args.event)))
search.search((args.event-1, args.event+1)) search.search((args.event-1, args.event+1))
try: try:
event = LocklossEvent(args.event) event = LocklossEvent(args.event)
except OSError as e: except OSError as e:
sys.exit(e) sys.exit(e)
logging.info("analyzing event {}...".format(event)) logger.info("analyzing event {}...".format(event))
if not analyze_event(event, args.plugin): if not analyze_event(event, args.plugin):
sys.exit(1) sys.exit(1)
...@@ -179,10 +189,7 @@ def main(args=None): ...@@ -179,10 +189,7 @@ def main(args=None):
# direct execution of this module intended for condor jobs # direct execution of this module intended for condor jobs
if __name__ == '__main__': if __name__ == '__main__':
set_signal_handlers() set_signal_handlers()
logging.basicConfig( config_logger(level='DEBUG')
level='DEBUG',
format=config.LOG_FMT,
)
parser = argparse.ArgumentParser() parser = argparse.ArgumentParser()
parser.add_argument('event', type=int, parser.add_argument('event', type=int,
help="event ID / GPS second") help="event ID / GPS second")
......
...@@ -5,14 +5,16 @@ import shutil ...@@ -5,14 +5,16 @@ import shutil
import collections import collections
import subprocess import subprocess
# import uuid # import uuid
import logging
import datetime import datetime
from . import logger
from . import config from . import config
################################################## ##################################################
def _write_executable( def _write_executable(
module, module,
path, path,
...@@ -23,29 +25,19 @@ def _write_executable( ...@@ -23,29 +25,19 @@ def _write_executable(
The first argument is the name of the locklost module to be executed. The first argument is the name of the locklost module to be executed.
""" """
    PYTHONPATH = os.getenv('PYTHONPATH', '')
    NDSSERVER = os.getenv('NDSSERVER', '')
    LIGO_DATAFIND_SERVER = os.getenv('LIGO_DATAFIND_SERVER')
    with open(path, 'w') as f:
        f.write(f"""#!/bin/sh
export IFO={config.IFO}
export LOCKLOST_EVENT_ROOT={config.EVENT_ROOT}
export PYTHONPATH={PYTHONPATH}
export DATA_ACCESS={data_access}
export NDSSERVER={NDSSERVER}
export LIGO_DATAFIND_SERVER={LIGO_DATAFIND_SERVER}
exec {sys.executable} -m locklost.{module} "$@" 2>&1
""")
    os.chmod(path, 0o755)
...@@ -75,6 +67,7 @@ class CondorSubmit(object): ...@@ -75,6 +67,7 @@ class CondorSubmit(object):
('executable', '{}'.format(self.exec_path)), ('executable', '{}'.format(self.exec_path)),
('arguments', ' '.join(args)), ('arguments', ' '.join(args)),
('universe', universe), ('universe', universe),
('request_disk', '10 MB'),
('accounting_group', config.CONDOR_ACCOUNTING_GROUP), ('accounting_group', config.CONDOR_ACCOUNTING_GROUP),
('accounting_group_user', config.CONDOR_ACCOUNTING_GROUP_USER), ('accounting_group_user', config.CONDOR_ACCOUNTING_GROUP_USER),
('getenv', 'True'), ('getenv', 'True'),
...@@ -98,11 +91,12 @@ class CondorSubmit(object): ...@@ -98,11 +91,12 @@ class CondorSubmit(object):
def submit(self): def submit(self):
assert os.path.exists(self.sub_path), "Must write() before submitting" assert os.path.exists(self.sub_path), "Must write() before submitting"
logging.info("condor submit: {}".format(self.sub_path)) logger.info("condor submit: {}".format(self.sub_path))
subprocess.call(['condor_submit', self.sub_path]) subprocess.call(['condor_submit', self.sub_path])
################################################## ##################################################
class CondorDAG(object): class CondorDAG(object):
def __init__(self, condor_dir, module, submit_args_gen, log=None, use_test_job=False): def __init__(self, condor_dir, module, submit_args_gen, log=None, use_test_job=False):
...@@ -132,7 +126,7 @@ class CondorDAG(object): ...@@ -132,7 +126,7 @@ class CondorDAG(object):
break break
if not log: if not log:
log='logs/$(jobid)_$(cluster)_$(process).' log = 'logs/$(jobid)_$(cluster)_$(process).'
self.sub = CondorSubmit( self.sub = CondorSubmit(
self.condor_dir, self.condor_dir,
...@@ -153,28 +147,29 @@ class CondorDAG(object): ...@@ -153,28 +147,29 @@ class CondorDAG(object):
s = '' s = ''
for jid, sargs in enumerate(self.submit_args_gen()): for jid, sargs in enumerate(self.submit_args_gen()):
VARS = self.__gen_VARS(('jobid', jid), *sargs) VARS = self.__gen_VARS(('jobid', jid), *sargs)
s += ''' VARS = ' '.join(VARS)
JOB {jid} {sub} s += f'''
JOB {jid} {self.sub.sub_path}
VARS {jid} {VARS} VARS {jid} {VARS}
RETRY {jid} 1 RETRY {jid} 1
'''.format(jid=jid, '''
sub=self.sub.sub_path,
VARS=' '.join(VARS),
)
if self.use_test_job and jid == 0: if self.use_test_job and jid == 0:
s += '''RETRY {jid} 1 s += '''RETRY {jid} 1
'''.format(jid=jid) '''.format(jid=jid)
logging.info("condor DAG {} jobs".format(jid+1)) logger.info("condor DAG {} jobs".format(jid+1))
return s return s
@property
def lock(self):
return os.path.join(self.condor_dir, 'dag.lock')
@property @property
def has_lock(self): def has_lock(self):
lock = os.path.join(self.condor_dir, 'dag.lock') return os.path.exists(self.lock)
return os.path.exists(lock)
def write(self): def write(self):
if self.has_lock: if self.has_lock:
raise RuntimeError("DAG already running: {}".format(lock)) raise RuntimeError("DAG already running: {}".format(self.lock))
shutil.rmtree(self.condor_dir) shutil.rmtree(self.condor_dir)
try: try:
os.makedirs(os.path.join(self.condor_dir, 'logs')) os.makedirs(os.path.join(self.condor_dir, 'logs'))
...@@ -187,8 +182,8 @@ RETRY {jid} 1 ...@@ -187,8 +182,8 @@ RETRY {jid} 1
def submit(self): def submit(self):
assert os.path.exists(self.dag_path), "Must write() before submitting" assert os.path.exists(self.dag_path), "Must write() before submitting"
if self.has_lock: if self.has_lock:
raise RuntimeError("DAG already running: {}".format(lock)) raise RuntimeError("DAG already running: {}".format(self.lock))
logging.info("condor submit dag: {}".format(self.dag_path)) logger.info("condor submit dag: {}".format(self.dag_path))
subprocess.call(['condor_submit_dag', self.dag_path]) subprocess.call(['condor_submit_dag', self.dag_path])
print(""" print("""
Run the following to monitor condor job: Run the following to monitor condor job:
...@@ -198,6 +193,7 @@ tail -F {0}.* ...@@ -198,6 +193,7 @@ tail -F {0}.*
################################################## ##################################################
def find_jobs(): def find_jobs():
"""find all running condor jobs """find all running condor jobs
...@@ -239,7 +235,7 @@ def stop_jobs(job_list): ...@@ -239,7 +235,7 @@ def stop_jobs(job_list):
import htcondor import htcondor
schedd = htcondor.Schedd() schedd = htcondor.Schedd()
for job in job_list: for job in job_list:
logging.info("stopping job: {}".format(job_str(job))) logger.info("stopping job: {}".format(job_str(job)))
schedd.act( schedd.act(
htcondor.JobAction.Remove, htcondor.JobAction.Remove,
'ClusterId=={} && ProcId=={}'.format(job['ClusterId'], job['ProcId']), 'ClusterId=={} && ProcId=={}'.format(job['ClusterId'], job['ProcId']),
......
...@@ -6,22 +6,28 @@ IFO = os.getenv('IFO') ...@@ -6,22 +6,28 @@ IFO = os.getenv('IFO')
if not IFO: if not IFO:
sys.exit('Must specify IFO env var.') sys.exit('Must specify IFO env var.')
def ifochans(channels): def ifochans(channels):
return ['{}:{}'.format(IFO, chan) for chan in channels] return ['{}:{}'.format(IFO, chan) for chan in channels]
LOG_FMT = '%(asctime)s [%(module)s.%(funcName)s] %(message)s'
LOG_FMT = '%(asctime)s [%(module)s:%(funcName)s] %(message)s'
LOG_FMT_NOTIME = '[%(module)s:%(funcName)s] %(message)s'
EVENT_ROOT = os.getenv('LOCKLOST_EVENT_ROOT', '') EVENT_ROOT = os.getenv('LOCKLOST_EVENT_ROOT', '')
if os.path.exists(EVENT_ROOT): if os.path.exists(EVENT_ROOT):
EVENT_ROOT = os.path.abspath(EVENT_ROOT) EVENT_ROOT = os.path.abspath(EVENT_ROOT)
SEG_DIR = os.path.join(EVENT_ROOT, '.segments') SUMMARY_ROOT = os.getenv('LOCKLOST_SUMMARY_ROOT', '')
WEB_ROOT = os.getenv('LOCKLOST_WEB_ROOT', '') WEB_ROOT = os.getenv('LOCKLOST_WEB_ROOT', '')
SEG_DIR = os.path.join(EVENT_ROOT, '.segments')
CONDOR_ONLINE_DIR = os.path.join(EVENT_ROOT, '.condor_online') CONDOR_ONLINE_DIR = os.path.join(EVENT_ROOT, '.condor_online')
CONDOR_SEARCH_DIR = os.path.join(EVENT_ROOT, '.condor_search') CONDOR_SEARCH_DIR = os.path.join(EVENT_ROOT, '.condor_search')
CONDOR_ANALYZE_DIR = os.path.join(EVENT_ROOT, '.condor_analyze') CONDOR_ANALYZE_DIR = os.path.join(EVENT_ROOT, '.condor_analyze')
ONLINE_STAT_FILE = os.path.join(EVENT_ROOT, '.online-stat')
QUERY_DEFAULT_LIMIT = 50 QUERY_DEFAULT_LIMIT = 50
CONDOR_ACCOUNTING_GROUP = os.getenv('CONDOR_ACCOUNTING_GROUP') CONDOR_ACCOUNTING_GROUP = os.getenv('CONDOR_ACCOUNTING_GROUP')
...@@ -38,26 +44,24 @@ if IFO == 'H1': ...@@ -38,26 +44,24 @@ if IFO == 'H1':
GRD_NOMINAL_STATE = (600, 'NOMINAL_LOW_NOISE') GRD_NOMINAL_STATE = (600, 'NOMINAL_LOW_NOISE')
elif IFO == 'L1': elif IFO == 'L1':
GRD_NOMINAL_STATE = (2000, 'LOW_NOISE') GRD_NOMINAL_STATE = (2000, 'LOW_NOISE')
GRD_LOCKLOSS_STATES = [ GRD_LOCKLOSS_STATES = [(2, 'LOCKLOSS')]
(2, 'LOCKLOSS'),
# (3, 'LOCKLOSS_DRMI'),
]
if IFO == 'H1': if IFO == 'H1':
CDS_EVENT_ROOT = 'https://lhocds.ligo-wa.caltech.edu/exports/lockloss/events' CDS_EVENT_ROOT = 'https://lhocds.ligo-wa.caltech.edu/exports/lockloss/events'
elif IFO == 'L1': elif IFO == 'L1':
CDS_EVENT_ROOT = 'https://llocds.ligo-la.caltech.edu/data/lockloss/events' CDS_EVENT_ROOT = 'https://llocds.ligo-la.caltech.edu/data/lockloss/events'
O3_GPS_START = 1238112018 O4_GPS_START = 1368975618
SEARCH_STRIDE = 10000 SEARCH_STRIDE = 10000
DATA_ACCESS = os.getenv('DATA_ACCESS', 'gwpy') DATA_ACCESS = os.getenv('DATA_ACCESS', 'gwpy')
MAX_QUERY_LATENCY = 70 DATA_DEVSHM_ROOT = os.path.join("/dev/shm/lldetchar/", IFO)
DATA_DEVSHM_TIMEOUT = 80
DATA_DISCOVERY_SLEEP = 30 DATA_DISCOVERY_SLEEP = 30
DATA_DISCOVERY_TIMEOUT = 600 DATA_DISCOVERY_TIMEOUT = int(os.getenv('DATA_DISCOVERY_TIMEOUT', 1200))
# tag colors for web (foreground, background) # tag colors for web (foreground, background)
TAG_COLORS = { TAG_COLORS = {
...@@ -71,11 +75,19 @@ TAG_COLORS = { ...@@ -71,11 +75,19 @@ TAG_COLORS = {
'ANTHROPOGENIC': ('red', 'black'), 'ANTHROPOGENIC': ('red', 'black'),
'ADS_EXCURSION': ('mediumvioletred', 'plum'), 'ADS_EXCURSION': ('mediumvioletred', 'plum'),
'FSS_OSCILLATION': ('69650b', 'palegoldenrod'), 'FSS_OSCILLATION': ('69650b', 'palegoldenrod'),
'ETM_GLITCH': ('navy', 'red'),
'ISS': ('beige', 'darkolivegreen'),
'SEI_BS_TRANS': ('sienna', 'orange'), 'SEI_BS_TRANS': ('sienna', 'orange'),
'COMMISSIONING': ('lightseagreen', 'darkslategrey'), 'COMMISSIONING': ('lightseagreen', 'darkslategrey'),
'MAINTENANCE': ('lightseagreen', 'darkslategrey'), 'MAINTENANCE': ('lightseagreen', 'darkslategrey'),
'CALIBRATION': ('lightseagreen', 'darkslategrey'), 'CALIBRATION': ('lightseagreen', 'darkslategrey'),
'FAST_DRMI': ('blue', 'pink'), 'FAST_DRMI': ('blue', 'pink'),
'SOFT_LIMITERS': ('hotpink', 'green'),
'OMC_DCPD': ('#dcd0ff', '#734f96'),
'INITIAL_ALIGNMENT': ('yellow', 'purple'),
'PI_MONITOR': ('seashell', 'coral'),
'VIOLIN': ('linen', 'saddlebrown'),
'IMC': ('coral', 'palegoldenrod'),
} }
################################################## ##################################################
...@@ -105,7 +117,7 @@ if IFO == 'H1': ...@@ -105,7 +117,7 @@ if IFO == 'H1':
INDICATORS = { INDICATORS = {
410: { 410: {
'CHANNEL': 'H1:ASC-AS_A_DC_NSUM_OUT_DQ', 'CHANNEL': 'H1:ASC-AS_A_DC_NSUM_OUT_DQ',
'THRESHOLD': 10, 'THRESHOLD': -25,
'MINIMUM': 50, 'MINIMUM': 50,
}, },
102: { 102: {
...@@ -131,67 +143,165 @@ elif IFO == 'L1': ...@@ -131,67 +143,165 @@ elif IFO == 'L1':
################################################## ##################################################
PLOT_SATURATIONS = 8 PLOT_SATURATIONS = 8
SATURATION_THRESHOLD = 131072
PLOT_CHUNKSIZE = 3000 PLOT_CHUNKSIZE = 3000
# This corresponds to a change from 18 to 20-bit DAC for ETMX L3 channels. # This function should give the saturation threshold for a given channel
# ETMX L3 channels after this date have counts that are four times higher # It checks if a channel is a known 20, 18, or 16-bit daq
if IFO == 'H1': # If it is not a 20 or 16-bit daq is says it is a 18-bit daq
CHANGE_DAC_DATE = 1224961218 # Will default to 18-bit daq
if IFO == 'L1':
CHANGE_DAC_DATE = 1188518418
def get_saturation_threshold(chan, gps, IFO):
# ETM's are 20-bit and were changed at a date within our wanted # Threshold is divided by two for the counts can be positive or negative
# lockloss time range SATURATION_THRESHOLD_16 = (2**16) / 2
if IFO == 'H1': SATURATION_THRESHOLD_18 = (2**18) / 2
ETM_L3_CHANNELS = [ SATURATION_THRESHOLD_20 = (2**20) / 2
'SUS-ETMX_L3_MASTER_OUT_UR_DQ', SATURATION_THRESHOLD_28 = (2**28) / 2
'SUS-ETMX_L3_MASTER_OUT_UL_DQ', # Initializing bit channel lists so we don't run into issues when returning
'SUS-ETMX_L3_MASTER_OUT_LR_DQ', TWENTYEIGHT_BIT_CHANNELS = []
'SUS-ETMX_L3_MASTER_OUT_LL_DQ', TWENTY_BIT_CHANNELS = []
] EIGHTEEN_BIT_CHANNELS = []
if IFO == 'L1': SIXTEEN_BIT_CHANNELS = [
ETM_L3_CHANNELS = [ 'SUS-RM',
'SUS-ETMY_L3_MASTER_OUT_DC_DQ', 'SUS-ZM',
'SUS-ETMY_L3_MASTER_OUT_UR_DQ', 'SUS-OM1',
'SUS-ETMY_L3_MASTER_OUT_UL_DQ', 'SUS-OM2',
'SUS-ETMY_L3_MASTER_OUT_LR_DQ', 'SUS-OM3'
'SUS-ETMY_L3_MASTER_OUT_LL_DQ'
] ]
ETM_L3_CHANNELS = ifochans(ETM_L3_CHANNELS)
if IFO == 'L1':
# RM's, OM'S, and ZM'S are 16-bit # This corresponds to a change from 18 to 20-bit DAC for ETMY L3 channels.
SIXTEEN_BIT_CHANNELS = [ # ETMY L3 channels after this date have counts that are four times higher
'SUS-RM1_M1_MASTER_OUT_UR_DQ', CHANGE_DAC_DATE = 1188518418 # 2017/09/04
'SUS-RM1_M1_MASTER_OUT_UL_DQ', # This gets the DAC configuration in line with https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=63133
'SUS-RM1_M1_MASTER_OUT_LR_DQ', # Exact DAC times should be added later to better analyze old events
'SUS-RM1_M1_MASTER_OUT_LL_DQ', CHANGE_DAC_DATE2 = 1358033599 # 2023/01/17
'SUS-RM2_M1_MASTER_OUT_UR_DQ', # https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=63808
'SUS-RM2_M1_MASTER_OUT_UL_DQ', CHANGE_DAC_DATE3 = 1361911279 # 2023/03/03
'SUS-RM2_M1_MASTER_OUT_LR_DQ', elif IFO == 'H1':
'SUS-RM2_M1_MASTER_OUT_LL_DQ', # This corresponds to a change from 18 to 20-bit DAC for ETMX L3 channels.
'SUS-ZM1_M1_MASTER_OUT_UR_DQ', # ETMX L3 channels after this date have counts that are four times higher
'SUS-ZM1_M1_MASTER_OUT_UL_DQ', CHANGE_DAC_DATE = 1224961218 # 2018/10/30
'SUS-ZM1_M1_MASTER_OUT_LR_DQ', # This gets the DAC configuration in line with https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=66836
'SUS-ZM1_M1_MASTER_OUT_LL_DQ', # Exact DAC times should be added later to better analyze old events
'SUS-ZM2_M1_MASTER_OUT_UR_DQ', CHANGE_DAC_DATE2 = 1358024479 # 2023/01/17
'SUS-ZM2_M1_MASTER_OUT_UL_DQ', # Change from 20-bit to 28-bit DACs on H1 EX L1, L2, L3 only
'SUS-ZM2_M1_MASTER_OUT_LR_DQ', # See https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80167
'SUS-ZM2_M1_MASTER_OUT_LL_DQ', CHANGE_DAC_DATE3 = 1410642901 # 2024/09/18
'SUS-OM1_M1_MASTER_OUT_UR_DQ', else:
'SUS-OM1_M1_MASTER_OUT_UL_DQ', # Assuming missed channel is 18-bit
'SUS-OM1_M1_MASTER_OUT_LR_DQ', return SATURATION_THRESHOLD_18
'SUS-OM1_M1_MASTER_OUT_LL_DQ',
'SUS-OM2_M1_MASTER_OUT_UR_DQ', # Before CHANGE_DAC_DATE (L1 2017/09/04, H1 2018/10/30)
'SUS-OM2_M1_MASTER_OUT_UL_DQ', if gps < CHANGE_DAC_DATE:
'SUS-OM2_M1_MASTER_OUT_LR_DQ', if any(x in chan for x in SIXTEEN_BIT_CHANNELS):
'SUS-OM2_M1_MASTER_OUT_LL_DQ', return SATURATION_THRESHOLD_16
'SUS-OM3_M1_MASTER_OUT_UR_DQ', else:
'SUS-OM3_M1_MASTER_OUT_UL_DQ', return SATURATION_THRESHOLD_18
'SUS-OM3_M1_MASTER_OUT_LR_DQ', # Between CHANGE_DAC_DATE and CHANGE_DAC_DATE2
'SUS-OM3_M1_MASTER_OUT_LL_DQ', # (L1 2017/09/04-2023/01/17, H1 2018/10/30-2023/01/17)
] elif gps >= CHANGE_DAC_DATE and gps < CHANGE_DAC_DATE2:
SIXTEEN_BIT_CHANNELS = ifochans(SIXTEEN_BIT_CHANNELS) # ETM's are 20-bit and were changed at a date within our wanted
# lockloss time range
if IFO == 'L1':
ETM_L3_CHANNELS = 'SUS-ETMY_L3'
elif IFO == 'H1':
ETM_L3_CHANNELS = 'SUS-ETMX_L3'
if ETM_L3_CHANNELS in chan:
return SATURATION_THRESHOLD_20
elif any(x in chan for x in SIXTEEN_BIT_CHANNELS):
return SATURATION_THRESHOLD_16
else:
return SATURATION_THRESHOLD_18
# Between CHANGE_DAC_DATE2 and CHANGE_DAC_DATE3
# (L1 2023/01/17-2023/03/03, H1 2023/01/17-2024/09/18)
elif gps >= CHANGE_DAC_DATE2:
if IFO == 'L1':
TWENTY_BIT_CHANNELS = [
'SUS-ETM',
'SUS-ITM',
'SUS-BS',
'SUS-MC1',
'SUS-MC2',
'SUS-MC3',
'SUS-PRM',
'SUS-PR2',
'SUS-PR3',
'SUS-SR2',
'SUS-SRM_M2',
'SUS-SRM_M3'
]
EIGHTEEN_BIT_CHANNELS = [
'SUS-SRM_M1',
'SUS-SR3',
'SUS-OMC',
'SUS-IM'
]
elif IFO == 'H1':
TWENTY_BIT_CHANNELS = [
'SUS-ETMX_L1',
'SUS-ETMX_L2',
'SUS-ETMX_L3',
'SUS-ETMY_L1',
'SUS-ETMY_L2',
'SUS-ETMY_L3',
'SUS-ITMX_L2',
'SUS-ITMX_L3',
'SUS-ITMY_L2',
'SUS-ITMY_L3',
'SUS-BS_M2',
'SUS-SRM_M2',
'SUS-SRM_M3',
'SUS-SR2_M2',
'SUS-SR2_M3'
]
EIGHTEEN_BIT_CHANNELS = [
'SUS-ETMX_M0',
'SUS-ETMY_M0',
'SUS-ITMX_M0',
'SUS-ITMX_L1',
'SUS-ITMY_M0',
'SUS-ITMY_L1',
'SUS-MC1',
'SUS-MC2',
'SUS-MC3',
'SUS-PRM',
'SUS-PR2',
'SUS-PR3',
'SUS-SRM_M1',
'SUS-SR2_M1',
'SUS-SR3',
'SUS-BS_M1',
'SUS-OMC',
]
# After CHANGE_DAC_DATE3 (L1 2023/03/03, H1 2024/09/18)
if gps >= CHANGE_DAC_DATE3:
if IFO == 'L1':
TWENTY_BIT_CHANNELS = TWENTY_BIT_CHANNELS + EIGHTEEN_BIT_CHANNELS
elif IFO == 'H1':
TWENTYEIGHT_BIT_CHANNELS = [
'SUS-ETMX_L1',
'SUS-ETMX_L2',
'SUS-ETMX_L3'
]
for new_susstage in TWENTYEIGHT_BIT_CHANNELS:
for susstage in TWENTY_BIT_CHANNELS:
if new_susstage == susstage:
TWENTY_BIT_CHANNELS.remove(susstage)
# Return saturation thresholds
if any(x in chan for x in TWENTYEIGHT_BIT_CHANNELS):
return SATURATION_THRESHOLD_28
elif any(x in chan for x in TWENTY_BIT_CHANNELS):
return SATURATION_THRESHOLD_20
elif any(x in chan for x in EIGHTEEN_BIT_CHANNELS):
return SATURATION_THRESHOLD_18
elif any(x in chan for x in SIXTEEN_BIT_CHANNELS):
return SATURATION_THRESHOLD_16
else:
# Assuming missed channel is 18-bit
return SATURATION_THRESHOLD_18
ANALOG_BOARD_CHANNELS = [ ANALOG_BOARD_CHANNELS = [
'IMC-REFL_SERVO_SPLITMON', 'IMC-REFL_SERVO_SPLITMON',
...@@ -216,6 +326,11 @@ BOARD_SAT_THRESH = 9.5 ...@@ -216,6 +326,11 @@ BOARD_SAT_THRESH = 9.5
################################################## ##################################################
if IFO == 'H1':
ADS_GRD_STATE = (429, 'PREP_ASC_FOR_FULL_IFO')
elif IFO == 'L1':
ADS_GRD_STATE = (1197, 'START_SPOT_CENTERING')
ADS_CHANNELS = [] ADS_CHANNELS = []
for py in ['PIT', 'YAW']: for py in ['PIT', 'YAW']:
for num in [3, 4, 5]: for num in [3, 4, 5]:
...@@ -227,6 +342,74 @@ ADS_THRESH = 0.1 ...@@ -227,6 +342,74 @@ ADS_THRESH = 0.1
################################################## ##################################################
SOFT_SEARCH_WINDOW = [-20, 0]
ASC_INMON = []
for asc in ['CHARD', 'CSOFT', 'DHARD', 'DSOFT', 'SRC1', 'SRC2', 'PRC1', 'PRC2', 'INP1', 'INP2', 'MICH']:
for dof in ['P', 'Y']:
ASC_INMON.append(f'{IFO}:ASC-{asc}_{dof}_SMOOTH_INMON')
ASC_LIMIT = []
for asc in ['CHARD', 'CSOFT', 'DHARD', 'DSOFT', 'SRC1', 'SRC2', 'PRC1', 'PRC2', 'INP1', 'INP2', 'MICH']:
for dof in ['P', 'Y']:
ASC_LIMIT.append(f'{IFO}:ASC-{asc}_{dof}_SMOOTH_LIMIT')
ASC_ENABLE = []
for asc in ['CHARD', 'CSOFT', 'DHARD', 'DSOFT', 'SRC1', 'SRC2', 'PRC1', 'PRC2', 'INP1', 'INP2', 'MICH']:
for dof in ['P', 'Y']:
ASC_ENABLE.append(f'{IFO}:ASC-{asc}_{dof}_SMOOTH_ENABLE')
ASC = [j for i in [ASC_INMON, ASC_LIMIT, ASC_ENABLE] for j in i]
##################################################
OMC_DCPD_CHANNELS = [
'FEC-8_ADC_OVERFLOW_0_12',
'FEC-8_ADC_OVERFLOW_0_13',
]
OMC_DCPD_CHANNELS = ifochans(OMC_DCPD_CHANNELS)
OMC_DCPD_WINDOW = [-5, 1]
OMC_DCPD_BUFFER = 1
##################################################
if IFO == 'H1':
INIT_GRD_STATE = (307, 'SHUTTER_ALS')
ITM_CHANNELS = [] # pulls both ITM error channel signals
for arm in ['X', 'Y']:
for dof in ['PIT', 'YAW']:
ITM_CHANNELS.append('%s:ALS-%s_CAM_ITM_%s_ERR_OUT16' % (IFO, arm, dof))
ALS_WFS_DOF1 = [] # channels for ALS DOF1 WFS
for arm in ['X', 'Y']:
for dof in ['P', 'Y']:
ALS_WFS_DOF1.append('%s:ALS-%s_WFS_DOF_1_%s_OUT16' % (IFO, arm, dof))
ALS_WFS_DOF2 = [] # channels for ALS DOF2 WFS
for arm in ['X', 'Y']:
for dof in ['P', 'Y']:
ALS_WFS_DOF2.append('%s:ALS-%s_WFS_DOF_2_%s_OUT16' % (IFO, arm, dof))
ALS_WFS_DOF3 = [] # channels for ALS DOF3 WFS
for arm in ['X', 'Y']:
for dof in ['P', 'Y']:
ALS_WFS_DOF3.append('%s:ALS-%s_WFS_DOF_3_%s_OUT16' % (IFO, arm, dof))
DOF_1_SCALE = 0.0004 # scaled values for all DOF WFS
DOF_2_SCALE = 10**-7
DOF_3_SCALE = 0.001
WFS_THRESH = 0.8
INITIAL_ALIGNMENT_SEARCH_WINDOW = [-20, 0]
INITIAL_ALIGNMENT_THRESH = 0.15
##################################################
LPY_CHANNELS = [ LPY_CHANNELS = [
'SUS-ETMX_L2_MASTER_OUT_UR_DQ', 'SUS-ETMX_L2_MASTER_OUT_UR_DQ',
'SUS-ETMX_L2_MASTER_OUT_UL_DQ', 'SUS-ETMX_L2_MASTER_OUT_UL_DQ',
...@@ -284,7 +467,7 @@ ADC_OVERFLOWS = { ...@@ -284,7 +467,7 @@ ADC_OVERFLOWS = {
if IFO == 'H1': if IFO == 'H1':
ADC_OVERFLOWS['OMC'] = { ADC_OVERFLOWS['OMC'] = {
'ADC_ID': 8, 'ADC_ID': 8,
'num_bits': 3, 'num_bits': 1,
'bit_exclude': [(2, 4), (2, 13), (2, 15)], 'bit_exclude': [(2, 4), (2, 13), (2, 15)],
} }
...@@ -312,8 +495,8 @@ elif IFO == 'L1': ...@@ -312,8 +495,8 @@ elif IFO == 'L1':
# Set seismic band thresholds based on IFO site # Set seismic band thresholds based on IFO site
if IFO == 'H1': if IFO == 'H1':
SEI_EARTHQUAKE_THRESH = 300 SEI_EARTHQUAKE_THRESH = 300
SEI_ANTHROPOGENIC_THRESH = 1000 # placeholder SEI_ANTHROPOGENIC_THRESH = 1000 # placeholder
SEI_MICROSEISMIC_THRESH = 3000 # placeholder SEI_MICROSEISMIC_THRESH = 3000 # placeholder
if IFO == 'L1': if IFO == 'L1':
SEI_EARTHQUAKE_THRESH = 600 SEI_EARTHQUAKE_THRESH = 600
SEI_ANTHROPOGENIC_THRESH = 1000 SEI_ANTHROPOGENIC_THRESH = 1000
...@@ -324,7 +507,7 @@ if IFO == 'L1': ...@@ -324,7 +507,7 @@ if IFO == 'L1':
# channel, and tag designation # channel, and tag designation
SEISMIC_CONFIG = { SEISMIC_CONFIG = {
'EQ band': { 'EQ band': {
'channel': 'ISI-GND_STS_ITMY_Z_BLRMS_30M_100M', 'channel': 'ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON',
'savefile': 'seismic_eq.png', 'savefile': 'seismic_eq.png',
'dq_channel': 'ISI-GND_STS_ITMY_Z_DQ', 'dq_channel': 'ISI-GND_STS_ITMY_Z_DQ',
'axis': 'Z', 'axis': 'Z',
...@@ -389,8 +572,9 @@ SEISMIC_CONFIG = { ...@@ -389,8 +572,9 @@ SEISMIC_CONFIG = {
}, },
} }
# Loop to convert channel names to the IFO channel names # Loop to convert channel names to the IFO channel names. Could not
# Could not use ifochans function since ifochans was only pulling one character at a time from channel string # use ifochans function since ifochans was only pulling one character
# at a time from channel string
for i in SEISMIC_CONFIG: for i in SEISMIC_CONFIG:
SEISMIC_CONFIG[i]['channel'] = '{}:{}'.format(IFO, SEISMIC_CONFIG[i]['channel']) SEISMIC_CONFIG[i]['channel'] = '{}:{}'.format(IFO, SEISMIC_CONFIG[i]['channel'])
SEISMIC_CONFIG[i]['dq_channel'] = '{}:{}'.format(IFO, SEISMIC_CONFIG[i]['dq_channel']) SEISMIC_CONFIG[i]['dq_channel'] = '{}:{}'.format(IFO, SEISMIC_CONFIG[i]['dq_channel'])
...@@ -583,8 +767,8 @@ LSC_ASC_CHANNELS['ASC Control Signals (Vertex)'] = [ ...@@ -583,8 +767,8 @@ LSC_ASC_CHANNELS['ASC Control Signals (Vertex)'] = [
'ASC-SRC1_P_OUT_DQ', 'ASC-SRC1_P_OUT_DQ',
'ASC-SRC2_Y_OUT_DQ', 'ASC-SRC2_Y_OUT_DQ',
'ASC-SRC2_P_OUT_DQ', 'ASC-SRC2_P_OUT_DQ',
'ASC-INP1_P_OUT_DQ',
'ASC-INP1_Y_OUT_DQ', 'ASC-INP1_Y_OUT_DQ',
'ASC-INP1_P_OUT_DQ',
] ]
LSC_ASC_CHANNELS['ASC Centering Control Signals'] = [ LSC_ASC_CHANNELS['ASC Centering Control Signals'] = [
'ASC-DC1_Y_OUT_DQ', 'ASC-DC1_Y_OUT_DQ',
...@@ -598,3 +782,93 @@ LSC_ASC_CHANNELS['ASC Centering Control Signals'] = [ ...@@ -598,3 +782,93 @@ LSC_ASC_CHANNELS['ASC Centering Control Signals'] = [
] ]
for key in LSC_ASC_CHANNELS: for key in LSC_ASC_CHANNELS:
LSC_ASC_CHANNELS[key] = ifochans(LSC_ASC_CHANNELS[key]) LSC_ASC_CHANNELS[key] = ifochans(LSC_ASC_CHANNELS[key])
##################################################
if IFO == 'H1':
PI_DICT = {
f'{IFO}:OMC-PI_DOWNCONV_DC2_SIG_OUT_DQ': '80.0 kHz (PI28, PI29)',
f'{IFO}:OMC-PI_DOWNCONV_DC6_SIG_OUT_DQ': '14.5 kHz (PI15, PI16)',
f'{IFO}:OMC-PI_DOWNCONV_DC7_SIG_OUT_DQ': '10.0 kHz (PI24, PI31)',
}
PI_SAT_THRESH = 5e-3
PI_YLABEL = 'Downconverted Signal [mA]' # BP, shift down, then LP
PI_YLIMS = [-1.5e-2, 1.5e-2]
if IFO == 'L1':
PI_DICT = {
f'{IFO}:OMC-BLRMS_32_BAND1_LOG10_OUTMON': '2.5 - 8 kHz',
f'{IFO}:OMC-BLRMS_32_BAND2_LOG10_OUTMON': '8 - 12 kHz',
f'{IFO}:OMC-BLRMS_32_BAND3_LOG10_OUTMON': '12 - 16 kHz',
f'{IFO}:OMC-BLRMS_32_BAND4_LOG10_OUTMON': '16 - 20 kHz',
f'{IFO}:OMC-BLRMS_32_BAND5_LOG10_OUTMON': '20 - 24 kHz',
f'{IFO}:OMC-BLRMS_32_BAND6_LOG10_OUTMON': '24 - 28 kHz',
f'{IFO}:OMC-BLRMS_32_BAND7_LOG10_OUTMON': '28 - 32 kHz',
}
PI_SAT_THRESH = 0.5
PI_YLABEL = 'Band-Limited RMS [log]'
PI_YLIMS = [0, 1]
PI_CHANNELS = []
for chan in PI_DICT:
PI_CHANNELS.append(chan)
PI_SEARCH_WINDOW = [-(8*60), 5]
##################################################
VIOLIN_CHANNELS = [
'SUS-ITMX_L2_DAMP_MODE3_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE4_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE5_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE3_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE4_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE5_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE6_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE3_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE4_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE5_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE6_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE3_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE4_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE5_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE6_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE11_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE12_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE13_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE14_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE11_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE12_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE13_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE14_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE11_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE12_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE13_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE14_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE11_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE12_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE13_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE14_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE21_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE22_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE23_RMSLP_LOG10_OUTMON',
'SUS-ITMX_L2_DAMP_MODE24_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE21_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE22_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE23_RMSLP_LOG10_OUTMON',
'SUS-ITMY_L2_DAMP_MODE24_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE21_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE22_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE23_RMSLP_LOG10_OUTMON',
'SUS-ETMX_L2_DAMP_MODE24_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE21_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE22_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE23_RMSLP_LOG10_OUTMON',
'SUS-ETMY_L2_DAMP_MODE24_RMSLP_LOG10_OUTMON',
]
VIOLIN_CHANNELS = ifochans(VIOLIN_CHANNELS)
VIOLIN_SAT_THRESH = -15.5
VIOLIN_SEARCH_WINDOW = [-(5*60), 5]
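# A minimal sketch of how these violin-mode settings might be consumed
# by a check plugin (illustrative only; the actual logic lives in the
# plugins package and may differ): flag the event if any damped-mode
# RMS monitor exceeds VIOLIN_SAT_THRESH inside the search window.
def _violin_elevated(bufs, gps):
    for buf in bufs:
        t = buf.tarray
        in_window = (t >= gps + VIOLIN_SEARCH_WINDOW[0]) & (t <= gps + VIOLIN_SEARCH_WINDOW[1])
        if (buf.data[in_window] > VIOLIN_SAT_THRESH).any():
            return True
    return False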
import os import os
import glob import glob
import time import time
import numpy as np
from contextlib import closing from contextlib import closing
import logging
import numpy as np
import nds2 import nds2
import gpstime import gpstime
import gwdatafind
import gwpy.timeseries import gwpy.timeseries
from . import logger
from . import config from . import config
################################################## ##################################################
def nds_connection(): def nds_connection():
try: try:
HOSTPORT = os.getenv('NDSSERVER').split(',')[0].split(':') HOSTPORT = os.getenv('NDSSERVER').split(',')[0].split(':')
...@@ -24,7 +26,7 @@ def nds_connection(): ...@@ -24,7 +26,7 @@ def nds_connection():
PORT = int(HOSTPORT[1]) PORT = int(HOSTPORT[1])
except IndexError: except IndexError:
PORT = 31200 PORT = 31200
logging.debug("NDS connect: {}:{}".format(HOST, PORT)) logger.debug("NDS connect: {}:{}".format(HOST, PORT))
conn = nds2.connection(HOST, PORT) conn = nds2.connection(HOST, PORT)
conn.set_parameter('GAP_HANDLER', 'STATIC_HANDLER_NAN') conn.set_parameter('GAP_HANDLER', 'STATIC_HANDLER_NAN')
# conn.set_parameter('ITERATE_USE_GAP_HANDLERS', 'false') # conn.set_parameter('ITERATE_USE_GAP_HANDLERS', 'false')
...@@ -33,7 +35,11 @@ def nds_connection(): ...@@ -33,7 +35,11 @@ def nds_connection():
def nds_fetch(channels, start, stop): def nds_fetch(channels, start, stop):
with closing(nds_connection()) as conn: with closing(nds_connection()) as conn:
bufs = conn.fetch(int(start), int(stop), channels) bufs = conn.fetch(
int(np.round(start)),
int(np.round(stop)),
channels,
)
return [ChannelBuf.from_nds(buf) for buf in bufs] return [ChannelBuf.from_nds(buf) for buf in bufs]
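# Usage sketch for nds_fetch (a hedged example; the GPS times are
# illustrative and the guardian state channel is just a convenient
# single-channel choice):
def _example_nds_fetch():
    bufs = nds_fetch([config.GRD_STATE_N_CHANNEL], 1370000000, 1370000016)
    for buf in bufs:
        # each ChannelBuf carries the channel name, sample rate and data
        print(buf.channel, buf.sample_rate, len(buf))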
...@@ -57,6 +63,7 @@ def nds_iterate(channels, start_end=None): ...@@ -57,6 +63,7 @@ def nds_iterate(channels, start_end=None):
################################################## ##################################################
class ChannelBuf(object): class ChannelBuf(object):
def __init__(self, channel, data, gps_start, gps_nanoseconds, sample_rate): def __init__(self, channel, data, gps_start, gps_nanoseconds, sample_rate):
self.channel = channel self.channel = channel
...@@ -71,7 +78,7 @@ class ChannelBuf(object): ...@@ -71,7 +78,7 @@ class ChannelBuf(object):
gps=self.gps_start, gps=self.gps_start,
sec=self.duration, sec=self.duration,
nsamples=len(self), nsamples=len(self),
) )
def __len__(self): def __len__(self):
return len(self.data) return len(self.data)
...@@ -132,22 +139,27 @@ class ChannelBuf(object): ...@@ -132,22 +139,27 @@ class ChannelBuf(object):
################################################## ##################################################
def frame_fetch(channels, start, stop):
conn = glue.datafind.GWDataFindHTTPConnection() # def frame_fetch(channels, start, stop):
cache = conn.find_frame_urls(config.IFO[0], IFO+'_R', start, stop, urltype='file') # conn = glue.datafind.GWDataFindHTTPConnection()
fc = frutils.FrameCache(cache, verbose=True) # cache = conn.find_frame_urls(config.IFO[0], IFO+'_R', start, stop, urltype='file')
return [ChannelBuf.from_frdata(fc.fetch(channel, start, stop)) for channel in channels] # fc = frutils.FrameCache(cache, verbose=True)
# return [ChannelBuf.from_frdata(fc.fetch(channel, start, stop)) for channel in channels]
def frame_fetch_gwpy(channels, start, stop): def frame_fetch_gwpy(channels, start, stop):
gps_now = int(gpstime.tconvert('now')) gps_now = int(gpstime.tconvert('now'))
### grab from shared memory if data is still available, otherwise use datafind # grab from shared memory if data is still available, otherwise use datafind
if gps_now - start < config.MAX_QUERY_LATENCY: if gps_now - start < config.DATA_DEVSHM_TIMEOUT:
frames = glob.glob("/dev/shm/lldetchar/{}/*".format(config.IFO)) frames = glob.glob(config.DATA_DEVSHM_ROOT + "/*")
data = gwpy.timeseries.TimeSeriesDict.read(frames, channels, start=start, end=stop) try:
data = gwpy.timeseries.TimeSeriesDict.read(frames, channels, start=start, end=stop)
except IndexError:
raise RuntimeError(f"data not found in {config.DATA_DEVSHM_ROOT}")
else: else:
data = gwpy.timeseries.TimeSeriesDict.find(channels, start, stop, frametype=config.IFO+'_R') frametype = f'{config.IFO}_R'
data = gwpy.timeseries.TimeSeriesDict.find(channels, start, stop, frametype=frametype)
return [ChannelBuf.from_gwTS(data[channel]) for channel in channels] return [ChannelBuf.from_gwTS(data[channel]) for channel in channels]
...@@ -162,71 +174,42 @@ def fetch(channels, segment, as_dict=False): ...@@ -162,71 +174,42 @@ def fetch(channels, segment, as_dict=False):
method = config.DATA_ACCESS.lower() method = config.DATA_ACCESS.lower()
if method == 'nds': if method == 'nds':
func = nds_fetch func = nds_fetch
elif method == 'fr': # elif method == 'fr':
func = frame_fetch # func = frame_fetch
elif method == 'gwpy': elif method == 'gwpy':
func = frame_fetch_gwpy func = frame_fetch_gwpy
else: else:
raise ValueError("unknown data access method: {}".format(DATA_ACCESS)) raise ValueError(f"unknown data access method: {method}")
logging.debug("{}({}, {}, {})".format(func.__name__, channels, start, stop)) logger.debug("{}({}, {}, {})".format(func.__name__, channels, start, stop))
bufs = func(channels, start, stop)
if as_dict: # keep attempting to pull data in a loop, until we get the data or
return {buf.channel:buf for buf in bufs} # the timeout is reached
else: bufs = None
return bufs tstart = time.monotonic()
while time.monotonic() <= tstart + config.DATA_DISCOVERY_TIMEOUT:
def data_available(segment):
"""Return True if the specified data segment is available.
Based purely on the end time of the segment.
"""
end = int(segment[-1])
start = end - 1
ifo = config.IFO[0]
method = config.DATA_ACCESS.lower()
if method == 'nds':
buf = nds2.fetch([config.GRD_STATE_N_CHANNEL], start, end)
return buf is not None
elif method == 'fr' or method == 'gwpy':
ftype = '{}_R'.format(config.IFO)
logging.debug('gwdatafind.find_times({}, {}, {}, {})'.format(
ifo, ftype, start, end,
))
segs = gwdatafind.find_times(
ifo, ftype, start, end,
)
try: try:
seg_available = segs.extent() bufs = func(channels, start, stop)
except ValueError: break
return False except RuntimeError as e:
return segment[1] >= seg_available[1] logger.info(
"data not available, sleeping for {} seconds ({})".format(
config.DATA_DISCOVERY_SLEEP,
e,
)
)
time.sleep(config.DATA_DISCOVERY_SLEEP)
else: else:
raise ValueError("unknown data access method: {}".format(DATA_ACCESS)) raise RuntimeError(f"data discovery timeout reached ({config.DATA_DISCOVERY_TIMEOUT}s)")
def data_wait(segment): if as_dict:
"""Wait until data segment is available. return {buf.channel: buf for buf in bufs}
else:
Returns True if data is available, or False if the return bufs
DATA_DISCOVERY_TIMEOUT was reached.
"""
start = time.monotonic()
while time.monotonic() <= start + config.DATA_DISCOVERY_TIMEOUT:
if data_available(segment):
return True
logging.info(
"no data available in LDR, sleeping for {} seconds".format(
config.DATA_DISCOVERY_SLEEP,
)
)
time.sleep(config.DATA_DISCOVERY_SLEEP)
return False
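# Usage sketch for the retry-aware fetch() above (hedged; segment values
# are illustrative). fetch() now blocks until the data appears or
# config.DATA_DISCOVERY_TIMEOUT is exceeded, which is why the old
# data_available()/data_wait() helpers could be dropped:
def _example_fetch():
    segment = (1370000000, 1370000016)
    bufs = fetch([config.GRD_STATE_N_CHANNEL], segment, as_dict=True)
    for channel, buf in bufs.items():
        print(channel, buf.duration)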
################################################## ##################################################
def gen_transitions(buf, previous=None): def gen_transitions(buf, previous=None):
"""Generator of transitions in ChannelBuf """Generator of transitions in ChannelBuf
......
import os import os
import shutil import shutil
import logging
import numpy as np import numpy as np
from gpstime import tconvert from gpstime import tconvert
from . import logger
from . import config from . import config
################################################## ##################################################
def _trans_int(trans): def _trans_int(trans):
return tuple(int(i) for i in trans) return tuple(int(i) for i in trans)
...@@ -103,7 +106,7 @@ class LocklossEvent(object): ...@@ -103,7 +106,7 @@ class LocklossEvent(object):
try: try:
with open(self.path('guard_state_end_gps')) as f: with open(self.path('guard_state_end_gps')) as f:
self.__transition_gps = float(f.read().strip()) self.__transition_gps = float(f.read().strip())
except: except Exception:
# if this is an old event and we don't have a # if this is an old event and we don't have a
# guard_state_end_gps file, just return the id (int of # guard_state_end_gps file, just return the id (int of
# transition time) as the transition GPS # transition time) as the transition GPS
...@@ -142,7 +145,7 @@ class LocklossEvent(object): ...@@ -142,7 +145,7 @@ class LocklossEvent(object):
if transition_index: if transition_index:
with open(event.path('transition_index'), 'w') as f: with open(event.path('transition_index'), 'w') as f:
f.write('{:.0f} {:.0f}\n'.format(*transition_index)) f.write('{:.0f} {:.0f}\n'.format(*transition_index))
logging.info("event created: {}".format(event.path())) logger.info("event created: {}".format(event.path()))
return event return event
def _scrub(self, archive=True): def _scrub(self, archive=True):
...@@ -166,7 +169,7 @@ class LocklossEvent(object): ...@@ -166,7 +169,7 @@ class LocklossEvent(object):
except OSError: except OSError:
return return
bak_dir = self.path('bak', str(last_time)) bak_dir = self.path('bak', str(last_time))
logging.info("archiving old artifacts: {}".format(bak_dir)) logger.info("archiving old artifacts: {}".format(bak_dir))
try: try:
os.makedirs(bak_dir) os.makedirs(bak_dir)
except OSError: except OSError:
...@@ -176,7 +179,7 @@ class LocklossEvent(object): ...@@ -176,7 +179,7 @@ class LocklossEvent(object):
continue continue
shutil.move(self.path(f), bak_dir) shutil.move(self.path(f), bak_dir)
else: else:
logging.info("purging old artifacts...") logger.info("purging old artifacts...")
for f in os.listdir(self.path()): for f in os.listdir(self.path()):
if f in preserve_files: if f in preserve_files:
continue continue
...@@ -251,8 +254,13 @@ class LocklossEvent(object): ...@@ -251,8 +254,13 @@ class LocklossEvent(object):
path = self.path('status') path = self.path('status')
if not os.path.exists(path): if not os.path.exists(path):
return None return None
with open(path, 'r') as f: # FIXME: why do we get FileNotFoundError on the web server
return int(f.readline().strip()) # even after the above test passed?
try:
with open(path, 'r') as f:
return int(f.readline().strip())
except FileNotFoundError:
return None
@property @property
def analysis_succeeded(self): def analysis_succeeded(self):
...@@ -307,7 +315,7 @@ class LocklossEvent(object): ...@@ -307,7 +315,7 @@ class LocklossEvent(object):
for tag in tags: for tag in tags:
assert ' ' not in tag, "Spaces are not permitted in tags." assert ' ' not in tag, "Spaces are not permitted in tags."
open(self._tag_path(tag), 'a').close() open(self._tag_path(tag), 'a').close()
logging.info("added tag: {}".format(tag)) logger.info("added tag: {}".format(tag))
def rm_tag(self, *tags): def rm_tag(self, *tags):
"""remove tags""" """remove tags"""
...@@ -383,6 +391,7 @@ class LocklossEvent(object): ...@@ -383,6 +391,7 @@ class LocklossEvent(object):
################################################## ##################################################
def _generate_all_events(): def _generate_all_events():
"""generate all events in reverse chronological order""" """generate all events in reverse chronological order"""
event_root = config.EVENT_ROOT event_root = config.EVENT_ROOT
...@@ -436,7 +445,8 @@ def find_events(**kwargs): ...@@ -436,7 +445,8 @@ def find_events(**kwargs):
if states and trans_idx not in states: if states and trans_idx not in states:
continue continue
if 'tag' in kwargs and kwargs['tag'] not in event.list_tags(): if 'tag' in kwargs:
continue if not set(kwargs['tag']) <= set(event.list_tags()):
continue
yield event yield event
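# Usage sketch for the updated tag filtering (hedged; the tag
# combination is illustrative). 'tag' is now treated as a collection,
# and an event matches only if it carries every requested tag:
def _example_find_events():
    for event in find_events(after=1370000000, tag=['BRS_GLITCH', 'BOARD_SAT']):
        print(event.path(), event.list_tags())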
import os import os
import sys
import argparse import argparse
import logging
from . import set_signal_handlers from . import set_signal_handlers
from . import config_logger
from . import config from . import config
from . import search from . import search
from . import analyze from . import analyze
from . import condor from . import condor
################################################## ##################################################
def start_job(): def start_job():
try: try:
os.makedirs(config.CONDOR_ONLINE_DIR) os.makedirs(config.CONDOR_ONLINE_DIR)
...@@ -36,28 +39,31 @@ def _parser_add_arguments(parser): ...@@ -36,28 +39,31 @@ def _parser_add_arguments(parser):
title='Commands', title='Commands',
metavar='<command>', metavar='<command>',
dest='cmd', dest='cmd',
#help=argparse.SUPPRESS,
) )
parser.required = True parser.required = True
ep = parser.add_parser( sp = parser.add_parser(
'exec', 'exec',
help="direct execute online analysis", help="directly execute online search",
)
sp.add_argument(
'--analyze', action='store_true',
help="execute the condor event analysis callback when events are found",
) )
parser.add_parser( parser.add_parser(
'start', 'start',
help="start condor online analysis", help="start condor online search",
) )
parser.add_parser( parser.add_parser(
'stop', 'stop',
help="stop condor online analysis", help="stop condor online search",
) )
parser.add_parser( parser.add_parser(
'restart', 'restart',
help="restart condor online analysis", help="restart condor online search",
) )
parser.add_parser( parser.add_parser(
'status', 'status',
help="print condor job status", help="print online condor job status",
) )
return parser return parser
...@@ -72,20 +78,32 @@ def main(args=None): ...@@ -72,20 +78,32 @@ def main(args=None):
args = parser.parse_args() args = parser.parse_args()
if args.cmd == 'exec': if args.cmd == 'exec':
if args.analyze:
event_callback = analyze.analyze_condor
stat_file = config.ONLINE_STAT_FILE
if not os.path.exists(stat_file):
open(stat_file, 'w').close()
else:
event_callback = None
stat_file = None
search.search_iterate( search.search_iterate(
event_callback=analyze.analyze_condor, event_callback=event_callback,
stat_file=stat_file,
) )
elif args.cmd == 'start': elif args.cmd in ['start', 'stop', 'restart']:
print("The LIGO site deployments of the online search are now handled via systemd --user on the 'detechar.ligo-?a.caltech.edu' hosts.", file=sys.stderr)
print("See \"systemctl --user status\" for more info.", file=sys.stderr)
if input(f"Type 'yes' if you're really sure you want to execute the online condor '{args.cmd}' command: ") != 'yes':
print("aborting.", file=sys.stderr)
exit(1)
ojobs, ajobs = condor.find_jobs() ojobs, ajobs = condor.find_jobs()
assert not ojobs, "There are already currently acitve jobs:\n{}".format( if args.cmd == 'start':
'\n'.join([condor.job_str(job) for job in ojobs])) assert not ojobs, "There are already currently acitve jobs:\n{}".format(
start_job() '\n'.join([condor.job_str(job) for job in ojobs]))
else:
elif args.cmd in ['stop', 'restart']: condor.stop_jobs(ojobs)
ojobs, ajobs = condor.find_jobs() if args.cmd in ['start', 'restart']:
condor.stop_jobs(ojobs)
if args.cmd == 'restart':
start_job() start_job()
elif args.cmd == 'status': elif args.cmd == 'status':
...@@ -97,12 +115,10 @@ def main(args=None): ...@@ -97,12 +115,10 @@ def main(args=None):
# direct execution of this module intended for condor jobs # direct execution of this module intended for condor jobs
if __name__ == '__main__': if __name__ == '__main__':
set_signal_handlers() set_signal_handlers()
logging.basicConfig( config_logger(level='DEBUG')
level='DEBUG', stat_file = config.ONLINE_STAT_FILE
format='%(asctime)s %(message)s' if not os.path.exists(stat_file):
) open(stat_file, 'w').close()
stat_file = os.path.join(config.CONDOR_ONLINE_DIR, 'stat')
open(stat_file, 'w').close()
search.search_iterate( search.search_iterate(
event_callback=analyze.analyze_condor, event_callback=analyze.analyze_condor,
stat_file=stat_file, stat_file=stat_file,
......
import io import io
import sys import sys
import logging import xml.etree.ElementTree as ET
import numpy as np import numpy as np
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
import matplotlib.dates as dates import matplotlib.dates as dates
import xml.etree.ElementTree as ET
import gpstime import gpstime
from gwpy.segments import DataQualityFlag from gwpy.segments import DataQualityFlag
from . import logger
from . import config from . import config
from . import data from . import data
from .event import find_events from .event import find_events
# plt.rc('text', usetex=True) # plt.rc('text', usetex=True)
################################################## ##################################################
def gtmin(gt): def gtmin(gt):
"""GPS time rounded to minute from gpstime object""" """GPS time rounded to minute from gpstime object"""
return int(gt.gps() / 60) * 60 return int(gt.gps() / 60) * 60
...@@ -74,12 +78,12 @@ def plot_history(path, lookback='7 days ago', draw_segs=False): ...@@ -74,12 +78,12 @@ def plot_history(path, lookback='7 days ago', draw_segs=False):
bufs = data.nds_fetch([channel], gtmin(start), gtmin(end)) bufs = data.nds_fetch([channel], gtmin(start), gtmin(end))
state_index, state_time = bufs[0].yt() state_index, state_time = bufs[0].yt()
if np.all(np.isnan(state_index)): if np.all(np.isnan(state_index)):
logging.warning("state data [{}] is all nan??".format(channel)) logger.warning("state data [{}] is all nan??".format(channel))
# find events # find events
events = list(find_events(after=start.gps())) events = list(find_events(after=start.gps()))
fig = plt.figure(figsize=(16, 4)) #, dpi=80) fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(111) ax = fig.add_subplot(111)
# plot analyzing # plot analyzing
...@@ -180,6 +184,7 @@ def plot_history(path, lookback='7 days ago', draw_segs=False): ...@@ -180,6 +184,7 @@ def plot_history(path, lookback='7 days ago', draw_segs=False):
################################################## ##################################################
def main(): def main():
path = sys.argv[1] path = sys.argv[1]
plot_history(path) plot_history(path)
......
...@@ -18,36 +18,47 @@ from matplotlib import rcParamsDefault ...@@ -18,36 +18,47 @@ from matplotlib import rcParamsDefault
plt.rcParams.update(rcParamsDefault) plt.rcParams.update(rcParamsDefault)
PLUGINS = collections.OrderedDict()
def register_plugin(func): def register_plugin(func):
PLUGINS.update([(func.__name__, func)]) PLUGINS.update([(func.__name__, func)])
PLUGINS = collections.OrderedDict()
from .discover import discover_data from .discover import discover_data
register_plugin(discover_data) register_plugin(discover_data)
from .history import find_previous_state
register_plugin(find_previous_state)
from .refine import refine_time from .refine import refine_time
register_plugin(refine_time) register_plugin(refine_time)
from .ifo_mode import check_ifo_mode from .ifo_mode import check_ifo_mode
register_plugin(check_ifo_mode) register_plugin(check_ifo_mode)
from .initial_alignment import initial_alignment_check
register_plugin(initial_alignment_check)
from .saturations import find_saturations from .saturations import find_saturations
register_plugin(find_saturations) register_plugin(find_saturations)
from .lpy import find_lpy from .lpy import find_lpy
register_plugin(find_lpy) register_plugin(find_lpy)
from .lsc_asc import plot_lsc_asc from .pi import check_pi
register_plugin(plot_lsc_asc) register_plugin(check_pi)
from .glitch import analyze_glitches from .violin import check_violin
register_plugin(analyze_glitches) register_plugin(check_violin)
from .overflows import find_overflows from .fss_oscillation import check_fss
register_plugin(find_overflows) register_plugin(check_fss)
from .iss import check_iss
register_plugin(check_iss)
from .etm_glitch import check_glitch
register_plugin(check_glitch)
from .ham6_power import power_in_ham6
register_plugin(power_in_ham6)
from .brs import check_brs from .brs import check_brs
register_plugin(check_brs) register_plugin(check_brs)
...@@ -58,15 +69,34 @@ register_plugin(check_boards) ...@@ -58,15 +69,34 @@ register_plugin(check_boards)
from .wind import check_wind from .wind import check_wind
register_plugin(check_wind) register_plugin(check_wind)
from .overflows import find_overflows
register_plugin(find_overflows)
from .lsc_asc import plot_lsc_asc
register_plugin(plot_lsc_asc)
from .glitch import analyze_glitches
register_plugin(analyze_glitches)
from .darm import plot_darm
register_plugin(plot_darm)
# Check other possible tags to add
from .ads_excursion import check_ads from .ads_excursion import check_ads
register_plugin(check_ads) register_plugin(check_ads)
from .sei_bs_trans import check_sei_bs from .sei_bs_trans import check_sei_bs
register_plugin(check_sei_bs) register_plugin(check_sei_bs)
from .fss_oscillation import check_fss from .soft_limiters import check_soft_limiters
register_plugin(check_fss) register_plugin(check_soft_limiters)
from .omc_dcpd import check_omc_dcpd
register_plugin(check_omc_dcpd)
# add last since this needs to wait for additional data # Add the following at the end because they need to wait for additional data
from .seismic import check_seismic from .seismic import check_seismic
register_plugin(check_seismic) register_plugin(check_seismic)
from .history import find_previous_state
register_plugin(find_previous_state)
\ No newline at end of file
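# Roughly how this registry is consumed downstream (a hedged sketch, not
# the actual analyze code): plugins run in registration order, which is
# why the ones that need to wait for additional data are added last.
def _run_plugins(event):
    for name, func in PLUGINS.items():
        func(event)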
import sys import argparse
import logging
import importlib
from .. import set_signal_handlers from .. import set_signal_handlers
from .. import config from .. import config_logger
from ..event import LocklossEvent from ..event import LocklossEvent
from . import PLUGINS
parser = argparse.ArgumentParser()
parser.add_argument(
'plugin', nargs='?',
)
parser.add_argument(
'event', nargs='?',
)
def main(): def main():
set_signal_handlers() set_signal_handlers()
logging.basicConfig( config_logger()
level='DEBUG', args = parser.parse_args()
format=config.LOG_FMT, if not args.plugin:
) for name, func in PLUGINS.items():
mpath = sys.argv[1].split('.') print(name)
event = LocklossEvent(sys.argv[2]) return
mod = importlib.import_module('.'+'.'.join(mpath[:-1]), __package__) if not args.event:
func = mod.__dict__[mpath[-1]] parser.error("must specify event")
try:
func = PLUGINS[args.plugin]
except KeyError:
parser.error(f"unknown plugin: {args.plugin}")
event = LocklossEvent(args.event)
func(event) func(event)
......
import logging
import numpy as np import numpy as np
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
from gwpy.segments import Segment from gwpy.segments import Segment
from .. import logger
from .. import config from .. import config
from .. import data from .. import data
from .. import plotutils from .. import plotutils
################################################# #################################################
def check_ads(event): def check_ads(event):
"""Checks for ADS channels above threshold. """Checks for ADS channels above threshold.
...@@ -18,8 +20,8 @@ def check_ads(event): ...@@ -18,8 +20,8 @@ def check_ads(event):
""" """
if event.transition_index[0] != config.GRD_NOMINAL_STATE[0]: if event.transition_index[0] < config.ADS_GRD_STATE[0]:
logging.info('lockloss not from nominal low noise') logger.info('lockloss not from state using ADS injections')
return return
plotutils.set_rcparams() plotutils.set_rcparams()
...@@ -45,9 +47,9 @@ def check_ads(event): ...@@ -45,9 +47,9 @@ def check_ads(event):
if saturating: if saturating:
event.add_tag('ADS_EXCURSION') event.add_tag('ADS_EXCURSION')
else: else:
logging.info('no ADS excursion detected') logger.info('no ADS excursion detected')
fig, ax = plt.subplots(1, figsize=(22,16)) fig, ax = plt.subplots(1, figsize=(22, 16))
for idx, buf in enumerate(ads_channels): for idx, buf in enumerate(ads_channels):
srate = buf.sample_rate srate = buf.sample_rate
t = np.arange(segment[0], segment[1], 1/srate) t = np.arange(segment[0], segment[1], 1/srate)
......
import logging
import numpy as np import numpy as np
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
from gwpy.segments import Segment from gwpy.segments import Segment
from .. import logger
from .. import config from .. import config
from .. import data from .. import data
from .. import plotutils from .. import plotutils
############################################## ##############################################
def check_boards(event): def check_boards(event):
"""Checks for analog board saturations. """Checks for analog board saturations.
...@@ -28,7 +30,7 @@ def check_boards(event): ...@@ -28,7 +30,7 @@ def check_boards(event):
for buf in board_channels: for buf in board_channels:
srate = buf.sample_rate srate = buf.sample_rate
t = np.arange(segment[0], segment[1], 1/srate) t = np.arange(segment[0], segment[1], 1/srate)
before_loss = buf.data[np.where(t<event.gps-config.BOARD_SAT_BUFFER)] before_loss = buf.data[np.where(t < event.gps-config.BOARD_SAT_BUFFER)]
if any(abs(before_loss) >= config.BOARD_SAT_THRESH): if any(abs(before_loss) >= config.BOARD_SAT_THRESH):
saturating = True saturating = True
glitch_idx = np.where(abs(buf.data) > config.BOARD_SAT_THRESH)[0][0] glitch_idx = np.where(abs(buf.data) > config.BOARD_SAT_THRESH)[0][0]
...@@ -38,9 +40,9 @@ def check_boards(event): ...@@ -38,9 +40,9 @@ def check_boards(event):
if saturating: if saturating:
event.add_tag('BOARD_SAT') event.add_tag('BOARD_SAT')
else: else:
logging.info('no saturating analog boards') logger.info('no saturating analog boards')
fig, ax = plt.subplots(1, figsize=(22,16)) fig, ax = plt.subplots(1, figsize=(22, 16))
for idx, buf in enumerate(board_channels): for idx, buf in enumerate(board_channels):
srate = buf.sample_rate srate = buf.sample_rate
t = np.arange(segment[0], segment[1], 1/srate) t = np.arange(segment[0], segment[1], 1/srate)
......
import logging
import numpy as np import numpy as np
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
from gwpy.segments import Segment from gwpy.segments import Segment
from .. import logger
from .. import config from .. import config
from .. import data from .. import data
from .. import plotutils from .. import plotutils
################################################# #################################################
CHANNEL_ENDINGS = [ CHANNEL_ENDINGS = [
'100M_300M', '100M_300M',
'300M_1', '300M_1',
...@@ -67,8 +69,10 @@ THRESHOLD = 15 ...@@ -67,8 +69,10 @@ THRESHOLD = 15
SEARCH_WINDOW = [-30, 5] SEARCH_WINDOW = [-30, 5]
################################################# #################################################
def check_brs(event): def check_brs(event):
"""Checks for BRS glitches at both end stations. """Checks for BRS glitches at both end stations.
...@@ -87,7 +91,7 @@ def check_brs(event): ...@@ -87,7 +91,7 @@ def check_brs(event):
try: try:
buf_dict = data.fetch(channels, segment, as_dict=True) buf_dict = data.fetch(channels, segment, as_dict=True)
except ValueError: except ValueError:
logging.warning('BRS info not available for {}'.format(station)) logger.warning('BRS info not available for {}'.format(station))
continue continue
# check for proper state # check for proper state
...@@ -95,7 +99,7 @@ def check_brs(event): ...@@ -95,7 +99,7 @@ def check_brs(event):
t = state_buf.tarray t = state_buf.tarray
state = state_buf.data[np.argmin(np.absolute(t-event.gps))] state = state_buf.data[np.argmin(np.absolute(t-event.gps))]
if state in params['skip_states']: if state in params['skip_states']:
logging.info('{} not using sensor correction during lockloss'.format(station)) logger.info('{} not using sensor correction during lockloss'.format(station))
continue continue
del buf_dict[params['state_chan']] del buf_dict[params['state_chan']]
...@@ -105,13 +109,13 @@ def check_brs(event): ...@@ -105,13 +109,13 @@ def check_brs(event):
for channel, buf in buf_dict.items(): for channel, buf in buf_dict.items():
max_brs = max([max_brs, max(buf.data)]) max_brs = max([max_brs, max(buf.data)])
if any(buf.data > THRESHOLD): if any(buf.data > THRESHOLD):
logging.info('BRS GLITCH DETECTED in {}'.format(channel)) logger.info('BRS GLITCH DETECTED in {}'.format(channel))
event.add_tag('BRS_GLITCH') event.add_tag('BRS_GLITCH')
glitch_idx = np.where(buf.data > THRESHOLD)[0][0] glitch_idx = np.where(buf.data > THRESHOLD)[0][0]
glitch_time = buf.tarray[glitch_idx] glitch_time = buf.tarray[glitch_idx]
thresh_crossing = min(glitch_time, thresh_crossing) thresh_crossing = min(glitch_time, thresh_crossing)
fig, ax = plt.subplots(1, figsize=(22,16)) fig, ax = plt.subplots(1, figsize=(22, 16))
for channel, buf in buf_dict.items(): for channel, buf in buf_dict.items():
t = buf.tarray t = buf.tarray
ax.plot( ax.plot(
...@@ -134,7 +138,7 @@ def check_brs(event): ...@@ -134,7 +138,7 @@ def check_brs(event):
ax.grid() ax.grid()
ax.set_xlabel('Time [s] since lock loss at {}'.format(event.gps), labelpad=10) ax.set_xlabel('Time [s] since lock loss at {}'.format(event.gps), labelpad=10)
ax.set_ylabel('RMS Velocity [nrad/s]') ax.set_ylabel('RMS Velocity [nrad/s]')
ax.set_ylim(0, max_brs+1) # ax.set_ylim(0, max_brs+1)
ax.set_xlim(t[0]-event.gps, t[-1]-event.gps) ax.set_xlim(t[0]-event.gps, t[-1]-event.gps)
ax.legend(loc='best') ax.legend(loc='best')
ax.set_title('{} BRS BLRMS'.format(station), y=1.04) ax.set_title('{} BRS BLRMS'.format(station), y=1.04)
......
import numpy as np
import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeries
from .. import logger
from .. import config
from .. import plotutils
DARM_CHANNEL = f'{config.IFO}:GDS-CALIB_STRAIN_CLEAN'
##############################################
def plot_darm(event):
"""Grabs DARM data from 8 minutes before LL and right before
LL and plots them
"""
# Only create DARM plot if we got a LL from NLN or above
if event.transition_index[0] < config.GRD_NOMINAL_STATE[0]:
logger.info('IFO not fully locked, DARM plot will not be created.')
return
plotutils.set_rcparams()
plt.rcParams['figure.figsize'] = (66, 24)
logger.info('Plotting DARM 8 minutes vs right before lockloss')
# Before time: between 8 and 9 minutes before LL
time_before = [int(event.gps) - (9*60), int(event.gps) - (8*60)]
# Right before LL time: 61 seconds to 1 second before LL
time_before_LL = [int(event.gps) - 61, int(event.gps) - 1]
# Grab the data
darm_before = TimeSeries.get(
DARM_CHANNEL,
time_before[0],
time_before[1],
frametype=f'{config.IFO}_HOFT_C00',
nproc=8,
verbose=True
)
darm_before_LL = TimeSeries.get(
DARM_CHANNEL,
time_before_LL[0],
time_before_LL[1],
frametype=f'{config.IFO}_HOFT_C00',
nproc=8,
verbose=True
)
# Calculate the Strain PSD
fft = 16
psd_before = darm_before.psd(fft, fft/2.).crop(10, 5000)
psd_before_LL = darm_before_LL.psd(fft, fft/2.).crop(10, 5000)
# Plot DARM comparison
plt.loglog(
np.sqrt(psd_before),
linewidth=1,
color='lightsteelblue',
label='8 min before lockloss'
)
plt.loglog(
np.sqrt(psd_before_LL),
linewidth=1,
color='red',
label='Right before lockloss'
)
plt.xlim(10, 5000)
plt.xlabel('Frequency [Hz]')
plt.ylabel(u'ASD [1/\u221AHz]')
plt.title(f'DARM - {DARM_CHANNEL}')
plt.suptitle(
'Note: not helpful if not in NLN 9 mins before lockloss',
size=35
)
plt.grid(which='both')
plt.legend()
plt.tight_layout()
outfile_plot = 'darm.png'
outpath_plot = event.path(outfile_plot)
plt.savefig(outpath_plot, bbox_inches='tight')
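# Aside (hedged): gwpy can return the amplitude spectral density
# directly, so the np.sqrt(psd) step above could also be written as:
def _asd(ts, fftlen=16):
    # equivalent to np.sqrt(ts.psd(fftlen, fftlen/2.)), cropped to the
    # same 10-5000 Hz band used in the plot
    return ts.asd(fftlen, fftlen/2.).crop(10, 5000)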