Commit 0fa74724 authored by Jameson Graef Rollins

update README

parent 77d90a5b
@@ -103,23 +103,22 @@ You should now be able to see the web page at the appropriate URL,
e.g. https://ldas-jobs.ligo-la.caltech.edu/~albert.einstein/lockloss
# making merge requests
All contributions to `locklost` are handled via [GitLab merge
request](https://docs.gitlab.com/ee/user/project/merge_requests/).
The basic process for making a merge request is as follows (this
assumes you have forked the project as specified above, and cloned the
repository from this fork):
0. Check out the master branch and make sure it's up-to-date:
```shell
$ cd ~/path/to/source/locklost
$ git checkout master
$ git pull --rebase origin master
```
1. Create a new branch in your local checkout for your proposed changes:
```shell
$ git checkout -B my-dev-branch
```
@@ -129,15 +128,16 @@ $ git checkout -B my-dev-branch
```shell
$ git commit ....
```
Make sure your commit log message is nice and informative, starting
with a single-line summary followed by a paragraph describing the
changes. If you're making a commit that fixes a particular bug, you
can add a line like "closes #XX" at the bottom of the commit message
and the corresponding issue will be automatically closed when the
patch is merged. See [automatic issue
closing](https://docs.gitlab.com/ee/user/project/issues/automatic_issue_closing.html).
An example commit message is sketched below, after this list.
3. When you've got all changes ready to go, push your development
branch to your fork of the project (assuming you cloned from your
fork of the project as specified above):
```shell
$ git push origin my-dev-branch
```
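As a footnote to step 2 above, a commit message in the style described
might look like the following (the summary line, body text, and issue
number are all made up for illustration):

```shell
# hypothetical example; the change it describes is invented
$ git commit -m "search: handle gaps in raw frame data

Previously the search would crash if frame files for part of the
requested time range were missing.  Skip the missing stretches and
log a warning instead.

closes #42"
```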
`locklost`: aLIGO IFO lock loss tracking and analysis
========================================================
This package provides a set of tools for analyzing LIGO detector "lock
@@ -18,26 +18,25 @@ and condor jobs to `search` for and `analyze` events.
# Deployments
Each site (LHO and LLO) has a dedicated "lockloss" account on their
local LDAS cluster where `locklost` is running:
* https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/
* https://ldas-jobs.ligo-la.caltech.edu/~lockloss/
These accounts have deployments of the `locklost` package and command
line interface, run the production online and followup condor jobs,
and host the web pages.
## deploying new versions
When a new version is ready for release, create an annotated tag for
the release and push it to the main repo
(https://git.ligo.org/jameson.rollins/locklost):
```shell
$ git tag -m "release" 0.16
$ git push --tags
```
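If you want to confirm that the tag actually reached the remote (an
optional sanity check, not part of the original instructions), you can
list the remote's tags:

```shell
# list tags on the main repo and check that the new release is present
$ git ls-remote --tags origin
```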
In the LDAS lockloss account, pull the new tag and run
the test/deploy script:
```shell
$ ssh lockloss@detchar.ligo-la.caltech.edu
```
@@ -64,24 +63,40 @@ This launches a condor job that runs the online analysis.
To launch a condor job to search for lock losses within some time
window:
```shell
$ locklost search --condor START END
```
This will find lock losses within the specified time range, but will
not run the follow-up analyses. This is primarily needed to backfill
times when the online analysis was not running (see below).
Any time argument ('START', 'END', 'TIME', etc.) can be either a GPS
time or a full (even relative) date/time string, e.g. '1 week ago'.
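For instance, assuming the date/time parsing behaves as described, the
following two searches should be roughly equivalent (the GPS values
are made up, one week apart, for illustration):

```shell
# explicit GPS times
$ locklost search --condor 1238544018 1239148818
# a similar one-week range, as relative date/time strings
$ locklost search --condor '2 weeks ago' '1 week ago'
```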
To run a full analysis on a specific lock loss time found from the
search above:
```shell
$ locklost analyze TIME
```
To launch a condor job to analyze all un-analyzed events within a time
range:
```shell
$ locklost analyze --condor START END
```
To re-analyze events, add the `--rerun` flag, e.g.:
```shell
$ locklost analyze TIME --rerun
```
or
```shell
$ locklost analyze --condor START END --rerun
```
It has happened that analysis jobs are improperly killed by condor,
not giving them a chance to clean up their run locks. The site
locklost deployments include a command line utility to find and remove
any old, stale analysis locks:
```shell
$ find-locks -r
```
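Putting these pieces together, a typical backfill after an online
outage might look like the following sketch (the time range is
illustrative, and the `find-locks -r` step is only needed if jobs were
killed uncleanly):

```shell
# clear any stale analysis locks left behind by killed jobs
$ find-locks -r
# find lock losses missed while the online analysis was down
$ locklost search --condor '1 month ago' '1 day ago'
# analyze any events not yet analyzed in that range
$ locklost analyze --condor '1 month ago' '1 day ago'
```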
@@ -110,46 +125,6 @@ of other analyses. The output from any analysis (e.g. plots or data)
should be written into the event directory.
# Developing and contributing
See the [CONTRIBUTING](CONTRIBUTING.md) for instructions on how to
contribute.