.. GraceDB operational (admin) tasks
Operational Tasks
=================
Contents:

.. toctree::
   :maxdepth: 2

   server_maintenance
   new_pipeline
   user_permissions
   robot_certificate
   phone_alerts
   lvalert_management
   sql_tips
   miscellaneous
.. _phone_alerts:
========================
Phone alerts with Twilio
========================
*Last updated 1 December 2016*
Introduction
============
Phone and SMS alerts are currently implemented in GraceDB via `Twilio <https://www.twilio.com/>`__, a programmable communications provider used by both big and small organizations.
For our purposes, Twilio is used to make phone calls with synthesized voice messages, and to send SMS messages to phones (although many people can already get SMS messages using the existing GraceDB email alert feature, by using the email address that their service provider bridges to SMS).
Leo Singer is the local expert on Twilio, having used it with iPTF.
Similarly to e-mail alerts, the user can specify a variety of scenarios in which they would like to receive an alert: when a particular pipeline creates a new event, when a new label is applied, when an event with a given FAR is identified, and so on.
The GraceDB server sends the information to the Twilio server, which then contacts the recipient of the alert through a phone call and/or an SMS message.
Setting up Twilio for the LVC
=============================
*Notes here are from Peter Shawhan and Leo Singer on setting up a Twilio account.*
Peter Shawhan has some flexible funds which should be sufficient to pay for the LVC's usage of Twilio for at least the next couple of years, and possibly longer.
So, Peter created a new Twilio account, giving "LIGO Scientific Collaboration" for the company/organization name and ``peter.shawhan@ligo.org`` as the login email address.
When signing up at https://www.twilio.com, Peter indicated that our first product interest is "Voice" and our application is "Voice alerts" using Python, but that was probably just collecting information for marketing.
The Twilio console is reasonably intuitive.
With guidance from Leo, Peter did some of the "Getting started" steps as well as other configuration tasks:
- Ordered a Twilio phone number. All automated calls and SMS messages will originate from this number. We got to select the phone number, and LVC members will appreciate the number's similarity to the name of our first detected event. :)
- Upgraded the account (clicked the "Upgrade" link which was near the upper right corner of the window) to make it a paid account rather than a trial account.
- Set up a payment method (Peter's credit card, $20 initial deposit, with auto-recharge).
- Added other users to the account (Home --> Account --> Manage Users): for now, Tanner Prestegard and Leo Singer, both with "Developer" permissions.
- Requested permission to make voice calls to several foreign countries where LVC members are located (Home --> Account --> Account Settings --> Geographic Permissions). These calls cost somewhat more, and our request has to be reviewed by a Twilio representative. We entered the LIGO Lab's Caltech address, website URL, and contact phone number (626-395-2129 - we're guessing that is answered by Julie Hiroto?) for this. SMS messages to all countries seem to be enabled by default.
- Set up two TwiML action templates ((...) --> Developer Center --> TwiML Bins) called "GraceDB create" and "GraceDB label" to queue the actual calls and SMS messages through HTTP POST requests. The URLs generated for these actions have long hash codes and can be looked up in that part of the Twilio console.
- We have not attempted to change the default call execution rate, which is 1 call per second. It is possible to submit a request to increase that rate (Programmable Voice --> Settings --> General --> Calls Per Second) but we won't do that, at least for now.
Twilio configuration for GraceDB
================================
As a GraceDB developer/maintainer, your first step is to get added as a developer to the LSC Twilio account (talk to Peter Shawhan or Leo Singer) - if you don't already have a Twilio account, you'll have to create one.
There are a few main things you'll need from the `Twilio Console <https://www.twilio.com/console>`__:
1. Account SID: visible at the top of the page, under "Account Summary".
2. Auth Token: below Account SID; you'll have to click on the eye icon to reveal it.
You'll also need the TwiML bin URLs, which you can get `here <https://www.twilio.com/console/dev-tools/twiml-bins>`__.
Currently, we have two TwiML bins:
- GraceDB create: used when an event is created.
- GraceDB label: used when an event is assigned a label.
If you click on one of the TwiML bins, you'll see an SID, a URL, and the configuration.
The SID will be needed for GraceDB to POST to this TwiML bin.
Also, the URL is defined by a base URL plus the SID: ``https://handler.twilio.com/twiml/SID``.
The "GraceDB create" TwiML bin configuration is shown here::

    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
        <Say>
            A {{pipeline}} event with Grace DB ID {{graceid}} was created.
        </Say>
        <Sms>A {{pipeline}} event with GraceDB ID {{graceid}} was created.
            https://{{server}}.ligo.org/events/view/{{graceid}}
        </Sms>
    </Response>
The call (``<Say>``) and SMS (``<Sms>``) components are readily apparent.
Currently, it's configured to do both by default, but this may change in the future.
The template attributes are passed as query parameters appended to the TwiML bin URL in the POST request.
Example: ``https://handler.twilio.com/twiml/SID?pipeline=gstlal&graceid=G123456&server=gracedb``.
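As an illustration (this is not the actual server code; ``twiml_bin_url`` is a hypothetical helper name, and ``'SID'`` is a placeholder), such a URL can be assembled with the standard library::

```python
# Hypothetical helper for building a TwiML bin request URL; the real
# logic lives in make_twilio_calls, and 'SID' is a placeholder SID.
from urllib.parse import urlencode

TWIML_BASE_URL = 'https://handler.twilio.com/twiml/'

def twiml_bin_url(bin_sid, **params):
    """Append the TwiML template attributes as query parameters."""
    return TWIML_BASE_URL + bin_sid + '?' + urlencode(params)

url = twiml_bin_url('SID', pipeline='gstlal', graceid='G123456', server='gracedb')
# e.g. https://handler.twilio.com/twiml/SID?pipeline=gstlal&graceid=G123456&server=gracedb
```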
You'll need the Account SID, Auth Token, and TwiML bin SIDs for the next step.
Configuration on GraceDB server
===============================
Most of the relevant code is in ``gracedb/events/alerts.py``, including the following functions:
- ``get_twilio_from``
- ``make_twilio_calls``
- ``issueAlertForLabel``
- ``issueEmailAlert``
There is also some relevant code in ``gracedb/alerts/fields.py``, which defines a ``PhoneNumberField`` for the ``Contact`` model and validates the phone number when a user signs up for this service.
The TwiML bin SIDs are used in ``make_twilio_calls`` to generate the URLs and make the POST request.
These SIDs, along with the Account SID and Auth Token should **NOT** be saved in the git repository.
As a result, they are saved in ``config/settings/secret.py``, which is not part of the git repository, but is created by Puppet and encrypted in the GraceDB eyaml `file <https://git.ligo.org/cgca-computing-team/cgca-config/blob/production/production/hieradata/gracedb.cgca.uwm.edu.eyaml>`__ in the `cgca-config repository <https://git.ligo.org/cgca-computing-team/cgca-config>`__.
If you need to edit this, you'll have to follow the instructions `here <https://git.ligo.org/cgca-computing-team/cgca-config/blob/production/EncryptedYaml.md>`__ for working with eyaml files.
*Note: currently, the code for determining the phone call recipients is coupled with the code that determines e-mail recipients.
This may be the most efficient way of doing it, but we may want to consider separating it in the future for ease of understanding and modularity.*
Finally, phone calls are only made to LVC members at present.
.. _public_gracedb:
=====================================
GraceDB and public triggers
=====================================
In the era of public triggers, certain resources on GraceDB should be available
to users without any authentication. This is because the GCN notices will be
public, and these contain links to GraceDB resources (such as the skymap).
These particular links need to be open, since it would be very impolite to
include links that the recipients cannot access. In my mind, this is the
strongest reason, but there could be others as well.
There are several changes that will be needed in order to open up GraceDB
resources to the public. This outline may not be exhaustive.
Groups and permissions
======================
Un-authenticated users
----------------------
Truly un-authenticated users should only be able to perform ``GET`` requests.
Unfortunately, the Django ``AnonymousUser`` object doesn't help us much,
because this user has no group memberships, and we want to be able to control
access with groups. For example, the filtering of search results depends on a
perm string with group-based permissions.
Thus, we could create a dummy user::

    from django.contrib.auth.models import User, Group

    anon_user = User.objects.create(username='AnonUser')
    anon = Group.objects.create(name='anon')
    anon.user_set.add(anon_user)
And then instead of ``request.user`` being ``None`` for unauthenticated users,
it would be set to ``AnonUser``. That way, ``request.user`` would always be
either a known user or ``AnonUser``.
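The intended fallback can be sketched as follows; this is illustrative only, with plain strings standing in for Django ``User`` objects:

```python
# Illustrative sketch of the proposed fallback; in the real middleware,
# request.user would be set to the AnonUser User object instead of None.
def effective_user(request_user, anon_user='AnonUser'):
    """Return the authenticated user if present, else the shared AnonUser."""
    return request_user if request_user is not None else anon_user
```

With this in place, permission checks can always rely on group membership, since even anonymous requests resolve to a real user in the ``anon`` group.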
Authenticated public users
---------------------------
Interested members of the public who wish to use the web interface to comment
(i.e., a ``POST`` or ``PUT``) or to use the REST API (with basic auth) should
be required to *register* first on gw-astronomy.org. They will need to have a
google account or some institutional login from our identity federation.
This requires a bit of help from the auth team. Basically, we'll need a group
that doesn't require any approval from a liaison, but that is otherwise
similar to the LV-EM group.
The attribute authority should assert a group membership, such as ``public``.
(The name needs to be different from the ``anon`` group above, because we don't
want to allow the anonymous user to make any unsafe requests.)
Actually, here's one issue you'll run into: Google releases an obfuscated
persistent ID. It's basically a long string of numbers instead of the person's
actual name. But a user who leaves a comment on an event may *want* to identify
him/herself.
So I would recommend adding a field in the registration form on
gw-astronomy.org that allows the user to pick a display name. Then configure
the gw-astronomy attribute authority to release this attribute. In the ligoauth
middleware, grab the attribute (if present) and use it to set the ``last_name``
of the user. That way, it will show up in the event log display.
Apache Config
=============
This will be considerably simpler than before. Basically, the idea is that we
will require the shibboleth module to be active on all URL paths in the web
interface. But we won't actually require a ``valid-user`` except for particular
locations, such as the admin docs and the reports page. That way, instead of
having Apache handle the ACLs, more of it will be pushed off onto the app. But
that is actually a good thing::

    Alias /documentation/ "/home/gracedb/gracedb_project/docs/user_docs/build/"
    <Directory "/home/gracedb/gracedb_project/docs/user_docs/build/">
        AuthType shibboleth
        ShibRequestSetting requireSession false
        Require shibboleth
    </Directory>

    Alias /admin_docs/ "/home/gracedb/gracedb_project/docs/admin_docs/build/"
    <Directory "/home/gracedb/gracedb_project/docs/admin_docs/build/">
        AuthType shibboleth
        ShibRequestSetting requireSession 1
        Require user branson.stephens@ligo.org alexander.pace@ligo.org patrick.brady@ligo.org
    </Directory>

    <Location "/">
        AuthType shibboleth
        ShibRequestSetting requireSession false
        Require shibboleth
    </Location>
View logic for manipulating permissions
=======================================
The necessary logic for manipulating permissions already exists. Suppose you
want to expose a particular event to the public. You'll give ``view`` perms to
the ``public`` and ``anon`` groups, and ``change`` perms to ``public`` only.
Unfortunately, the client doesn't have a convenience function for altering
permissions. So you have to create the URL and ``POST`` to it by hand. For
example, here is how to grant ``view`` permissions to ``public``::

    from ligo.gracedb.rest import GraceDb
    from urllib.parse import quote

    g = GraceDb()
    graceid = 'G123456'
    group_name = 'public'
    perm_codename = 'view'
    url = g.service_url + quote('events/%s/%s/%s' % (graceid, group_name, perm_codename))
    r = g.put(url)
Templates
==========
This is where things can get tricky. First, I'll explain where things are now.
At present, there are only two main groups of users that can access GraceDB:
LVC users, and LV-EM MOU partners. The information shown to the MOU partners
differs in 4 ways from that shown to LVC members:
#. the false alarm rate is floored to avoid revealing that we have a gold-plated detection
#. the pipeline-specific attributes are missing
#. the list of event log messages is reduced to EM-relevant info
#. the neighbors list is filtered according to ``view`` permissions
To achieve the first three, ``is_external(request.user)`` is used. In
other words, these restrictions on the event view will be made for any
user who is *not* a member of the LVC. In particular, for (1), we check
whether the user is external before calculating a display FAR to pass into
the template context. For (2), we pass a ``user_is_external`` variable into
the template context.
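The flooring in item (1) can be sketched like this; the variable names and floor value here are hypothetical, so check the server code for the real ``VOEVENT_FAR_FLOOR`` handling:

```python
# Hypothetical illustration of FAR flooring for external users; the
# actual floor value and function names may differ in the server code.
VOEVENT_FAR_FLOOR = 1.0 / (100 * 365.25 * 86400)  # 1 per 100 years, in Hz

def get_display_far(far, user_is_external):
    """Return the FAR to display: floored for external users, exact otherwise."""
    if user_is_external and far is not None:
        return max(far, VOEVENT_FAR_FLOOR)
    return far
```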
For item (3), the ``get`` method on the ``EventLogList`` resource returns an
appropriately filtered list of log messages. In particular, any log entries
will be removed from the list unless they have been deliberately tagged with
the external access tag. This tag is specified in the settings module::

    EXTERNAL_ACCESS_TAGNAME = 'lvem'
The name of this tag is a bit unfortunate (since it is too specific--in the
future, not all external users will be members of LV-EM). But it could be
changed in the future.
The final item on the list above--the filtering of neighbors--is done in the
same way as the filtering of search results, so this needs no modification.
So what happens when we enter the public era? Well, if the general public
should be given access to the *same* information as the LV-EM group, then it will
be easy. In fact, nothing would need to be done.
However, I don't see that scenario as being likely, because the LV-EM group
members share information with the LVC and each other inside a private bubble.
They may not want to share information about observation coordinates with
members of the general public. Thus, we'll probably want to have an additional
tagname::

    PUBLIC_ACCESS_TAGNAME = 'public'
And then individual log entries can be released to the public by tagging
them with this tag. An extra filter would need to be added in ``api.py``::

    @event_and_auth_required
    def get(self, request, event):
        logset = event.eventlog_set.order_by("created", "N")
        # Filter log messages for external users.
        if is_external(request.user):
            logset = logset.filter(tags__name=settings.EXTERNAL_ACCESS_TAGNAME)
        if is_public(request.user):
            logset = logset.filter(tags__name=settings.PUBLIC_ACCESS_TAGNAME)
And you'll need an ``is_public`` utility function as well. Everything else can
be done in the same way as for external users.
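A minimal sketch of such an ``is_public`` helper (hypothetical; the real implementation would query the user's Django group memberships, e.g. ``user.groups.filter(name='public').exists()``):

```python
# Hypothetical is_public helper; duck-types on a simple group_names
# attribute for illustration rather than a Django related manager.
def is_public(user):
    """True if the user belongs to the 'public' auth group."""
    return 'public' in getattr(user, 'group_names', ())
```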
Another potential issue that could arise: It's possible that the LV-EM MOU
group and the general public would have different FAR floors. In other words,
we might want to limit the LV-EM fars to one per 100 years, but limit the
public FARs to 1 per 10 years. If this is the case, you'll want to look for
instances of ``VOEVENT_FAR_FLOOR`` in the code and modify accordingly.
.. _robot_certificate:
================================
Creating a robot account
================================
General information on robot accounts
=====================================
The flagship data analysis pipelines are usually operated by groups of
users. Thus, it doesn't make much sense for the events to be submitted to GraceDB
via a single user's account. This is also impractical, as an individual
user's auth tokens expire often, but the pipeline process needs to be running
all the time.
Thus, some kind of persistent auth token is required, and this should be
attached to a GraceDB account that is *not* an individual user's personal
account. That way, a user running the data analysis pipeline can also
comment on the event as him or herself, and the provenance information is
clear and consistent. Robot accounts are thus, in effect, shared accounts.
Robot authentication
====================
At present, most robots authenticate to GraceDB using x509 certificates.
Users are discouraged from moving cert/key pairs around from machine to machine,
so the usual recommendation is to ask that the person in charge of the robotic
process(es) obtain a cert/key pair for each computing cluster as needed.
The Common Names will hopefully follow a sensible pattern::

    RobotName/ldas-pcdev1.ligo.caltech.edu
    RobotName/ldas-pcdev1.ligo-la.caltech.edu
    RobotName/ldas-pcdev1.ligo-wa.caltech.edu
    ...
Instructions for obtaining LIGO CA robot certificates can be found
`here <https://wiki.ligo.org/AuthProject/LIGOCARobotCertificate>`__.

.. NOTE::
   LIGO CA robot certs expire after 1 year. The best way of "renewing"
   is to generate a new Certificate Signing Request (CSR) with the old key, and
   send that CSR to ``rt-auth``::

      openssl x509 -x509toreq -in currentcert.pem -out robot_cert_req.pem -signkey currentkey.pem

.. NOTE::
   Neither Apache nor the GraceDB app will check that the domain name
   in the user's cert DN resolves to the IP from which the user is connecting.
   (This is in contrast with the latest Globus tools, which do perform this
   check.) Thus, a user may connect from ``ldas-pcdev2`` at CIT, even if the CN
   in the cert is ``RobotName/ldas-pcdev1.ligo.caltech.edu``.
Once the user has obtained the certificates, ask him/her to send you the output
of::

    openssl x509 -subject -noout -in /path/to/robot_cert_file.pem
That way you will know the subject(s) to link with the robotic user when
you create it.
In the future, it is hoped that robots will authenticate using Shibboleth
rather than x509. The user would request a robotic keytab, and this robotic
user would have the correct group memberships in the LDAP. This would allow
you to eliminate the x509 authentication path in GraceDB altogether. See the
sketch at :ref:`shibbolized_client`.
Creating the robot user
=======================
These same steps could all be done by hand using the Django console.
However, using a migration is encouraged since there is more of a paper trail
that way. See the Django docs on data migrations.
Create an empty data migration::

    python manage.py makemigrations --empty ligoauth

Rename the resulting file to something sane::

    cd ligoauth/migrations
    mv 0004_auto_20160229_1541.py 0004_add_robot_RobotName.py
Edit the migration to do what you want it to do. You could use this as a template::

    # -*- coding: utf-8 -*-
    from __future__ import unicode_literals

    from django.db import migrations, models
    from django.conf import settings

    ROBOTS = [
        {
            'username': 'NewRobot',
            'first_name': '',
            'last_name': 'My New Robot',  # Note that the last_name acts as a display name
            'email': 'albert.einstein@ligo.org',
            'dns': [
                "/DC=org/DC=ligo/O=LIGO/OU=Services/CN=NewRobot/ldas-pcdev1.ligo.caltech.edu",
                "/DC=org/DC=ligo/O=LIGO/OU=Services/CN=NewRobot/ldas-pcdev1.ligo-la.caltech.edu",
                "/DC=org/DC=ligo/O=LIGO/OU=Services/CN=NewRobot/ldas-pcdev1.ligo-wa.caltech.edu",
            ]
        },
    ]

    def create_robots(apps, schema_editor):
        User = apps.get_model('auth', 'User')
        X509Cert = apps.get_model('ligoauth', 'X509Cert')
        AuthGroup = apps.get_model('ligoauth', 'AuthGroup')
        lvc_group = AuthGroup.objects.get(name=settings.LVC_GROUP)
        robot_group = AuthGroup.objects.get(name='robot_accounts')

        for entry in ROBOTS:
            user, created = User.objects.get_or_create(username=entry['username'])
            if created:
                user.first_name = entry['first_name']
                user.last_name = entry['last_name']
                user.email = entry['email']
                user.is_active = True
                user.is_staff = False
                user.is_superuser = False
                user.save()

            # Create the cert objects and link them to our user.
            for dn in entry['dns']:
                cert, created = X509Cert.objects.get_or_create(subject=dn,
                    user=user)

            # Add our user to the LVC group. This permission is required to
            # do most things, but may *NOT* always be appropriate. It may
            # also be necessary to give the robotic user permission to populate
            # a particular pipeline.
            lvc_group.user_set.add(user)

            # Add user to robot accounts group.
            robot_group.user_set.add(user)

    def delete_robots(apps, schema_editor):
        User = apps.get_model('auth', 'User')
        X509Cert = apps.get_model('ligoauth', 'X509Cert')
        for entry in ROBOTS:
            for dn in entry['dns']:
                X509Cert.objects.get(subject=dn).delete()
            User.objects.get(username=entry['username']).delete()

    class Migration(migrations.Migration):

        dependencies = [
            ('ligoauth', '0003_auto_20150819_1201'),
        ]

        operations = [
            migrations.RunPython(create_robots, delete_robots),
        ]
The above could definitely be refactored in some nice way.
I'll leave that as an exercise for the reader :-).
Now apply the migration::

    python manage.py migrate ligoauth
.. _server_maintenance:
=============================
GraceDB server maintenance
=============================
*Last updated 24 July 2017*
This section documents procedures for performing server maintenance and upgrading the production server code.
Routine maintenance
===================
A few days in advance, send an e-mail to the DASWG and ldg-announce mailing lists, detailing the date, time, and expected duration of the maintenance.
You should upgrade the system packages on at least a bi-weekly basis.
Do the following on a test server first, then if everything seems OK, repeat for the production server:
.. code-block:: bash

    # Get package updates
    apt-get update
    # CHECK which packages will be upgraded, removed, etc.
    apt-get upgrade -s
    # Install upgrades, if they look OK
    apt-get upgrade
    apt-get dist-upgrade
Then, reboot the server, run the server code unit tests (if available), and run the client code unit tests, pointing to the server you just upgraded.
When doing updates on the production server, make sure to take a snapshot of the VM first, in case something goes wrong.
Server code upgrades
====================
First, develop and test your new features on one of the test servers.
SCCB approval
-------------
Once you are ready to move the new features into production, you'll need to get approval from the SCCB (at least, during observational periods).
Create a new issue on the `SCCB project page <https://bugs.ligo.org/redmine/projects/sccb>`__.
The title should be something like "GraceDB server code update (1.0.10), 4 July 2017".
Note that you should submit these requests by Thursday at the latest if you want to implement the changes during maintenance the next Tuesday.
In the issue, you should describe the features/changes/bugfixes you've implemented, along with why they are necessary and what you've done to test them.
It's really helpful if you can get someone else with GraceDB experience to look over them in advance and approve them, too; this goes a long way with the SCCB reviewers.
The description should also include a link to a diff of the code changes from the server code git repository - this can be master vs. the old tag or a new tag vs. the old tag, depending on if you tag the code in advance.
I often wait to tag the new code until it's fully in place on the production server, in case any small changes become necessary due to suggestions from the reviewers.
Leave the status as 'New' and set the category to 'Requested'.
After approval, an SCCB member will change the category to 'Approved' and the status to 'In Progress'.
After you've successfully updated the server code (see next subsection), post a short note, including a permanent diff between the old tag and the new tag (if you didn't already), and change the status to 'Closed' and the category to 'Production'.
Updating the production server
------------------------------
First, take a snapshot of the VM in case you somehow catastrophically break something.
It usually works best to shut the VM down while you do this.
After booting up again, I usually turn off Apache first to prevent users from submitting anything::

    # As root
    systemctl stop apache2
Next, pull the changes into the local master branch:
.. code-block:: bash

    git checkout master
    git pull
If you have any database migrations to run, first back up the database (see :ref:`sql_tips`).
Then, run the migrations (as the gracedb user):
.. code-block:: bash

    source ~/djangoenv
    # Show all migrations
    # Those not marked with an 'X' have not been performed
    python ~/gracedb/manage.py showmigrations
    # Example migration
    python ~/gracedb/manage.py migrate gracedb 0021
If you haven't tagged this version of the code already, do that and push the tag:
.. code-block:: bash

    git tag -a gracedb-1.0.11 -m "tag notes"
    git push --tags
Check out the tag (the production server should **always** be on a tag):
.. code-block:: bash

    git checkout gracedb-1.0.11
Build this documentation (if affected by the patch):
.. code-block:: bash

    sphinx-build -b html ~/gracedb/admin_doc/source/ ~/gracedb/admin_doc/build/
At this point, you can run the client code unit tests (pointing to the production server), since they only create Test events.
It always makes me a bit nervous to do this on the production server, so you can do some manual tests instead, like creating Test events, annotating with log messages, etc.
Basically, do whatever it takes for you to feel confident that the changes are in place and the server is working properly.
Send an all-clear e-mail to the DASWG and ldg-announce mailing lists once everything is ready to go.
Memory management
=================
To get an idea for how much memory is currently being used, you can run the same plugin as nagios does for the memory check:
.. code-block:: bash

    /usr/lib/nagios/plugins/check_memory -f -C -w 10 -c 5
To combat an issue with wsgi_daemon threads persisting and consuming memory, we've set up a monit instance on the production server which monitors the memory usage and kills all wsgi_daemon processes running longer than 12 hours if the memory usage goes over 70%.
If this doesn't work for some reason, you can:
* Run the script manually:
- Log in as root
- Run ``/root/kill_wsgi.sh``
* Kill the processes manually (not very nice, may result in dropped connections for users):
- Run ``sudo kill -s KILL $(pgrep -u wsgi_daemon)``
The aforementioned issue may be resolved in the near future.
Clearing file space
===================
Sometimes the GraceDB server's file system gets full for one reason or another.
A few common culprits are the ``apt`` logs and the Shibboleth cache.
To clean up ``apt`` logs (``/var/cache/apt``):
.. code-block:: bash

    apt-get clean
Shibboleth seems to accumulate many MB of JSON cache files in ``/var/cache/shibboleth`` on a daily basis.
We have currently set up a cron job under the root user to clear files older than a day from this cache.
This may be fixed in future versions of Shibboleth.
To remove these files manually, you can do::

    find /var/cache/shibboleth -type f -name '*.json' -mtime +1 -delete
To find directories which are using significant amounts of space, you can do something like:
.. code-block:: bash

    sudo -i
    cd /
    du -h --max-depth=1
and iterate from there to identify large subdirectories.
Tips for emergencies
====================
1. Announce emergency maintenance, especially if downtime is expected. You'll probably want to send this announcement to DASWG, ldg-announce, and possibly DAC and the search groups.
2. Take a VM snapshot, if possible.
Stuck/overloaded server
-----------------------
If the server is "stuck", you might need to:
* Restart Apache: ``systemctl restart apache2``
* Restart Shibboleth: ``systemctl restart shibd``
* Free up some memory: see `Memory management`_.
* Clear up some space on the file system: see `Clearing file space`_.
* Reboot the entire server.
Server code bugfixes
--------------------
If at all possible, you'll want to do your testing/debugging on a test server.
To do this, you might need to copy the production database over to a test server.
See :ref:`sql_tips` for how to do this.
A few things that you may want to do after copying the database, but before beginning your debugging:
* Turn off phone alerts! Obviously the Contact and Notification instances are part of the database you just copied, and they will trigger alerts (and annoy people) if you submit events for testing. The easiest solution is probably to just delete all of the Contact and/or Notification objects (in the copied database) through the Django shell.
* You may want to turn off XMPP alerts just to be safe.
.. _shibbolized_client:
================================
The Shibbolized Client
================================
Goal
====
Eventually, it would be nice to move towards not using any X509-based
authentication. If we were able to use Shibboleth only, that would considerably
simplify the auth infrastructure of GraceDB. It's also nicer in the sense that
all of the necessary information comes through the Shibboleth session. That way
we could get rid of our dependence on the LIGO LDAP as well.
Installation and usage
======================
At present, there is an experimental Shibbolized client that lives on a separate
branch. I recommend installing it in a virtual environment::

    virtualenv --system-site-packages test
    source test/bin/activate
    ecp-cookie-init LIGO.ORG https://versions.ligo.org/git albert.einstein
    git clone https://versions.ligo.org/git/gracedb-client.git
    cd gracedb-client
    git checkout shibbolized_client
    python setup.py install
In order to use the client you will need a Kerberos ticket cache::

    kinit albert.einstein@LIGO.ORG
When you run the initialize method of the client, it uses this ticket cache to
authenticate against the LIGO IdP, and stores the resulting Shibboleth session
in a cookie jar::

    from ligo.gracedb.rest import GraceDb

    g = GraceDb()
    g.initialize()
Now the client is ready to use.
Robots
======
It's possible to obtain LIGO robot keytabs by going to
`robots.ligo.org <https://robots.ligo.org/>`__ and clicking on "Apply for a
shibboleth automaton keytab." Once you have this keytab, you can obtain a
ticket cache by::

    kinit myRobot/robot/my.ligo.host.edu -k -t myrobot.robot.my.ligo.host.edu
where ``myrobot.robot.my.ligo.host.edu`` is the name of the keytab file.
These ticket caches are only valid for 24 hours, so it is handy to put the
``kinit`` command into a cron job. When requesting the keytab, make sure to
specify that the robot should belong to the group ``Communities:LSCVirgoLIGOGroupMembers``.
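For example, a crontab entry for the robot user might look like this (illustrative only; the principal and keytab path are placeholders carried over from the example above):

```shell
# Refresh the robot's Kerberos ticket cache every 12 hours, since the
# tickets are only valid for 24 hours. Principal and path are placeholders.
0 */12 * * * kinit myRobot/robot/my.ligo.host.edu -k -t /path/to/myrobot.robot.my.ligo.host.edu
```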
If the robot is to create GraceDB events, then the robot user will need to be
authorized to do that as described in :ref:`new_pipeline`.
.. _sql_tips:
==============================
Database interactions with SQL
==============================
*Last updated 11 Aug 2017*
This section gives some tips on using the MySQL interface and some descriptions of currently available tables in the GraceDB and LVAlert databases.
The first step is to log in to the MySQL interface::

    sudo -i
    mysql -u root -p
Then enter the password when prompted.
Collection of tricks
====================

.. NOTE::
   It's not safe to update tables or delete rows manually through the MySQL interface, unless you **REALLY** know what you're doing (due to cross-table dependencies). The safest bet is to do it through the Django management shell.
Viewing data
------------
* See all databases: ``SHOW DATABASES;``
* Select a database to use: ``USE gracedb;``
* See tables in that database: ``SHOW TABLES;``
* Get column names of a table: ``SHOW COLUMNS FROM events_event;``
* Get number of rows in a table: ``SELECT COUNT(*) FROM events_event;``
* See all data in a table: ``SELECT * FROM events_event;``
* Get only specific columns from a table: ``SELECT id,created FROM events_event;``
* Get a specific row: ``SELECT * FROM events_event WHERE id=2;``. Can use ``!=`` as well.
* Get a set of rows matching a pattern: ``SELECT * FROM auth_user WHERE username LIKE '%albert%';``
* ``%`` is a wildcard.
* Can use ``NOT LIKE`` as well.
* Show a few rows: ``SELECT * FROM auth_user LIMIT Ni,N;`` where ``Ni`` is the starting row and ``N`` is the number of rows to display.
* Get unique entries in a column: ``SELECT DISTINCT(group_id) FROM events_event;``
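The read-only queries above can be tried safely against a scratch SQLite database before running them on the real server; the statements shown here carry over to MySQL unchanged (the toy table below is only a stand-in for ``events_event``):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A toy stand-in for the events_event table.
cur.execute("CREATE TABLE events_event (id INTEGER, group_id INTEGER, created TEXT)")
cur.executemany("INSERT INTO events_event VALUES (?, ?, ?)",
                [(1, 1, "2017-01-01"), (2, 1, "2017-01-02"), (3, 2, "2017-01-03")])

# Number of rows in the table.
count = cur.execute("SELECT COUNT(*) FROM events_event").fetchone()[0]

# A specific row, selecting only certain columns.
row = cur.execute("SELECT id, created FROM events_event WHERE id=2").fetchone()

# Unique entries in a column.
groups = cur.execute("SELECT DISTINCT(group_id) FROM events_event").fetchall()

# LIMIT Ni,N: skip 1 row, return the next 2.
page = cur.execute("SELECT id FROM events_event LIMIT 1,2").fetchall()
```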
Modifying data
--------------
* Delete a row or set of rows: ``DELETE FROM events_event where id<4;``
* Edit rows matching a pattern: ``UPDATE auth_user SET username='new_username' WHERE username LIKE 'albert%';``
More complicated stuff
----------------------
* Set a variable: ``SET @var='created';``
* Show variable value: ``SELECT @var;``
* Get table information: ``SHOW TABLE STATUS WHERE Name='events_eventlog';``
* Changing storage engine for a table: ``ALTER TABLE events_eventlog ENGINE=InnoDB;``
* Check foreign key relationships::
USE information_schema;
SELECT table_name,column_name,referenced_table_name,referenced_column_name FROM key_column_usage;
* Complex ``SELECT`` query with a join::
SELECT event.id, user.username
FROM events_event AS event
LEFT JOIN auth_user AS user ON event.submitter_id=user.id
WHERE event.id<4;
* Complex ``UPDATE`` query with a join (``GROUP`` is a reserved word in MySQL, so use a different alias, such as ``grp``)::
UPDATE events_event AS event
LEFT JOIN events_group AS grp ON event.group_id=grp.id
SET event.far=0
WHERE (grp.name='Stochastic' AND event.far > 0);
Database copying and checking
-----------------------------
Make a copy of the database::
mysqldump -u gracedb -p gracedb > backup.sql
Dump only specific tables::
mysqldump -u root -p gracedb events_event events_eventlog auth_user > backup.sql
Use this command to import a dump file (overwrites any databases or tables which already exist)::
mysql -u gracedb -p gracedb < backup.sql
Check the database for errors::
mysqlcheck -c gracedb -u gracedb -p
.. NOTE::
This won't resolve foreign key issues; for example, if an event is removed but the corresponding event logs are not.
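A manual check for the foreign-key problem mentioned in the note (log rows whose parent event is gone) can be expressed as a ``LEFT JOIN``. Here is a sketch with toy tables in SQLite; the same query shape works against the real MySQL tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy stand-ins for the real tables.
cur.execute("CREATE TABLE events_event (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE events_eventlog (id INTEGER PRIMARY KEY, event_id INTEGER)")
cur.executemany("INSERT INTO events_event VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO events_eventlog VALUES (?, ?)",
                [(10, 1), (11, 2), (12, 2)])

# Simulate the problem: an event is deleted but its logs remain.
cur.execute("DELETE FROM events_event WHERE id=2")

# Orphaned logs are those whose LEFT JOIN finds no parent event.
orphans = cur.execute(
    "SELECT log.id FROM events_eventlog AS log "
    "LEFT JOIN events_event AS event ON log.event_id=event.id "
    "WHERE event.id IS NULL").fetchall()
```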
GraceDB tables
--------------
The following tables should be the same across all GraceDB instances:
* auth_group
* auth_group_permissions
* auth_permission
* auth_user
* auth_user_groups
* auth_user_user_permissions
* django_content_type
* events_group
* events_label
* events_pipeline
* events_search
* guardian_groupobjectpermission
* guardian_userobjectpermission
* ligoauth_alternateemail
* ligoauth_ligoldapuser
* ligoauth_localuser
* ligoauth_x509cert
* ligoauth_x509cert_users
I'm not sure what the following tables do or if they are still in use:
* coinc_definer
* coinc_event
* coinc_event_map
* coinc_inspiral
* django_session
* django_site
* experiment
* experiment_map
* experiment_summary
* events_approval
* ligolwids
* multi_burst (somehow related to coinc_event)
* process
* process_params
* search_summary
* search_summvars
* sngl_inspiral
* time_slide
Notes on LVAlert databases
==========================
* To get the MySQL database password: ask Patrick Brady or the previous GraceDB developer.
* You'll want to do ``USE openfire;`` to select the database after logging into the MySQL interface.
Summary of tables
-----------------
All tables not listed here were found to be empty by Tanner in April 2017.
* Not empty, but probably not useful
* ofID: not sure
* ofPrivacyList: not sure, only contains stuff from Brian Moe
* ofRoster: users, JIDs, nicknames, but only 4 users...
* ofRosterGroups: shows roster groups (what are those?)
* ofUserProp: something to do with the admin user
* ofVCard: not sure
* ofVersion: shows ``openfire`` version
* Possibly useful
* ofOffline: shows messages not received or waiting to be received? Seems out of date.
* ofPresence: shows offline users (?)
* ofPubsubDefaultConf: shows default configuration for users? Only contains info for certain users.
* ofPubsubItem: shows all messages? Seems out of date
* Useful
* ofProperty: properties of ``openfire`` server
* ofPubsubAffiliation: shows affiliations to nodes; the most useful columns are nodeID, jid (the username), and affiliation. Note: affiliation='none' indicates a subscriber to the node.
* ofPubsubNode: shows all nodes
* ofPubsubSubscription: shows node subscriptions
* ofSecurityAuditLog: log of admin actions through the console
* ofUser: list of users
.. _user_permissions:
================================
Managing user permissions
================================
*Last updated 18 Sept 2017*
.. NOTE::
The examples here show how to work with permissions in the Django console.
Please adapt these examples and use a database migration to implement
any changes to permissions.
Background on the permissions infrastructure
============================================
Native Django permissions
-------------------------
I find the Django docs a bit too concise on the subject of Permissions.
So what I'd like to do in this section is to explain how the permissions
infrastructure works in GraceDB in a relatively self-contained way.
Let's start with the native Django ``Permission`` model itself.
Instances of this model correspond to permissions to do specific
things, such as the permission to add an event. The formal ``name`` of
this permission object would be "Can add Event."
They always consist of the modal verb "can" (i.e., "is permitted to"),
an infinitive (i.e., the permitted action), and an object, which is
always a Django model.
For each model, Django automatically creates three permission objects:
one each for the infinitives ``add``, ``change``, and ``delete``.
A full sentence about permissions (including the *subject*), such as
"Albert can add events"
comes about when you associate a ``Permission`` with a ``User``.
This is a many-to-many
relationship (any given user can have many permissions, and any given
permission can be held by many users). In order to answer the
question "Does Albert have permission to add events?", one queries the
database to see if
this relationship between ``User`` and ``Permission`` exists.
The ``Permission`` model itself has a fairly small set of attributes:
the human-friendly ``name`` (mentioned above), the ``content_type``,
and a ``codename``. The ``codename`` consists
of the infinitive and object, lower-cased and separated by an underscore,
such as ``add_event``. This code name makes for a very convenient way of looking up
permissions in the database. The ``content_type`` specifies
exactly which model the permission refers to (e.g., the ``Event`` model
from the ``events`` app).
(The content type entry contains both the model name *and* the app to which
the model belongs because both are necessary to fully specify the model. It
is not uncommon to have the same model name in multiple apps.)
Putting it all together, here's how a permission check could be done
in real life (from inside the Django shell)::
>>> from django.contrib.auth.models import User, Permission
>>> p = Permission.objects.get(codename='add_event')
>>> u = User.objects.get(username='albert.einstein@LIGO.ORG')
>>> if p in u.user_permissions.all():
...: print("Albert can add events!")
The Django ``User`` class has a convenience function ``has_perm`` to
make this easier::
>>> from django.contrib.auth.models import User
>>> u = User.objects.get(username='albert.einstein@LIGO.ORG')
>>> if u.has_perm('events.add_event'):
...: print("Albert can add events!")
Again, notice that the ``has_perm`` function needs the codename to be scoped by
the app to which the model belongs. Both are required to fully specify the model.
Permissions can also be granted to a ``Group`` of users. In practice, this
is the most common way of doing permissions in GraceDB. Thus, individual
users can have a given permission by virtue of one of their
group memberships. Here is an example from real life::
>>> from django.contrib.auth.models import User, Group, Permission
# Retrieve a specific permission, user, and group from the database
>>> p = Permission.objects.get(codename='add_groupobjectpermission')
>>> u = User.objects.get(username='peter.shawhan@LIGO.ORG')
>>> g = Group.objects.get(name='executives')
# Peter is a member of the executives group.
>>> u in g.user_set.all()
True
# The permission is not in Peter's individual user permission set.
>>> p in u.user_permissions.all()
False
# But the permission *is* in the permission set for the executives group.
>>> p in g.permissions.all()
True
# Thus, Peter has permission by virtue of his group membership.
>>> u.has_perm('guardian.add_groupobjectpermission')
True
The significance of the permission used in the example above,
``add_groupobjectpermission``,
will be explained in the next section.
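As a self-contained illustration, the table-level check with both individual and group permissions can be modeled in a few lines of plain Python. This is a toy sketch of the logic, not the actual Django implementation:

```python
# Toy model of Django's table-level permission check: codenames are scoped
# by app label, and a user holds a permission either individually or
# through one of their groups.

class Permission:
    def __init__(self, app_label, codename):
        self.app_label = app_label
        self.codename = codename

    def key(self):
        # e.g. "events.add_event"
        return "%s.%s" % (self.app_label, self.codename)

class Group:
    def __init__(self, name):
        self.name = name
        self.permissions = set()

class User:
    def __init__(self, username):
        self.username = username
        self.user_permissions = set()
        self.groups = set()

    def has_perm(self, perm_key):
        # Union of individual and group permission sets.
        held = set(self.user_permissions)
        for g in self.groups:
            held |= g.permissions
        return perm_key in {p.key() for p in held}

add_event = Permission("events", "add_event")
executives = Group("executives")
executives.permissions.add(add_event)

albert = User("albert.einstein@LIGO.ORG")
albert.groups.add(executives)
# albert now has "events.add_event" via the executives group only.
```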
Custom permissions for GraceDB
------------------------------
To wrap up our discussions of the native Django permissions infrastructure,
we note that Django allows custom permission types
in addition to the default ``add``, ``change``, and ``delete`` permissions.
We have added three custom permissions (i.e. *infinitives*) for GraceDB.
First, it is important to control which users and groups can *view*
event data. Thus, we have added a custom ``view`` permission for the event models::
>>> from django.contrib.auth.models import Permission
>>> perms = Permission.objects.filter(codename__startswith='view')
>>> for p in perms:
...: print(p.codename)
...:
view_coincinspiralevent
view_event
view_grbevent
view_multiburstevent
view_siminspiralevent
A second custom permission arises because we need to control which users may
upload *non-Test* events for a given pipeline. Typically, a single robotic user
and a group of known pipeline developers are given permission to create
non-test events for a given pipeline. This is to prevent accidental contamination
of the event stream with test events. For lack of a better term, the infinitive
chosen for this purpose is "populate", and the model referred to is ``Pipeline``.
Thus the permission's codename is ``populate_pipeline``. Notice that ``add``
would not have worked here, since the user isn't adding a new pipeline, but rather
populating an existing one. (I suppose ``change`` could have been used, but that
would seem a little weird to me too. Adding an event for a pipeline doesn't
change the pipeline itself.)
The final custom permission applies only to GRB events. A trusted set of users
is allowed to edit GRB event information *after* the event has been created in
order to set values for quantities like the redshift and T90, which are not
known at the time of event creation. T90 was the first such attribute added, so
the infinitive chosen for this permission is ``t90``. In other words, a user
is said to *T90* a ``GrbEvent`` when he or she adds or updates these special attributes.
(I realize this is ugly, but I couldn't think of anything better at the time.
It has to be short.) Thus, the codename is ``t90_grbevent``. Again, one might wonder
whether ``change_grbevent`` would have made more sense to use for this purpose.
However, that permission is already being used to check for the ability to
add log messages or observation records to the event.
The row-level extension
-----------------------
The permissions described above apply at the Django model level, or
equivalently, to entire database tables. Thus, a user with the permission
``change_event`` in his or her permission set is able to change *any* entry in
the ``events_event`` table. However, GraceDB requires finer grained access
controls: we need to be able to grant individual users or groups permissions
on *individual objects*, or equivalently, individual rows of the database.
Thus these are sometimes called *row-level* (or object-level) permissions, as opposed to the
usual *table-level* (or model-level) permissions.
We use a third party package,
`django-guardian <https://github.com/django-guardian/django-guardian>`__,
to add support for row-level permissions.
In practice, row-level permissions are the most commonly used type for
GraceDB. (There are a few exceptions, for example the
``t90_grbevent`` permission discussed in the previous section, which is table-level.)
Row-level permissions work as a simple extension of system outlined above.
In order to specify a row-level permission for a user, we will need to
know three pieces of information: 1) the user in question, 2) the permission
being granted, and 3) the particular object for which the user will have
said permission.
Thus, the ``UserObjectPermission`` model has the following attributes: ``user``,
``permission``, ``content_type``, and ``object_pk``. Together, the ``content_type``
and the ``object_pk`` specify the individual object (or database row)
that this permission refers to.
.. NOTE::
In the case of table-level permissions, the association between ``User``
and ``Permission`` was handled with a many-to-many relationship. In this
case, the ``guardian`` package provides an entirely new model instead (the
``UserObjectPermission``), with foreign keys to the user and permission.
This is necessary because of the additional required attributes.
.. NOTE::
One might ask: "Why should we use two separate fields (``content_type`` and
``object_pk``) to specify the object? Why not just have a foreign key to
the object instead?" But using a foreign key field would mean that we need
a different ``User{*}Permission`` model for *each and every* model that we
want to control access to. This is because the specific model is hardwired
into the declaration of a foreign key field. So, instead, the designers of
``guardian`` decided to store the primary key (``object_pk``) and the model
(``content_type``). This way, the ``UserObjectPermission`` model is
completely generic.
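The generic design described in this note can be sketched in plain Python: storing a ``(content_type, object_pk)`` pair instead of a typed foreign key lets a single structure cover every model. The helper names below are hypothetical, chosen only to mirror the concept:

```python
# Toy sketch of guardian's generic row-level permission storage (not the
# real implementation): the target object is identified by its model name
# plus primary key, so one store serves all models.

object_perms = set()

def assign_perm(user, codename, content_type, object_pk):
    """Record that `user` holds `codename` on one specific object."""
    object_perms.add((user, codename, content_type, object_pk))

def has_object_perm(user, codename, content_type, object_pk):
    """Check the permission for that one object only."""
    return (user, codename, content_type, object_pk) in object_perms

# Grant view permission on a single (hypothetical) event row.
assign_perm("albert.einstein@LIGO.ORG", "view_event", "events.event", 42)
```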
As with table-level permissions, row-level permissions can also be applied to
groups. Thus, there is also a ``GroupObjectPermission`` object. It is the same
as the ``UserObjectPermission``, except that there is a foreign key to a
``Group`` rather than a ``User``. The majority of the row-level permission
objects are actually for groups rather than users. In particular, we use group
object ``view`` permissions to expose events to various groups.
Now I can explain the example in the previous section with the
``add_groupobjectpermission`` permission object. A user with this permission
may create new ``GroupObjectPermission`` objects. Thus, he or she will be
authorized to grant ``view`` permission on a particular event to a particular
group of users, such as the LV-EM observers group
(``gw-astronomy:LV-EM:Observers``). Similarly, the
``delete_groupobjectpermission`` permission controls whether a user can
*revoke* the view permissions on an event. Because releasing event information
to non-LVC users is a sensitive matter, only the ``executives`` group is
authorized to do this. Thus, we are using table-level permissions to authorize
the addition and deletion of row-level permissions.
Turtles all the way down (well, not *really*).
On permissions and searching for events
---------------------------------------
One of the main features of GraceDB is the ability to query for events matching
certain criteria. However, we clearly only want events for which the user has
``view`` permission to show up in the search results. Suppose a user has
searched for events from the ``gstlal`` pipeline, and we want to filter the
events according to the user's ``view`` permissions. One way to do this is by
querying the database for each event in the queryset to see if the user has
either an individual or group permission to view the event. There is a
`shortcut <http://django-guardian.readthedocs.org/en/stable/api/guardian.shortcuts.html#get-objects-for-user>`__
provided to do this from the guardian package::
from django.contrib.auth.models import User
from events.models import Event, Pipeline
from guardian.shortcuts import get_objects_for_user
user = User.objects.get(username='albert.einstein@LIGO.ORG')
events = Event.objects.filter(pipeline=Pipeline.objects.get(name='gstlal'))
filtered_events = get_objects_for_user(user, 'events.view_event', events)
However, behind the scenes, this requires creating a complex join query over
several tables, and the process is rather slow. Thus, I added a field to the
base event class itself called ``perms``, which is intended to store a
JSON-serialized list of the group permissions, where each
``GroupObjectPermission`` on the event is represented by a string like ``<group
name>_can_<shortname>`` (where the shortname corresponds to the permission's
infinitive). Thus, to filter a queryset of events, one can do the following::
from django.db.models import Q
from django.contrib.auth.models import User, Group
user = User.objects.get(username='albert.einstein@LIGO.ORG')
shortname = 'view' # Typically, we are filtering for view permissions.
auth_filter = Q()
for group in user.groups.all():
perm_string = '%s_can_%s' % (group.name, shortname)
auth_filter = auth_filter | Q(perms__contains=perm_string)
return events.filter(auth_filter)
This constructs a string for each group the user belongs to, and checks to
see whether that string occurs anywhere within the event's ``perms`` string.
These queries are combined with ``OR`` so that as long as one group has
``view`` permission on an event, that event will be present in the final
filtered queryset used to construct the search results page.
The utility ``filter_events_for_user`` in ``permission_utils.py`` uses
this technique. It improves the speed of a search by about a factor of 10
with respect to the ``get_objects_for_user`` method shown above, based
on some anecdotal testing.
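The string-matching idea can be illustrated outside of Django. In this toy sketch (not the real GraceDB code), each event carries a serialized ``perms`` string, and an event passes the filter if any of the user's groups appears in it with the ``view`` shortname:

```python
# Toy illustration of the perms-string filter. The graceids and group
# names are examples only.
events = [
    {"graceid": "G0001", "perms": '["internal_users_can_view"]'},
    {"graceid": "G0002",
     "perms": '["internal_users_can_view", '
              '"gw-astronomy:LV-EM:Observers_can_view"]'},
]

def filter_events_for_groups(events, group_names, shortname="view"):
    """Keep events whose perms string mentions any of the user's groups."""
    wanted = ["%s_can_%s" % (g, shortname) for g in group_names]
    # OR semantics: one matching group is enough, as with the Q objects above.
    return [e for e in events
            if any(w in e["perms"] for w in wanted)]

# An LV-EM observer sees only the exposed event.
visible = filter_events_for_groups(events, ["gw-astronomy:LV-EM:Observers"])
```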
There is also a method on the ``Event`` object to refresh this permissions
string::
from events.models import Event
e = Event.getByGraceid('G184098')
e.refresh_perms()
This operation is idempotent, so it should always be safe to do this. Sensible
defaults are applied when the event is created, but it is necessary to call
this ``refresh_perms`` method if the permissions are updated (i.e., if the
event is exposed or hidden from the LV-EM group).
Practical examples
==================
Granting permissions to expose events
-------------------------------------
So, practically speaking, how would you give someone permission to expose to
the LV-EM observers group? Or permission to hide an event that has already been
exposed? It's simple: Just add the person to the ``executives`` group::
>>> from django.contrib.auth.models import User, Group
>>> u = User.objects.get(username='albert.einstein@LIGO.ORG')
>>> g = Group.objects.get(name='executives')
>>> g.user_set.add(u)
Granting permission to edit GRB events
--------------------------------------
As mentioned in the discussion above, sometimes the GRB group requests that
a user be enabled to add values like T90 and redshift to GRB events. This
can be done by adding the permission by hand::
>>> from django.contrib.auth.models import User, Permission
>>> p = Permission.objects.get(codename='t90_grbevent')
>>> u = User.objects.get(username='albert.einstein@LIGO.ORG')
>>> u.user_permissions.add(p)
Granting permission to populate a pipeline
------------------------------------------
Permission to populate a pipeline is typically given to individual users,
rather than to groups. Thus, a new ``UserObjectPermission`` needs to be
created. An example of this is shown in :ref:`new_pipeline`, in the section
on server-side changes.
.highlight .err {
border: inherit;
box-sizing: inherit;
}
html, body {
color: black;
background-color: white;
margin: 0;
padding: 0;
}
a.link, a, a.active {
color: #369;
}
h1,h2,h3,h4,h5,h6,#getting_started_steps {
font-family: "Century Schoolbook L", Georgia, serif;
font-weight: bold;
}
h1.docnav {
font-size: 25px;
}
#getting_started_steps li {
font-size: 80%;
margin-bottom: 0.5em;
}
#gracedb-nav-header {
color: black;
font-size: 127%;
background-color: white;
font: x-small "Lucida Grande", "Lucida Sans Unicode", geneva, verdana, sans-serif;
/* width: 757px; */
width: 95%;
margin: 0 auto 0 auto;
border: none;
/* border-left: 1px solid #aaa; */
/* border-right: 1px solid #aaa; */
padding: 10px 10px 0px 10px;
}
/* Nav */
#nav {
margin:0;
padding:0;
background:#eee; /* Nav base color */
float: left;
width: 100%;
font-size: 13px;
border:1px solid #42432d;
border-width:1px 1px;
}
#nav #nav-user
{
color:#000;
background:#eee; /* Nav base color */
padding:4px 20px 4px 20px;
float: right;
width: auto;
text-decoration:none;
font:bold 1em/1em Arial, Helvetica, sans-serif;
text-transform:uppercase;
/* text-shadow: 2px 2px 2px #555; */
}
#nav #nav-login, #nav #nav-logout {
float: right;
border-left: 1px solid black;
}
#nav li {
display:inline;
padding:0;
margin:0;
}
/*
#nav li:first-child a {
border-left:1px solid #42432d;
}
*/
#nav a:link,
#nav a:visited {
color:#000;
background:#eee; /* Nav base color */
/* padding:20px 40px 4px 10px; */
padding:4px 20px 4px 20px;
float: left;
width: auto;
border-right:1px solid #42432d;
text-decoration:none;
font:bold 1em/1em Arial, Helvetica, sans-serif;
text-transform:uppercase;
/* text-shadow: 2px 2px 2px #555; */
}
#nav a:hover {
/* color:#fff; / * Use if bg is dark */
background: #dce2ed; /* Nav hover color */
}
#home #nav-home a,
#public #nav-public a,
#search #nav-search a,
#pipelines #nav-pipelines a,
#alerts #nav-alerts a,
#password #nav-password a,
#doc #nav-doc a,
#other #nav-other a,
#about #nav-about a,
#archive #nav-archive a,
#lab #nav-lab a,
#reviews #nav-reviews a,
#latest #nav-latest a,
#contact #nav-contact a {
background: #a9b0ba; /* Nav selected color */
/* color:#fff; / * Use if bg is dark */
/* text-shadow:none; */
}
#home #nav-home a:hover,
#public #nav-public a,
#search #nav-search a,
#pipelines #nav-pipelines a,
#alerts #nav-alerts a,
#password #nav-password a,
#doc #nav-doc a,
#other #nav-other a,
#about #nav-about a:hover,
#archive #nav-archive a:hover,
#lab #nav-lab a:hover,
#reviews #nav-reviews a:hover,
#latest #nav-latest a:hover,
#contact #nav-contact a:hover {
/* background:#e35a00; */
background: #a9b0ba; /* Nav selected color */
}
#nav a:active {
/* background:#e35a00; */
background: #a9b0ba; /* Nav selected color */
color:#fff;
}
/* The Following is for the subclasses table. */
table.subclasses_main {
width: 100%;
}
td.subclasses_row {
vertical-align: top;
}
.. _auth:
================================
Authentication and Authorization
================================
Authentication methods
========================================
GraceDB supports three different types of authentication methods depending on the entry point:
- **Web interface**: ``gracedb.ligo.org/`` supports Shibboleth with
federated identities.
- **REST API**: The API has a single entry point which can handle the following types of authentication:
- **Shibboleth**
- **SciTokens**
- **X.509**
Unauthenticated, read-only access is also available for both the web interface and the API.
Only a limited set of information is available to unauthenticated users.
GraceDB permissions
=========================================
After a user has successfully authenticated, GraceDB examines the user's
group memberships to determine whether the user is authorized to access
or modify a particular resource. These permissions apply at the level of
individual events. The relevant permissions are ``view`` (which allows
viewing an event) and ``change`` (which allows annotating it). In most cases, LVK
users have both permissions on all events. By contrast, LV-EM members
(and, in the future,
other external users) have permissions only on events that have
been vetted for
distribution through the EM followup alert network. This restriction
is to avoid possible confusion with injections, glitches, pipeline
testing etc. Membership in a special administrators group is
required to alter the
permissions on an event.
Creating new events in a *non-test* group (e.g., CBC or Burst) requires
a special ``populate`` permission on the relevant pipeline object. These
permissions are set at the user (rather than group) level and are
maintained by hand. Send email to the GraceDB maintainer
or the `IGWN Computing Helpdesk <mailto:computing-help@ligo.org>`__
if you need a new pipeline or pipeline permission.
Robot certificates
=====================
Access to the REST API through the X509 entry point requires a valid robot
certificate. (Note: This is only necessary for LVK users. LV-EM users can see
:ref:`basic_auth_for_lvem` .) Instructions for obtaining a certificate are
available
`here <https://wiki.ligo.org/AuthProject/LIGOCARobotCertificate>`__. When you
send the certificate signing request (CSR) to the
``rt-auth`` queue, make sure to indicate that you intend to use the
robot certificate to access GraceDB.
Robot keytabs
======================
Robot keytabs are now available by going to `robots.ligo.org <https://robots.ligo.org>`__ and
clicking on "Apply for a shibboleth automaton keytab." However, the version of the
GraceDB Python client that works with the Kerberos ticket cache has not yet
been released. You can get this version of the client by cloning the client source
and checking out the ``shibbolized_client`` branch. After running ``kinit``
with your robot keytab to obtain a valid ticket cache, you can use the
Shibbolized client as follows::
from ligo.gracedb.rest import GraceDb, HTTPError
SERVICE = "https://gracedb.ligo.org/api/"
SHIB_SESSION_ENDPOINT = "https://gracedb.ligo.org/Shibboleth.sso/Session"
client = GraceDb(SERVICE, SHIB_SESSION_ENDPOINT)
try:
    r = client.ping()
except HTTPError as e:
    print(e.message)
else:
    print("Response code: %d" % r.status)
    print("Response content: %s" % r.json())
# -*- coding: utf-8 -*-
#
# GraceDB documentation build configuration file, created by
# sphinx-quickstart on Mon Jun 15 09:22:37 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import shlex
# Import bootstrap theme:
import sphinx_rtd_theme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx_rtd_theme",
'sphinx.ext.autodoc',
'sphinx.ext.todo',
]
# Add any paths that contain templates here, relative to this directory.
#templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'GraceDB'
copyright = u'2020, Tanner Prestegard, Alexander Pace, Brian Moe, Branson Stephens, Patrick Brady'
author = u'Tanner Prestegard, Alexander Pace, Brian Moe, Branson Stephens, Patrick Brady'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
#version = '1.19.dev0'
# The full version, including alpha/beta/rc tags.
#release = '1.19.dev0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
sys.path.append(os.path.abspath('_themes'))
html_theme_path = ['_themes']
html_theme = 'sphinx_rtd_theme'
#html_translator_class = 'bootstrap.HTMLTranslator'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {
# 'navbar_site_name': "Documentation Contents",
#}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = sphinx_rtd_theme.get_html_theme_path()
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# These paths are either relative to html_static_path
# or fully qualified paths (eg. https://...)
html_css_files = [
'css/extra.css',
]
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'GraceDBdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'GraceDB.tex', u'GraceDB Documentation',
u'Brian Moe, Branson Stephens, Patrick Brady', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'gracedb', u'GraceDB Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'GraceDB', u'GraceDB Documentation',
author, 'GraceDB', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
{
"N": 1,
"comment": "message",
"created": "2018-09-19 18:28:57 UTC",
"dec": 6.5,
"decWidth": 3.6999999999999993,
"footprint_count": 4,
"footprints": [
{
"N": 4,
"dec": 8.0,
"decWidth": 0.7,
"exposure_time": 1,
"ra": 4.0,
"raWidth": 0.5,
"start_time": "2018-09-19 13:29:00 UTC"
},
{
"N": 3,
"dec": 7.0,
"decWidth": 0.7,
"exposure_time": 1,
"ra": 3.0,
"raWidth": 0.5,
"start_time": "2018-09-19 13:28:59 UTC"
},
{
"N": 2,
"dec": 6.0,
"decWidth": 0.7,
"exposure_time": 1,
"ra": 2.0,
"raWidth": 0.5,
"start_time": "2018-09-19 13:28:58 UTC"
},
{
"N": 1,
"dec": 5.0,
"decWidth": 0.7,
"exposure_time": 1,
"ra": 1.0,
"raWidth": 0.5,
"start_time": "2018-09-19 13:28:57 UTC"
}
],
"group": "ARI LJMU",
"ra": 2.5,
"raWidth": 3.5,
"submitter": "albert.einstein@LIGO.ORG"
}
{
"submitter": "albert.einstein@LIGO.ORG",
"created": "2022-03-16 16:24:22 UTC",
"group": "CBC",
"graceid": "G194533",
"pipeline": "gstlal",
"gpstime": 1331483080.381693,
"instruments": "H1,L1",
"nevents": 2,
"offline": true,
"search": "AllSky",
"far": 9.838364843405464e-07,
"far_is_upper_limit": false,
"likelihood": 7.102756279364618,
"labels": [],
"extra_attributes": {
"CoincInspiral": {
"ifos": "H1,L1",
"end_time": 1331483080,
"end_time_ns": 381693272,
"mass": 33.25822162628174,
"mchirp": 8.482876777648926,
"minimum_duration": 10.69916749000549,
"snr": 8.132830463379912,
"false_alarm_rate": 1.0,
"combined_far": 9.838364843405464e-07
},
"SingleInspiral": [
{
"alpha5": 0.0,
"Gamma9": 0.0,
"alpha3": 0.0,
"chi": 0.0,
"Gamma3": 0.0,
"mass2": 3.8596239,
"psi3": 0.0,
"bank_chisq_dof": 0,
"alpha4": 0.0,
"spin1x": 0.0,
"tau5": 0.0,
"cont_chisq_dof": 0,
"tau0": 13.399828,
"spin2z": -0.7718026,
"chisq": 0.84257615,
"ttotal": 0.0,
"Gamma5": 0.0,
"Gamma2": 0.0,
"Gamma7": 0.0,
"ifo": "H1",
"tau3": 2.0797865,
"coa_phase": 1.0123366,
"template_duration": 16.54621628092527,
"bank_chisq": 4.4854383,
"event_duration": 0.0,
"sigmasq": 94135947.51864928,
"Gamma4": 0.0,
"Gamma6": 0.0,
"mtotal": 33.258221,
"tau4": 0.0,
"psi0": 0.0,
"impulse_time": 0,
"alpha6": 0.0,
"spin2x": 0.0,
"cont_chisq": 0.0,
"end_time_ns": 296526746,
"mchirp": 8.4828768,
"alpha": 0.0,
"mass1": 29.398598,
"impulse_time_ns": 0,
"spin1y": 0.0,
"Gamma8": 0.0,
"spin2y": 0.0,
"spin1z": -0.023584178,
"channel": "GDS-CALIB_STRAIN_CLEAN",
"end_time": 1269006850,
"eta": 0.10258257,
"kappa": 0.0,
"search": "",
"amplitude": 0.0,
"snr": 4.4854383,
"alpha2": 0.0,
"beta": 0.0,
"rsqveto_duration": 0.0,
"alpha1": 0.0,
"chisq_dof": 1,
"end_time_gmst": 46546.36006766932,
"f_final": 1024.0,
"Gamma0": 8754102.0,
"tau2": 0.0,
"Gamma1": 682.0
},
{
"alpha5": 0.0,
"Gamma9": 0.0,
"alpha3": 0.0,
"chi": 0.0,
"Gamma3": 0.0,
"mass2": 3.8596239,
"psi3": 0.0,
"bank_chisq_dof": 0,
"alpha4": 0.0,
"spin1x": 0.0,
"tau5": 0.0,
"cont_chisq_dof": 0,
"tau0": 13.399828,
"spin2z": -0.7718026,
"chisq": 0.80979407,
"ttotal": 0.0,
"Gamma5": 0.0,
"Gamma2": 0.0,
"Gamma7": 0.0,
"ifo": "L1",
"tau3": 2.0797865,
"coa_phase": -2.6118183,
"template_duration": 16.54621628092527,
"bank_chisq": 6.7840824,
"event_duration": 0.0,
"sigmasq": 147851034.091153,
"Gamma4": 0.0,
"Gamma6": 0.0,
"mtotal": 33.258221,
"tau4": 0.0,
"psi0": 0.0,
"impulse_time": 0,
"alpha6": 0.0,
"spin2x": 0.0,
"cont_chisq": 0.0,
"end_time_ns": 300832569,
"mchirp": 8.4828768,
"alpha": 0.0,
"mass1": 29.398598,
"impulse_time_ns": 0,
"spin1y": 0.0,
"Gamma8": 0.0,
"spin2y": 0.0,
"spin1z": -0.023584178,
"channel": "GDS-CALIB_STRAIN_CLEAN",
"end_time": 1269006850,
"eta": 0.10258257,
"kappa": 0.0,
"search": "",
"amplitude": 0.0,
"snr": 6.7840824,
"alpha2": 0.0,
"beta": 0.0,
"rsqveto_duration": 0.0,
"alpha1": 0.0,
"chisq_dof": 1,
"end_time_gmst": 46546.36006798331,
"f_final": 1024.0,
"Gamma0": 8754102.0,
"tau2": 0.0,
"Gamma1": 682.0
}
]
},
"superevent": null,
"superevent_neighbours": {
"S220316o": {
"... S220316o dict excluded for clarity ..."
},
"links": {
"neighbors": "https://gracedb-test.ligo.org/api/events/G194533/neighbors/",
"log": "https://gracedb-test.ligo.org/api/events/G194533/log/",
"emobservations": "https://gracedb-test.ligo.org/api/events/G194533/emobservation/",
"files": "https://gracedb-test.ligo.org/api/events/G194533/files/",
"labels": "https://gracedb-test.ligo.org/api/events/G194533/labels/",
"self": "https://gracedb-test.ligo.org/api/events/G194533",
"tags": "https://gracedb-test.ligo.org/api/events/G194533/tag/"
}
}
{
"graceid": "G194536",
"gpstime": 1042312876.509,
"pipeline": "CWB",
"labels": [],
"group": "Burst",
"extra_attributes": {
"MultiBurst": {
"central_freq": 1392.169556,
"false_alarm_rate": null,
"confidence": null,
"start_time_ns": 500000000,
"start_time": 1042312876,
"ligo_angle_sig": null,
"bandwidth": 256.0,
"single_ifo_times": "1042312876.5073,1042312876.5090",
"snr": 7.298671111921677,
"ligo_angle": null,
"amplitude": 5.017162,
"ligo_axis_ra": 201.224625,
"duration": 0.023438,
"ligo_axis_dec": 69.422546,
"peak_time_ns": null,
"peak_time": null,
"ifos": "H1,L1"
}
},
"links": {
"neighbors": "https://gracedb-test.ligo.org/api/events/G194536/neighbors/",
"files": "https://gracedb-test.ligo.org/api/events/G194536/files/",
"log": "https://gracedb-test.ligo.org/api/events/G194536/log/",
"tags": "https://gracedb-test.ligo.org/api/events/G194536/tag/",
"self": "https://gracedb-test.ligo.org/api/events/G194536",
"labels": "https://gracedb-test.ligo.org/api/events/G194536/labels/",
"emobservations": "https://gracedb-test.ligo.org/api/events/G194536/emobservation/"
},
"created": "2022-03-16 18:18:52 UTC",
"far": 0.00019265,
"instruments": "H1,L1",
"warnings": [],
"search": "AllSky",
"nevents": null,
"superevent": null,
"submitter": "albert.einstein@LIGO.ORG",
"superevent_neighbours": {},
"offline": false,
"likelihood": 53.2706,
"far_is_upper_limit": false
}
{
"graceid": "E194539",
"gpstime": 1238065339.32,
"pipeline": "Fermi",
"labels": [],
"group": "External",
"extra_attributes": {
"GRB": {
"author_ivorn": "ivo://nasa.gsfc.tan/gcn",
"dec": -67.8274,
"designation": null,
"redshift": null,
"how_description": "Fermi Satellite, GBM Instrument",
"coord_system": "UTC-FK5-GEO",
"trigger_id": "123456789",
"error_radius": 8.8374,
"how_reference_url": "http://gcn.gsfc.nasa.gov/fermi.html",
"ra": 345.99,
"ivorn": "fake_ivorn",
"trigger_duration": null,
"author_shortname": "Fermi (via VO-GCN)",
"T90": null,
"observatory_location_id": "GEOLUN"
}
},
"links": {
"neighbors": "https://gracedb-test.ligo.org/api/events/E194539/neighbors/",
"files": "https://gracedb-test.ligo.org/api/events/E194539/files/",
"log": "https://gracedb-test.ligo.org/api/events/E194539/log/",
"tags": "https://gracedb-test.ligo.org/api/events/E194539/tag/",
"self": "https://gracedb-test.ligo.org/api/events/E194539",
"labels": "https://gracedb-test.ligo.org/api/events/E194539/labels/",
"emobservations": "https://gracedb-test.ligo.org/api/events/E194539/emobservation/"
},
"created": "2022-03-16 18:25:37 UTC",
"far": null,
"instruments": "",
"warnings": [],
"search": null,
"nevents": null,
"superevent": null,
"submitter": "enrico.fermi@LIGO.ORG",
"superevent_neighbours": {},
"offline": false,
"likelihood": null,
"far_is_upper_limit": false
}
{
"warnings": [],
"submitter": "wolfgang.pauli@ligo.org",
"created": "2024-03-01 20:18:46 UTC",
"group": "External",
"graceid": "E653136",
"pipeline": "IceCube",
"gpstime": 1384986914.64,
"reporting_latency": 8372630.079705,
"instruments": "",
"nevents": null,
"offline": false,
"search": "HEN",
"far": 4.667681380010147e-09,
"far_is_upper_limit": false,
"likelihood": null,
"labels": [],
"extra_attributes": {
"NeutrinoEvent": {
"ivorn": "ivo://nasa.gsfc.gcn/AMON#ICECUBE_GOLD_Event2023-11-25T22:34:56.64_24_138599_039138591_0",
"coord_system": "UTC-FK5-GEO",
"ra": 176.2601,
"dec": 52.6366,
"error_radius": 0.7792,
"far_ne": 0.1472,
"far_unit": "yr^-1",
"signalness": 0.6312,
"energy": 191.7344,
"src_error_90": 0.7792,
"src_error_50": 0.3035,
"amon_id": 13859939138591,
"run_id": 138599,
"event_id": 39138591,
"stream": 24
}
},
"superevent": null,
"superevent_neighbours": {},
"links": {
"neighbors": "https://gracedb-test.ligo.org/api/events/E653136/neighbors/",
"log": "https://gracedb-test.ligo.org/api/events/E653136/log/",
"emobservations": "https://gracedb-test.ligo.org/api/events/E653136/emobservation/",
"files": "https://gracedb-test.ligo.org/api/events/E653136/files/",
"labels": "https://gracedb-test.ligo.org/api/events/E653136/labels/",
"self": "https://gracedb-test.ligo.org/api/events/E653136",
"tags": "https://gracedb-test.ligo.org/api/events/E653136/tag/"
}
}
{
"submitter": "alan.turing@ligo.org",
"created": "2024-02-20 16:41:20 UTC",
"group": "Burst",
"graceid": "G648217",
"pipeline": "MLy",
"gpstime": 1392456472.379048,
"reporting_latency": 26026.564137,
"instruments": "H1,L1",
"nevents": null,
"offline": false,
"search": "AllSky",
"far": 5.855080848625316e-05,
"far_is_upper_limit": false,
"likelihood": null,
"labels": [],
"extra_attributes": {
"MLyBurst": {
"bandwidth": 64.0,
"central_freq": 309.246308659392,
"central_time": 1392456472.379048,
"duration": 0.1875,
"SNR": 6.015912207824155,
"detection_statistic": null,
"scores": {
"coherency": 0.0799756646156311,
"coincidence": 0.2124568223953247,
"combined": 0.016991375573191192
}
}
},
"superevent": null,
"superevent_neighbours": {},
"links": {
"neighbors": "https://gracedb-test.ligo.org/api/events/G648217/neighbors/",
"log": "https://gracedb-test.ligo.org/api/events/G648217/log/",
"emobservations": "https://gracedb-test.ligo.org/api/events/G648217/emobservation/",
"files": "https://gracedb-test.ligo.org/api/events/G648217/files/",
"labels": "https://gracedb-test.ligo.org/api/events/G648217/labels/",
"self": "https://gracedb-test.ligo.org/api/events/G648217",
"tags": "https://gracedb-test.ligo.org/api/events/G648217/tag/"
}
}
{
"graceid": "G194537",
"gpstime": 1216336200.66,
"pipeline": "oLIB",
"labels": [],
"group": "Burst",
"extra_attributes": {
"LalInferenceBurst": {
"omicron_snr_H1": 4.98,
"omicron_snr_L1": 4.99,
"hrss_mean": 8.12e-23,
"frequency_median": 718.03,
"hrss_median": 2.19e-23,
"omicron_snr_network": 6.91,
"quality_mean": 15.2,
"bsn": 7.19,
"frequency_mean": 721.23,
"quality_median": 15.1,
"omicron_snr_V1": null,
"bci": 1.111
}
},
"links": {
"neighbors": "https://gracedb-test.ligo.org/api/events/G194537/neighbors/",
"files": "https://gracedb-test.ligo.org/api/events/G194537/files/",
"log": "https://gracedb-test.ligo.org/api/events/G194537/log/",
"tags": "https://gracedb-test.ligo.org/api/events/G194537/tag/",
"self": "https://gracedb-test.ligo.org/api/events/G194537",
"labels": "https://gracedb-test.ligo.org/api/events/G194537/labels/",
"emobservations": "https://gracedb-test.ligo.org/api/events/G194537/emobservation/"
},
"created": "2022-03-16 18:21:37 UTC",
"far": 7.22e-06,
"instruments": "H1,L1",
"warnings": [],
"search": "AllSky",
"nevents": 1,
"superevent": null,
"submitter": "albert.einstein@LIGO.ORG",
"superevent_neighbours": {},
"offline": false,
"likelihood": null,
"far_is_upper_limit": false
}
{
"created": "2018-09-19 18:29:24 UTC",
"creator": "albert.einstein@LIGO.ORG",
"name": "DQV",
"self": "https://gracedb.ligo.org/api/events/T0140/labels/DQV"
}