GraceDB Server issues
https://git.ligo.org/computing/gracedb/server/-/issues
2020-01-15T12:13:58Z

https://git.ligo.org/computing/gracedb/server/-/issues/188
cWB skymap not found on the "public alerts" GraceDB page
2020-01-15T12:13:58Z
Nicolas Arnaud

The root cause of this is probably outside GraceDB and I apologize if this has already been discussed elsewhere -- or is being addressed at the moment. For the recent cWB public alert, https://gracedb.ligo.org/superevents/public/O3/ displays
> S200114f (...) No public skymap image found.
likely because the Bayestar sky maps are labelled "cWB" instead of "bayestar": https://gracedb.ligo.org/apiweb/superevents/S200114f/files/cWB.fits.gz. Pending a fix to the naming convention for the cWB skymaps (which may be intentional), a test could be added to GraceDB to look for cWB.fits.gz if bayestar.fits.gz is not found.

https://git.ligo.org/computing/gracedb/server/-/issues/189
Improve caching in django
2020-06-17T02:45:49Z
Alexander Pace

The settings exist (https://git.ligo.org/lscsoft/gracedb/blob/master/config/settings/base.py#L242) in gracedb's configuration for memcached caching, which is a quick and easy way to cache webpage views. This will be particularly helpful when new events come in and hundreds of users start hitting the website. Even some modest caching (I don't know what that means yet -- 10 seconds? 30 seconds?) would greatly reduce server load and prevent django from making db queries and rendering new templates every time someone goes to the website or hits "reload".
A couple of issues:
* The configuration is set up for `memcached` caching in memory, but the `memcached` daemon isn't actually running or installed on any of the development machines or AWS containers. I installed it manually on `gracedb-dev2` and it started memcaching almost immediately. Almost.
* The `MIDDLEWARE` section of `config/settings/base.py` needs to be edited to look like this:
```
# List of middleware classes to use.
MIDDLEWARE = [
'core.middleware.maintenance.MaintenanceModeMiddleware',
'events.middleware.PerformanceMiddleware',
'core.middleware.accept.AcceptMiddleware',
'core.middleware.api.ClientVersionMiddleware',
'core.middleware.api.CliExceptionMiddleware',
'django.middleware.cache.UpdateCacheMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.cache.FetchFromCacheMiddleware',
'core.middleware.proxy.XForwardedForMiddleware',
'user_sessions.middleware.SessionMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'ligoauth.middleware.ShibbolethWebAuthMiddleware',
'ligoauth.middleware.ControlRoomMiddleware',
]
```
Note that the order matters: Django requires `UpdateCacheMiddleware` to come before `FetchFromCacheMiddleware` in the list.
One other complication: it occurred to me that we're running a docker swarm of nodes, and yeah, each one has plenty of memory. However, they won't be able to access each other's local memory cache. Hmmm. I can run some tests and monitor the memory and caching on each node, but it doesn't seem efficient.
Last thing:
Amazon offers something called memcached "Elasticache", which appears to be a shared memory cache for different nodes:
* https://aws.amazon.com/elasticache/memcached/
It seems to be what we're looking for. Also this requires a new django backend:
* https://pypi.org/project/django-elasticache/
So I'm guessing the process is going to look like:
1) Log into AWS and find out how to make a new elasticache partition for each one of the different tiers. This can probably be automated with ansible, but at first I'll just click through the web interface like a caveman.
2) Modify `requirements.txt` to install `django-elasticache`.
3) Modify `config/settings/container/base.py` to include the elasticache stuff under `CACHES`. The address is going to be different, but that can be automated with a deployment environment variable in the docker swarm deployment yml.
4) Modify `MIDDLEWARE` to include the django-elasticache middleware. I'm not sure what this will look like exactly, but it should probably model the block that I pasted up there.
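For step 3, a hedged sketch of what the `CACHES` block might look like. The backend path comes from the django-elasticache project; the endpoint address, environment variable name, and timeout are placeholders, not actual deployment values:

```python
import os

# Hypothetical CACHES block for config/settings/container/base.py.
# The endpoint and env var name are placeholders for the real swarm values.
CACHES = {
    'default': {
        'BACKEND': 'django_elasticache.memcached.ElastiCache',
        'LOCATION': os.environ.get(
            'ELASTICACHE_ENDPOINT',
            'gracedb-cache.abc123.cfg.use1.cache.amazonaws.com:11211',
        ),
        'TIMEOUT': 30,  # modest page-cache lifetime in seconds, per the discussion above
    }
}
```

The `LOCATION` would be populated from the docker swarm deployment yml, as described in step 3.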
Other useful links:
https://docs.djangoproject.com/en/3.0/topics/cache/
https://www.tutorialspoint.com/django/django_caching.htm
https://devcenter.heroku.com/articles/django-memcache

https://git.ligo.org/computing/gracedb/server/-/issues/190
Indexing guardian_groupobjectpermission table
2020-03-05T21:36:38Z
Alexander Pace

Documenting some stuff here so I don't forget it.
The `guardian_groupobjectpermission` table is a huge mess. Basically it controls who can access what for nearly every transaction on GraceDB. Not surprisingly, this is a big table compared to the rest of the db, and apparently it's extremely inefficient at doing lookups.
For instance, [here's a line](https://git.ligo.org/lscsoft/gracedb/blame/master/gracedb/migrations/guardian/0001_initial.py#L41) showing that there's a primary-key lookup that compares a string (`object_pk`) to an integer value (`model.id`) to grab a `GroupObjectPermission`. Instead of doing any sort of efficient indexed lookup, it's looping through the whole permissions table. That's got to be improved.
There's other room for improvement too. After years of uploading and deleting stuff, there are countless "orphaned" object permissions littering the db (https://django-guardian.readthedocs.io/en/stable/userguide/caveats.html#orphaned-object-permissions). I went through and I deleted orphaned permissions on the development and playground boxes (didn't touch production yet), and this is what I found:
| Instance | Number of Orphaned Objects | Time to remove |
| ------ | ------ | ------ |
| gracedb-dev2 | 4 | O(instantly) |
| gracedb-dev | 371446 | 2400s |
| gracedb-test | 308 | 665s |
| gracedb-playground | O(300,000) | 4400s |
| gracedb-production | 479899 | 7440s |
Definitely slow. I noticed some strange IOPs limits too, but I'll make a new ticket with that.
I went into the dev and test machines and created an index on `object_pk` to see if that would make a difference, and there have been definite performance improvements so far. For my own reference, this was just a matter of running:
```
MariaDB [gracedb]> create index object_pk_idx on guardian_groupobjectpermission (object_pk);
```
I timed how long it took to perform that operation along with the before/after times of the client integration test (to make sure nothing broke) and the results are:
| Instance | Size of `guardian_groupobjectpermission` Table | Test Time Before Indexing (s) | Test Time After Indexing (s) | With python-requests (s) |
| ------ | ------ | ------ | ------ | ------ |
| gracedb-dev2 | 3103 | 264.69 | 256.20 | 176.17 |
| gracedb-test | 472005 | 309.72 | 245.93 | 175.18 |
| gracedb-playground | 1249035 | 408.66 | 261.96 | 168.57 |
| gracedb-production | 1240519 | 385.85 | 297.86 | 211.11 |
So there's a definite improvement. I'm going to do the same thing to playground during maintenance tomorrow, and if everyone's happy then I'll push it to production the following week.
**Edit**: Added gracedb-playground before/after testing data after indexing the permissions table.

https://git.ligo.org/computing/gracedb/server/-/issues/191
apache improvements
2020-01-29T17:52:33Z
Alexander Pace

Making notes here about ways to optimize connections via apache. I was reading through some sources about apache worker concurrency (CPU usage seems to be a bottleneck, or at least that's the operative theory). Here's an informative one:
https://serverfault.com/questions/775855/how-to-configure-apache-workers-for-maximum-concurrency
So I need to find out which modules are being loaded and what settings are being used. Here's the setup for `dev2` and `playground` (AWS).
`gracedb-dev2`:
```
root@gracedb-dev2:/etc/apache2# apachectl -M | grep mpm
mpm_worker_module (shared)
```
```
root@gracedb-dev2:/etc/apache2# cat mods-enabled/worker.conf
<IfModule mpm_worker_module>
ServerLimit 25
StartServers 2
ThreadLimit 64
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
ListenBacklog 511
</IfModule>
```
`gracedb-playground`:
```
root@17b0c88a4c4f:/etc/apache2# apachectl -M | grep mpm
mpm_event_module (shared)
```
```
root@17b0c88a4c4f:/etc/apache2# cat mods-enabled/mpm_event.conf
# event MPM
# StartServers: initial number of server processes to start
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestWorkers: maximum number of worker threads
# MaxConnectionsPerChild: maximum number of requests a server process serves
<IfModule mpm_event_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxRequestWorkers 150
MaxConnectionsPerChild 0
</IfModule>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
```
So that's a start at least. They look to be the same (I'm assuming the defaults), but I have some knobs to tweak potentially. I'll look into doing some testing with [ab](https://httpd.apache.org/docs/2.4/programs/ab.html) and [siege](https://www.joedog.org/siege-home/) and see what I can come up with.

https://git.ligo.org/computing/gracedb/server/-/issues/192
Two suggestions for the "neighbors" table on G event pages
2020-02-26T21:53:15Z
Tito Dal Canton

The "neighbors" table would be much more useful with the following features:
* A column showing the network SNR for CBC events
* The ability to click on a column to sort the table according to that column

https://git.ligo.org/computing/gracedb/server/-/issues/193
Database improvements
2022-03-01T02:36:11Z
Alexander Pace

Making a ticket here to consolidate a few existing database-related tickets that have been stagnating for a while.
* Optimizing the `groupobjectpermission` table: https://git.ligo.org/lscsoft/gracedb/issues/190
* MyISAM --> InnoDB: https://git.ligo.org/lscsoft/gracedb/issues/51. I need to add my notes from when I tested this previously, and the deadlocking issues I ran into. I'm curious if the permissions table changes would make a difference for this one.
* Retry transactions automatically on failure? I did not have good luck with [this suggestion](https://stackoverflow.com/questions/17183434/hook-available-for-automatic-retry-after-deadlock-in-django-and-mysql-setup) before, but maybe it can be generalized as a global retry hook. In the CI environment, GWCelery ran into a race condition when different tests were trying to `.save()` the same superevent at the same time (https://git.ligo.org/emfollow/gracedb-sdk/compare/6e6b6beb05388c52887299f65f5587b50a962240...ae83cdeb2940a258e2ab7fa174fe42cadde67d0a). The pipelines failed with an IntegrityError, but I wonder if a global retry feature would improve stability.
* MariaDB--> PostgresQL (https://git.ligo.org/lscsoft/gracedb/issues/52). Definitely a post-O3b thing.
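On the retry idea: a generic retry hook could be sketched as a decorator. This is a hypothetical helper, not GraceDB code; in Django it would catch `django.db.utils.OperationalError`/`IntegrityError` rather than a bare exception type, and the try counts and backoff values here are made up:

```python
import functools
import time

def retry_on(exceptions, tries=3, base_delay=0.1):
    """Retry the wrapped callable on transient errors (deadlocks, save races)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == tries - 1:
                        raise  # out of retries: re-raise the original error
                    time.sleep(base_delay * 2 ** attempt)  # simple exponential backoff
        return wrapper
    return decorator
```

Whether this is safe depends on the transaction boundaries; blindly retrying a `.save()` that partially committed could make things worse.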
I'll update this ticket as I come across more issues.

https://git.ligo.org/computing/gracedb/server/-/issues/194
Add GraceDB ID to page title
2020-02-26T17:15:46Z
Leo P. Singer

The title of GraceDB superevent and event pages is just `GraceDB |`. It would be much easier to keep track of browser tabs and history if the GraceDB ID were included in the page title.
![Screen_Shot_2020-02-25_at_18.02.06](/uploads/ea1a5fc89afab2bb231df5875f39cded/Screen_Shot_2020-02-25_at_18.02.06.png)

https://git.ligo.org/computing/gracedb/server/-/issues/195
Normalization of decimal GPS times is not applied to form-encoded requests to API endpoints
2022-12-06T22:31:07Z
Leo P. Singer

When making requests to API endpoints that expect decimal GPS times (for instance, when creating a superevent), normalization of GPS time inputs is done differently depending on whether the payload is JSON or form-encoded.
With form-encoded payloads, arguments with too many decimal digits cause an error when they fail to validate against the format expected by Django:
```
$ curl -E /tmp/x509up_u$UID -d preferred_event=T63895 -d t_start=1000000000.1234568 -d t_0=1000000000.1234568 -d t_end=1000000000.1234568 -d category=T https://gracedb-test.ligo.org/api/superevents/
["Ensure that there are no more than 16 digits in total.","Ensure that there are no more than 16 digits in total.","Ensure that there are no more than 16 digits in total."]
```
If one makes the same request with a JSON payload, it succeeds because the GPS time fields get rounded and truncated to the appropriate number of digits:
```
$ curl -E /tmp/x509up_u$UID -H 'Content-Type: application/json' -d '{"preferred_event": "T63895", "t_start": 1000000000.1234568, "t_0": 1000000000.1234568, "t_end": 1000000000.1234568, "category": "T"}' https://gracedb-test.ligo.org/api/superevents/
```
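The rounding that the JSON path effectively performs could be sketched like this. The field parameters (16 total digits, 6 decimal places) are inferred from the error message above, not read from the server code:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def normalize_gps(value, decimal_places=6):
    """Quantize a GPS time so it fits a fixed-precision decimal field."""
    exponent = Decimal(1).scaleb(-decimal_places)  # Decimal('0.000001')
    return Decimal(str(value)).quantize(exponent, rounding=ROUND_HALF_EVEN)

print(normalize_gps('1000000000.1234568'))  # -> 1000000000.123457
```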
This inconsistency is because the [CustomDecimalField](https://git.ligo.org/lscsoft/gracedb/-/blob/master/gracedb/api/v1/fields.py#L137-148) normalization is applied to JSON payloads but not to form-encoded payloads. The normalization should be applied to all payloads, regardless of encoding.

https://git.ligo.org/computing/gracedb/server/-/issues/196
Warnings from event creation are mangled from a string into a list of characters in JSON output
2022-03-22T18:53:48Z
Leo P. Singer

Warnings from event creation are rendered as a `['l', 'i', 's', 't', ' ', 'o', 'f', ' ', 'c', 'h', 'a', 'r', 's']` rather than as a `string`.
Here is an example using the attached [coinc.xml](/uploads/72e843d0d09de2f349d1ec2beee0bc75/coinc.xml) file.
```
$ gracedb create event Test gstlal coinc.xml
{
"gpstime": null,
"superevent": null,
"far_is_upper_limit": false,
"search": null,
"group": "Test",
"created": "2020-04-15 14:08:32 UTC",
"links": {
"self": "https://gracedb.ligo.org/api/events/T370323",
"neighbors": "https://gracedb.ligo.org/api/events/T370323/neighbors/",
"labels": "https://gracedb.ligo.org/api/events/T370323/labels/",
"files": "https://gracedb.ligo.org/api/events/T370323/files/",
"tags": "https://gracedb.ligo.org/api/events/T370323/tag/",
"log": "https://gracedb.ligo.org/api/events/T370323/log/",
"emobservations": "https://gracedb.ligo.org/api/events/T370323/emobservation/"
},
"instruments": "",
"extra_attributes": {
"CoincInspiral": {
"false_alarm_rate": null,
"end_time_ns": null,
"end_time": null,
"mchirp": null,
"ifos": "",
"minimum_duration": null,
"combined_far": null,
"mass": null,
"snr": null
}
},
"offline": false,
"far": null,
"likelihood": null,
"submitter": "leo.singer@LIGO.ORG",
"labels": [],
"graceid": "T370323",
"pipeline": "gstlal",
"warnings": [
"C",
"o",
"u",
"l",
"d",
" ",
"n",
"o",
"t",
" ",
"e",
"x",
"t",
"r",
"a",
"c",
"t",
" ",
"c",
"o",
"i",
"n",
"c",
" ",
"i",
"n",
"s",
"p",
"i",
"r",
"a",
"l",
" ",
"t",
"a",
"b",
"l",
"e",
"."
],
"nevents": null
}
```

https://git.ligo.org/computing/gracedb/server/-/issues/197
time_slide_id must be non-empty
2022-08-04T01:50:52Z
Leo P. Singer

Even though GraceDB does not use or save the `time_slide_id` column, it must be non-empty. This is due to bugs in `glue.ligolw` that are fixed in `ligo.lw`. Migrating from `glue.ligolw` to `ligo.lw` would fix this.
Here's an example showing that the `time_slide_id` column may be empty with `ligo.lw`, but may not be empty for `glue.ligolw`.
```python
# test.py
import ligo.lw.ligolw
import ligo.lw.lsctables
import ligo.lw.utils
import glue.ligolw.ligolw
import glue.ligolw.lsctables
import glue.ligolw.utils
for api in [glue.ligolw, ligo.lw]:
    print('testing API:', api)
    xmldoc = api.ligolw.Document()
    xmldoc.appendChild(api.ligolw.LIGO_LW())
    coinc_table = api.lsctables.New(api.lsctables.CoincTable)
    row = coinc_table.RowType()
    for colname in coinc_table.validcolumns:
        setattr(row, colname, None)
    coinc_table.append(row)
    xmldoc.childNodes[0].appendChild(coinc_table)
    api.utils.write_filename(xmldoc, 'test.xml')
```
```
$ python test.py
testing API: <module 'glue.ligolw' from '/usr/lib/python3.8/site-packages/glue/ligolw/__init__.py'>
testing API: <module 'ligo.lw' from '/usr/lib/python3.8/site-packages/ligo/lw/__init__.py'>
Traceback (most recent call last):
  File "test.py", line 19, in <module>
    api.utils.write_filename(xmldoc, 'test.xml')
  File "/usr/lib/python3.8/site-packages/ligo/lw/utils/__init__.py", line 526, in write_filename
    write_fileobj(xmldoc, fileobj, gz = gz, **kwargs)
  File "/usr/lib/python3.8/site-packages/ligo/lw/utils/__init__.py", line 445, in write_fileobj
    xmldoc.write(fileobj, **kwargs)
  File "/usr/lib/python3.8/site-packages/ligo/lw/ligolw.py", line 798, in write
    c.write(fileobj)
  File "/usr/lib/python3.8/site-packages/ligo/lw/ligolw.py", line 389, in write
    c.write(fileobj, indent + Indent)
  File "/usr/lib/python3.8/site-packages/ligo/lw/ligolw.py", line 389, in write
    c.write(fileobj, indent + Indent)
  File "/usr/lib/python3.8/site-packages/ligo/lw/table.py", line 433, in write
    line = next(rowdumper)
AttributeError: coinc_def_id
```

https://git.ligo.org/computing/gracedb/server/-/issues/198
Comments on GraceDB's Visual Interface
2020-09-15T09:52:26Z
Alexander Pace

Hi All:
I'm opening up the forum to comments regarding GraceDB's visual interface. There are probably many items that I haven't caught yet, so I would appreciate any constructive feedback about anything you have encountered in the two months since I pushed the interface to [Playground](https://gracedb-playground.ligo.org/).
Mostly I'm looking for:
* Functionality that was present in the old site but isn't obviously available now.
* Obviously broken visual interface elements (links and screenshots would be appreciated).
* Constructive feedback regarding the interface. I'm looking for comments along the lines of "visual element X should display Y information" or "it would be more clear if this element showed XYZ". Comments like "doesn't look good" do not give me much to work with and are largely subjective.
This ticket should be long-lasting, but I'm going to close the "official" comment period in one week (on June 25) so I can push changes into production with the next version of the [server code](https://git.ligo.org/lscsoft/gracedb/tree/gracedb-2.10.0). I'm not expecting it to be perfect, but I'd rather get an 85% done product out and then polish things as they come up since we're not in observation.
Thanks.

https://git.ligo.org/computing/gracedb/server/-/issues/199
silent setuptools upgrade breaks supervisor in containers
2020-09-16T16:57:50Z
Alexander Pace

Issue:
I built new containers that upgraded to the newest setuptools, which I guess breaks supervisor (which is inexplicably running python2?). This is a problem because apache, gunicorn, and the lvalert_overseer are all run via supervisor. The symptoms I saw were the containers in the swarm endlessly starting and restarting, tons of log output (below), and no running gracedb.
Here's a source link:
https://github.com/pypa/setuptools/issues/2164
Output on a working container:
```
root@962abe794655:/app/gracedb_project# pip list installed | grep setuptools
setuptools 46.1.1
WARNING: You are using pip version 20.0.2; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
root@962abe794655:/app/gracedb_project# supervisord status
/usr/local/lib/python3.5/dist-packages/pkg_resources/py2_warn.py:22: UserWarning: Setuptools will stop working on Python 2
************************************************************
You are running Setuptools on Python 2, which is no longer
supported and
>>> SETUPTOOLS WILL STOP WORKING <<<
in a subsequent release (no sooner than 2020-04-20).
Please ensure you are installing
Setuptools using pip 9.x or later or pin to `setuptools<45`
in your environment.
If you have done those things and are still encountering
this message, please comment in
https://github.com/pypa/setuptools/issues/1458
about the steps that led to this unsupported combination.
************************************************************
sys.version_info < (3,) and warnings.warn(pre + "*" * 60 + msg + "*" * 60)
Error: positional arguments are not supported: ['status']
For help, use /usr/bin/supervisord -h
```
and on a "broken" container:
```
root@58924b4db1a5:/app/gracedb_project# pip list installed | grep setuptools
setuptools 47.3.1
root@58924b4db1a5:/app/gracedb_project# supervisord status
/usr/local/lib/python3.5/dist-packages/pkg_resources/py2_warn.py:15: UserWarning: Setuptools no longer works on Python 2
************************************************************
Encountered a version of Setuptools that no longer supports
this version of Python. Please head to
https://bit.ly/setuptools-py2-sunset for support.
************************************************************
warnings.warn(pre + "*" * 60 + msg + "*" * 60)
```
Unfortunately it looks like we're already at the newest version of supervisor available for debian 9, at least in the repos that we have (upstream 4.2.0 has the fix):
```
root@962abe794655:/app/gracedb_project# apt-get --only-upgrade install supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
supervisor is already the newest version (3.3.1-1+deb9u1).
0 upgraded, 0 newly installed, 0 to remove and 16 not upgraded.
```
In the short term I'm going to pin setuptools to 46.1.1 and then look into upgrading the base container to debian 10 instead of debian 9.

https://git.ligo.org/computing/gracedb/server/-/issues/200
Proposals and comments for curated event pages
2021-09-15T16:11:46Z
Alexander Pace

As the public-facing GWTC-* (curated) event pages are developed, I'll be taking feedback on this ticket before going live. Further background on the curation process can be found here: https://dcc.ligo.org/LIGO-T2000569.
In particular, I'll be looking for feedback as to how to properly distill all the information that's already in GraceDB in such a way to be digestible and clear for readers outside of the analyst community.
The related OpenProject charge can be found here: https://cbcprojects.ligo.org/projects/gracedb-event-curation/

https://git.ligo.org/computing/gracedb/server/-/issues/201
Update O3 public alerts page to point to published catalog
2020-11-23T16:55:28Z
Jonah Kanner

@alexander.pace
Here's a suggestion from Beverly Berger to add a pointer to GWTC-2 on the O3 public alerts page.
-jonah
Hi, Jonah.
I had a thought about how to add information on the final resolution of alert triggers to https://gracedb.ligo.org/superevents/public/O3/. I think it would be sufficient to say something like see (relevant table of events in GWOSC) and (section in the catalog paper where the reasons for the final set of events are given).
Beverly

https://git.ligo.org/computing/gracedb/server/-/issues/202
Add an obvious way to download images shown in light boxes
2022-03-22T18:09:19Z
Leo P. Singer

There is no obvious way to download images shown in light boxes. However the images are being displayed, the right-click menu does not give an option to save the image.
Here is a screenshot from Chrome:
![Screen_Shot_2021-01-13_at_08.57.58](/uploads/75a2dc510d8670a8d6d64ef48b49927e/Screen_Shot_2021-01-13_at_08.57.58.png)

https://git.ligo.org/computing/gracedb/server/-/issues/203
Update signoff button is very difficult to click, mouse focus blocked by text
2022-02-06T00:22:25Z
Leo P. Singer

It is very difficult to click the "Update signoff" button because the text seems to steal the mouse focus away from the button itself. Only parts of the button that do not contain text are clickable. See attached screen recording.
![Screen_Recording_2021-03-01_at_15.24.10](/uploads/88c526533f6c9db39390f5f93de7f648/Screen_Recording_2021-03-01_at_15.24.10.mov)

https://git.ligo.org/computing/gracedb/server/-/issues/204
Uploading results of targeted GRB/FRB followup searches
2023-07-31T15:01:40Z
Tito Dal Canton

On at least two occasions, people have recently requested PyGRB candidates to be uploaded to GraceDB for detchar and PE followup. One difficulty with this is that PyGRB, being a targeted followup search of an existing transient, reports a p-value (false-alarm probability) associated with data around that particular transient, instead of a false-alarm rate as commonly understood in GraceDB land. PyGRB is also a coherent search, which may create some more impedance mismatch in terms of LIGOLW tables. A similar issue would also arise for any X-pipeline candidate (also from targeted GRB/FRB followup searches), although we have not had any particularly interesting X-pipeline candidates yet.
@alexander.pace suggested opening this issue, but I am not entirely sure if this is a PyGRB/X-pipeline problem, or a GraceDB problem, or maybe more of a schema problem. If we had an interesting continuous-wave or stochastic candidate, for example, would people want to see that in GraceDB as well, and what would the solution be?
Tagging @derek.davis, @francesco-pannarale and @ian-harry.

https://git.ligo.org/computing/gracedb/server/-/issues/205
Reading a specified number of bytes from the beginning of a streamed file returns fewer bytes than expected
2021-08-11T17:05:20Z
Leo P. Singer

When streaming a file from GraceDB, calls to `fileobj.read(n)` return fewer than `n` bytes. For example, this script:
```python
#!/usr/bin/env python
from ligo.gracedb.rest import GraceDb
client = GraceDb(fail_if_noauth=True)
for i in range(1, 32):
    fileobj = client.files('G330564', 'psd.xml.gz')
    magic = fileobj.read(i)
    print('requested', i, 'got', len(magic))
```
produces the following output:
```
requested 1 got 0
requested 2 got 0
requested 3 got 0
requested 4 got 0
requested 5 got 0
requested 6 got 0
requested 7 got 0
requested 8 got 0
requested 9 got 0
requested 10 got 0
requested 11 got 0
requested 12 got 0
requested 13 got 0
requested 14 got 0
requested 15 got 0
requested 16 got 1
requested 17 got 2
requested 18 got 3
requested 19 got 4
requested 20 got 5
requested 21 got 6
requested 22 got 7
requested 23 got 8
requested 24 got 9
requested 25 got 10
requested 26 got 11
requested 27 got 12
requested 28 got 13
requested 29 got 14
requested 30 got 15
requested 31 got 16
```
I have reproduced this with a variety of versions of gracedb-client and Python and on both macOS and the LIGO-Caltech cluster, which leads me to think that it is due to a server-side change.
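As a client-side workaround (not a fix for whatever the server is doing), a reader could loop until it has the requested number of bytes or hits EOF. This is a hypothetical sketch; note it only helps if `read()` eventually yields the bytes, so it would not recover a prefix the stream swallows entirely:

```python
import io

def read_exactly(fileobj, n):
    """Keep calling read() until n bytes are collected or the stream ends."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = fileobj.read(remaining)
        if not chunk:  # EOF (or a stream that returns empty early)
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b''.join(chunks)

# Works for well-behaved file objects whose read() may return short:
print(len(read_exactly(io.BytesIO(b'x' * 100), 16)))  # -> 16
```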
The minimum value of `n` for which output starts seems to depend on the file extension: .xml, .fits, and .png files have problems, whereas .log and .json files start producing output right away. @alexander.pace suggests that this may mean that it has to do with MIME type handling.

https://git.ligo.org/computing/gracedb/server/-/issues/206
Proposals and comments for public release of data products
2022-08-03T18:46:59Z
Alexander Pace

Ticket to track discussion regarding O4 release of low-latency data products.
Please refer to the minutes from day-one of the low-latency virtual face-to-face on September 15, 2021 [here](https://docs.google.com/document/d/1L0hLw1A3H20Xphjj4vhuN7aLWd2roTmpwDav9PUzNXM/edit#).
In particular, the way public exposure of data products works currently is:
- Only information on a _superevent_'s page is made public. This includes the web front-end and queries from the API.
- G-_event_ pages are not made public. The result of the F2F discussion on 2021/09/15 seemed to indicate that it should remain that way.
- _Superevent_ properties (such as FAR) are inherited from the superevent's preferred event. For reference, a superevent's preferred event is a one-to-one relationship with an event in the database (https://git.ligo.org/lscsoft/gracedb/-/blob/master/gracedb/superevents/models.py#L92). As such, a superevent's FAR is simply a lookup on its preferred event, for example: https://git.ligo.org/lscsoft/gracedb/-/blob/master/gracedb/superevents/models.py#L431
- Data products, such as skymaps, p_astro and such, are uploaded for a superevent and given a couple of tags, such as `skyloc`, which gives the file its own section on a superevent landing page:
![Screen_Shot_2021-09-15_at_12.57.48_PM](/uploads/b6001318e20cb5d5f97973b5a40b632e/Screen_Shot_2021-09-15_at_12.57.48_PM.png)
- Adding a `public` tag will, understandably, make that file (log message) public by having the Django template generator filter based on that public tag (example: https://git.ligo.org/lscsoft/gracedb/-/blob/master/gracedb/templates/superevents/preferred_event_info_table_public.html)
- Similarly, API calls filter on the `public` tag and perform an authentication check, so that information is returned selectively.
- A superevent is a super-set of G-events. This is defined by a ForeignKey database relation (here: https://git.ligo.org/lscsoft/gracedb/-/blob/master/gracedb/events/models.py#L189). This relationship can be created or destroyed by `superevent_manager`s. There's no internal logic deciding this relationship.
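The tag-based exposure described above can be illustrated with a small stand-alone sketch. This is plain Python standing in for the actual Django queryset and template filtering; the message dicts and the `public_messages` helper are hypothetical:

```python
# Hypothetical stand-in for GraceDB log messages: each upload carries
# tags, and only messages tagged 'public' appear on the public page
# or in unauthenticated API responses.
log_messages = [
    {"filename": "bayestar.fits.gz", "tags": {"sky_loc", "public"}},
    {"filename": "psd.xml.gz", "tags": {"psd"}},
    {"filename": "p_astro.json", "tags": {"p_astro", "public"}},
]

def public_messages(messages):
    """Mimic the template/API filter: keep only 'public'-tagged messages."""
    return [m for m in messages if "public" in m["tags"]]

for m in public_messages(log_messages):
    print(m["filename"])
# bayestar.fits.gz
# p_astro.json
```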
Okay. So that's how it works right now. What I propose to _guide the discussion_ is this:
1. What information or products do you want from what events to be made public? Are these event properties, such as FAR and SNR, are there skymaps or other data products?
2. Are these G-event properties coming from G-events that are not already part of a superevent's event relationship? In other words, from outside events that are not part of a superevent's event list? If the superevent and event are already linked, then from a technical standpoint, the superevent's landing page and API call can return the information to the public. This is a policy limitation, not a technical limitation. I do not dictate policy.
3. For data products such as skymaps, omega scans, and whatnot, the files are attached to log message objects in the database. Currently, the log message objects are linked via a one-to-one relationship in the database to either a superevent or a g-event.
Elaborating on the last point: concerning data products and files. What I propose from a technical standpoint would be expanding the log-message (and by extension, file product) relationship to include a many-to-many relationship. What this would mean is, a log message and file would be created for a g-event. That's relationship number one. A superevent manager process would determine if that product should be displayed on the superevent page. A new log message is created for the superevent, and that log message would have a linked relationship with the g-event's log message. The set of tags that are usually applied to a superevent's log message (`skyloc`, `public`, etc) are applied to the inherited log message, then boom, it's public.
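The proposed inheritance can be sketched with dataclasses. All names here are illustrative only; the real models live in `gracedb/superevents/models.py` and `gracedb/events/models.py`, and in Django the link would be a `ManyToManyField` rather than a list:

```python
from dataclasses import dataclass, field

@dataclass
class LogMessage:
    """Toy stand-in for a GraceDB log message with an attached file."""
    text: str
    filename: str = ""
    tags: set = field(default_factory=set)
    # Proposed link back to the originating g-event log message(s).
    linked: list = field(default_factory=list)

# 1. A pipeline uploads a sky map to a g-event (relationship number one).
gevent_log = LogMessage(text="sky map upload", filename="bayestar.fits.gz")

# 2. A superevent manager process decides the product belongs on the
#    superevent page and creates a superevent log message linked to it.
superevent_log = LogMessage(text="sky map upload",
                            filename="bayestar.fits.gz",
                            linked=[gevent_log])

# 3. Applying the usual tags to the inherited message makes it public.
superevent_log.tags.update({"sky_loc", "public"})
```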
I think this approach is a way to start and guide the discussion. I believe it has the advantage of:
1. Allowing superevent managers to selectively choose what information and what products are inherited from what g-events.
2. The mechanism for making data products public is unchanged.
3. Adding data products to a superevent (not just public products) is reduced to one API call instead of the convoluted data-transfer bonanza that took place in O3. This is analogous to the "server-side copy" that was discussed in O3b but never brought to fruition.
4. The existing superevent pages, log messages, public views, and event relationships will remain unchanged. Unless a superevent manager changes them.
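The single-call workflow in point 3 might look like the following on the client side. The endpoint and field names are entirely hypothetical; no such call exists in gracedb-client today, so only the request construction is shown:

```python
def inherit_request(superevent_id, gevent_id, log_number, tags):
    """Build a (hypothetical) request asking the server to copy a
    g-event log message onto its superevent, tags included."""
    url = f"/api/superevents/{superevent_id}/logs/"
    payload = {
        "source_event": gevent_id,
        "source_log": log_number,
        "tagname": sorted(tags),
    }
    return url, payload

url, payload = inherit_request("S200114f", "G330564", 12,
                               {"sky_loc", "public"})
print(url)  # /api/superevents/S200114f/logs/
```

One POST like this would replace the O3-era download-then-reupload round trip, since the server already holds the file.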
I see this as a sound technical approach, but I am open to other suggestions. From a policy standpoint, it is up to @erik-katsavounidis and @shaon.ghosh to determine what information from what events and at what time should be linked to a superevent.

O4 CBC Improvements
https://git.ligo.org/computing/gracedb/server/-/issues/207
GraceDB considerations before Oct. 2021 MDC
2021-10-05T01:37:24Z Alexander Pace

I would like to be on the same page and tie up some loose ends before October 2021's MDC event. Specifically, please confirm that pipelines and GWCelery will be using the `gracedb-playground` and `lvalert-playground` infrastructure.
Additionally, could each pipeline lead please confirm which `search`es they will be using for event uploads? I will then confirm that the appropriate LVAlert nodes are in place to send messages.
Thanks. I'll add more to this ticket as items come up.
`CWB`, `gstlal`, `MBTAOnline`, `oLIB`, `pycbc`, `spiir`