Many new external events no longer have detchar annotations
This might be related to #614 (closed), where the detchar check is skipped or errors out because the data haven't arrived yet. In other words, we may now have reduced our external-event latency so much that events are uploaded before the GW data arrive at CIT. @brandon.piotrzkowski, do you know of any external-event-specific changes in the past few months that might have reduced this latency, or is this just a byproduct of the other latency improvements?
```
[2023-05-09 13:44:18,899: INFO/MainProcess/MainThread] Task gwcelery.tasks.detchar.check_vectors[d3fa8719-747c-411f-ad3f-0c61e8ae6bd6] received
[2023-05-09 13:44:18,945: INFO/MainProcess/IGWNReceiverThread] calling handlers [<@task: gwcelery.tasks.external_triggers.handle_grb_igwn_alert of gwcelery at 0x1554a82b88e0>] for key 'external_fermi'
[2023-05-09 13:44:19,781: ERROR/ForkPoolWorker-61/MainThread] Task gwcelery.tasks.detchar.check_vectors[d3fa8719-747c-411f-ad3f-0c61e8ae6bd6] raised unexpected: ValueError('StateVector with span [1367700268.3125 ... 1367700271.0) does not cover requested interval [1367700268.36 ... 1367700272.4239998)')
Traceback (most recent call last):
  File "/cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py39-20221118/lib/python3.9/site-packages/celery/app/trace.py", line 451, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/sentry_sdk/integrations/celery.py", line 207, in _inner
    reraise(*exc_info)
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/sentry_sdk/_compat.py", line 60, in reraise
    raise value
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/sentry_sdk/integrations/celery.py", line 202, in _inner
    return f(*args, **kwargs)
  File "/cvmfs/oasis.opensciencegrid.org/ligo/sw/conda/envs/igwn-py39-20221118/lib/python3.9/site-packages/celery/app/trace.py", line 734, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwcelery/tasks/detchar.py", line 457, in check_vectors
    states.update(check_vector(caches[channel.split(':')[0]], channel,
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwcelery/tasks/detchar.py", line 353, in check_vector
    statevector = StateVector.read(cache, channel,
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwpy/timeseries/statevector.py", line 689, in read
    return super().read(source, *args, **kwargs)
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwpy/timeseries/core.py", line 310, in read
    return timeseries_reader(cls, source, *args, **kwargs)
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwpy/timeseries/io/core.py", line 50, in read
    return io_read_multi(joiner, cls, source, *args, **kwargs)
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwpy/io/mp.py", line 101, in read_multi
    return flatten(out)
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwpy/timeseries/io/core.py", line 81, in _join
    return _pad_series(
  File "/home/emfollow-test/.local/lib/python3.9/site-packages/gwpy/timeseries/io/core.py", line 139, in _pad_series
    raise ValueError(
ValueError: StateVector with span [1367700268.3125 ... 1367700271.0) does not cover requested interval [1367700268.36 ... 1367700272.4239998)
```
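For reference, the failure mode boils down to a simple interval-containment check. This is a sketch of the logic, not gwpy's actual implementation (the `covers` helper is illustrative): the read fails because the half-open span of available data ends before the requested interval does.

```python
# Illustrative sketch of the span check behind the ValueError above.
# `covers` is a hypothetical helper, not part of gwpy or gwcelery.

def covers(available, requested):
    """Return True if the half-open span `available` = (start, end)
    fully contains the half-open interval `requested`."""
    a_start, a_end = available
    r_start, r_end = requested
    return a_start <= r_start and r_end <= a_end

# The spans from the traceback: data ends at GPS 1367700271.0, but the
# request extends to 1367700272.4239998, so the check fails.
print(covers((1367700268.3125, 1367700271.0),
             (1367700268.36, 1367700272.4239998)))  # False
```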
So again, we are running into the issue that external events don't guarantee meaningful GW data surrounding them. If the data don't exist, there's not much we can do to fix these.
Also, while we're thinking about this, it's probably worth changing this to a constant number, since the trigger_duration doesn't really tell you how long the GRB lasted, only the length of the data segment used to analyze it. The window should instead be chosen based on how far out from the event we are interested in seeing GW data, not on properties of the GRB.
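A minimal sketch of that suggestion, assuming a hypothetical `PAD_SECONDS` constant and `query_window` helper (neither name exists in gwcelery; the value 2.0 is just a placeholder to be tuned):

```python
# Hypothetical constant-width query window, replacing the
# trigger_duration-based window. PAD_SECONDS is an assumed value,
# chosen by how far out we want GW data, not by the GRB duration.
PAD_SECONDS = 2.0

def query_window(event_time):
    """Return a (start, end) GPS interval around an external event."""
    return (event_time - PAD_SECONDS, event_time + PAD_SECONDS)

# Example: a window around the event time from the traceback above.
print(query_window(1367700270.0))  # (1367700268.0, 1367700272.0)
```

The key point is that the window no longer varies with the reported trigger_duration, so every external event requests the same, predictable span of GW data.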