- Oct 13, 2022

Fixes #469.

Naresh Adhikari authored

Brandon Piotrzkowski authored

Deep Chatterjee authored

See userguide!134, userguide!138.

Cody Messick authored

Cody Messick authored
to specify payload suffix instead of hop model.

Leo P. Singer authored
The Online_EMFollow queue is not currently configured to accept jobs requiring 16 GB of memory.

Leo P. Singer authored
- Oct 12, 2022

Cody Messick authored
earlywarning_preliminary_alert as chain to fix bug described in #464.

Cody Messick authored
This reverts commit bd92a718.

Leo P. Singer authored
The search string must encompass hosts kb-0, kb-1, etc.

Leo P. Singer authored
See #457.

Cody Messick authored
Closes #460.

Leo P. Singer authored
- Oct 08, 2022

Some of the OpenMP workers are being held due to memory use:

```
(igwn-py39-20220317) [emfollow-playground@emfollow-playground ~]$ condor_q -analyze 39552.1

-- Schedd: emfollow-playground.ligo.caltech.edu : <10.14.150.10:9618?...
The Requirements expression for job 39552.001 is

    (OpSysAndVer == "Rocky8") && ((TARGET.Online_EMFollow is true)) &&
    (TARGET.Arch == "X86_64") && (TARGET.OpSys == "LINUX") &&
    (TARGET.Disk >= RequestDisk) && (TARGET.Memory >= RequestMemory) &&
    (TARGET.Cpus >= RequestCpus) &&
    ((TARGET.FileSystemDomain == MY.FileSystemDomain) || (TARGET.HasFileTransfer))

39552.001: Job is held.
Hold reason: Error from slot2@node744.cluster.ldas.cit: Job has gone over
memory limit of 8192 megabytes. Peak usage: 7827 megabytes.
Last successful match: Fri Oct 7 10:13:36 2022

39552.001: Run analysis summary ignoring user priority. Of 3393 machines,
    3329 are rejected by your job's requirements
       0 reject your job because of their own requirements
      13 match and are already running your jobs
       0 match but are serving other users
      51 are able to run your job
```
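Given the hold reason above (a 7827 MB peak against an 8192 MB limit), one remedy is to raise the job's memory request in its HTCondor submit description. A minimal sketch follows; the values and the second attribute line are illustrative assumptions, not the project's actual submit file:

```
# Illustrative submit-description fragment; values are assumptions.
# request_memory is in MB by default; give headroom above the 7827 MB peak.
request_memory = 10240
requirements = (OpSysAndVer == "Rocky8") && (TARGET.Online_EMFollow is true)
```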
Deep Chatterjee authored
- Oct 07, 2022

Cody Messick authored

Brandon Piotrzkowski authored

Leo P. Singer authored
Kafka has a default max message size of 1 MB. The base64-encoded sky maps are just barely too large to fit. Enabling Zstandard compression brings the message size below Kafka's broker-side default limits. However, there is also a producer client-side config called `message.max.bytes` that is applied _before_ compression. GCN will require the producer to set the following two client-side configs:

```
compression.type = zstd
message.max.bytes = 1024 * 1024 * 2
```

These options should be coming in the next release of adc-streaming with the following PRs:

* https://github.com/astronomy-commons/adc-streaming/pull/61
* https://github.com/astronomy-commons/adc-streaming/pull/62

Until then, we can set the configs manually by monkeypatching the `adc.producer.ProducerConfig` class. Fixes #459.
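As a rough illustration of the two client-side settings above: the sketch below applies them to a plain librdkafka-style config dict. The helper name and the dict-based representation are assumptions for illustration only; the actual fix monkeypatches `adc.producer.ProducerConfig` rather than building a dict directly.

```python
def with_gcn_producer_overrides(base_config):
    """Hypothetical helper: return a copy of a librdkafka-style producer
    config dict with the two client-side settings GCN requires."""
    config = dict(base_config)  # copy so the caller's dict is not mutated
    config["compression.type"] = "zstd"            # Zstandard compression
    config["message.max.bytes"] = 1024 * 1024 * 2  # 2 MiB cap, checked before compression
    return config


# Example usage with a placeholder broker address:
config = with_gcn_producer_overrides({"bootstrap.servers": "localhost:9092"})
```

Because `message.max.bytes` is enforced on the uncompressed payload, raising it to 2 MiB lets the oversized sky maps through the client-side check, while Zstandard compression keeps the bytes on the wire under the broker's default limit.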
Leo P. Singer authored

Cody Messick authored
- Oct 05, 2022

Cody Messick authored
from releases again

Cody Messick authored
notice payload being sent to GCN kafka

Cody Messick authored
list_topics function

Leo P. Singer authored
Fixes #426.

Leo P. Singer authored
Fixes #454.
- Oct 04, 2022

Deep Chatterjee authored

Leo P. Singer authored
- Oct 03, 2022

Brandon Piotrzkowski authored

Deep Chatterjee authored
- Sep 30, 2022

function

Cody Messick authored

* Changed intersphinx URLs
* Changed syntax for extlinks in Sphinx 5
* Undefined language is now a warning
- Sep 29, 2022

Leo P. Singer authored