- Jan 01, 2024
Prathamesh Joshi authored
marginalize_likelihoods_online: Write noise PDF only if more than 99% of bins have undergone first-round extinction
-
Prathamesh Joshi authored
-
Prathamesh Joshi authored
Correctly save the backup zerolag PDF in the backup dir within the analysis dir; also save the backup diststat PDF there
-
- Dec 29, 2023
Shio Sakon authored
-
- Dec 14, 2023
Prathamesh Joshi authored
-
- Dec 13, 2023
Prathamesh Joshi authored
-
- Dec 12, 2023
GstLAL CBC authored
-
- Dec 11, 2023
Zach Yarbrough authored
-
- Dec 09, 2023
Prathamesh Joshi authored
-
Prathamesh Joshi authored
-
Prathamesh Joshi authored
-
- Dec 08, 2023
Zach Yarbrough authored
-
- Dec 07, 2023
Prathamesh Joshi authored
-
Prathamesh Joshi authored
-
- Apr 07, 2023
Rebecca Ewing authored
-
- Feb 08, 2023
Rebecca Ewing authored
-
- Feb 06, 2023
Rachael Huxford authored
-
- Jun 17, 2021
Patrick Godwin authored
gstlal_inspiral_marginalize_likelihoods_online: replace the name of the file used for resetting zerolag counts with one generated by mktemp
-
- Mar 13, 2020
(cherry picked from commit 6b4b1bb2)
-
- Jan 19, 2019
- Dec 04, 2018
chad.hanna authored
-
- Nov 23, 2018
chad.hanna authored
-
- Nov 17, 2018
chad.hanna authored
-
- Sep 27, 2018
chad.hanna authored
-
chad.hanna authored
gstlal_inspiral_marginalize_likelihoods_online: FIXME: for now lower the number of samples per bin to 40,000, giving about 10 million for the O2 bank. We could use more, but this program will need to be parallelized.
-
- Sep 04, 2018
gstlal_inspiral_marginalize_likelihoods_online: up the number of samples: FIXME we need to parallelize this so that we can do more samples online
-
- May 01, 2018
Chad Hanna authored
-
- Apr 27, 2018
Chad Hanna authored
-
- Apr 20, 2018
Kipp Cannon authored
- reduce the number of distinct steps
- enable --density-estimate-zero-lag in marginalization step
-
- Apr 17, 2018
Kipp Cannon authored
-
- Dec 05, 2017
Kipp Cannon authored
-
- Nov 03, 2015
Cody Messick authored
periodically to help with debugging
-
- Nov 02, 2015
Cody Messick authored
in For loops contained in While loop to exit 1. See PR 2711 for more details
-
- Sep 11, 2015
Chad Hanna authored
gstlal_inspiral_marginalize_likelihoods_online: don't accumulate background samples since they are already drawn from a cumulative distribution
-
- Sep 09, 2015
Kipp Cannon authored
-
- Sep 05, 2015
Chad Hanna authored
bin/gstlal_inspiral_marginalize_likelihoods_online: fix startup case before a background file exists, remove extraneous rm
-
- Sep 03, 2015
Kipp Cannon authored
- the recent work to separate the zero-lag ranking statistic histograms into their own data path has left a feedback loop in this script wherein the zero-lag counts are fed back into the sum on each iteration. this patch fixes that.
-
- Aug 31, 2015
Kipp Cannon authored
- a histogram of likelihood ratios assigned to zero-lag candidates is required to implement the low-significance extinction model. this patch reorganizes the source and sink of that information to be a separate file rather than piggybacking the data on the candidate parameter PDF file supplied to the trigger generator, e.g., during online running.
- immediately, this allows the statistics uploaded to gracedb along with each candidate to contain the actual ranking statistic PDF data used to rank the event, whereas prior to this patch they contained a mix of the candidate parameter PDFs and whatever was in the zero-lag histogram data supplied on input.
- going forward, keeping distinct, unrelated pieces of information in distinct places, with different names, supplied to programs via distinct command line options will prevent confusion over what is being stored where.
- refs #2429
-
Kipp Cannon authored
- each time through the loop this script causes a relatively small number of ranking statistic samples to be generated, with the original intention being that over time the total number of samples would accumulate to something large. this script discards the samples from the previous iteration, however, and starts over again from scratch each time.
- this patch modifies the script to include the output from the previous iteration in this iteration so that the ranking statistic histogram gathers additional samples over time.
- surprisingly, the results seemed to be OK in the past, so it's possible that we don't need to do this. in that case, this is a trivial stand-alone patch that can be easily reverted at a later time if desired.
-
- Jun 03, 2015
Chad Hanna authored
-