Fstat on-the-fly injections for transients: bandwidth issue
There's an old Redmine issue I never resolved: https://bugs.ligo.org/redmine/issues/5574
The phenomenon is weird F-stat results (can be either losses or gains) from on-the-fly "CFSv2 --injectionSources" injections compared against a "MFDv5 | CFSv2" pipeline, when doing transient injections whose start time does not line up with an SFT timestamp.
My first hunch back then was a bandwidth issue, but the timestamps dependence then led me to believe that couldn't be (all of) it.
However, this has now cropped up for me again, with the interesting observation that the size of the differences depends on the search band, again pointing to an issue with the internally used bandwidth.
I still haven't tracked down what actually happens deep inside XLALGeneratePulsarSignal(). Most likely it is some leakage effect due to the transient turning on abruptly, which might require a much smarter solution to fully resolve.
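For reference, the basic leakage mechanism is easy to demonstrate outside of LALSuite: a sinusoid that switches on abruptly mid-segment spreads power well beyond the single bin that a continuous, bin-aligned sinusoid occupies. A toy numpy sketch (all names and numbers here are mine and purely illustrative, nothing LALSuite-specific):

```python
import numpy as np

fs = 256.0           # sampling rate [Hz]
T = 64.0             # segment duration [s]
t = np.arange(0, T, 1.0 / fs)
f0 = 50.0            # signal frequency, chosen to sit exactly on an FFT bin

# continuous sinusoid over the full segment
h_cont = np.sin(2 * np.pi * f0 * t)
# transient: switches on abruptly halfway through (rectangular window)
h_trans = np.where(t >= T / 2, h_cont, 0.0)

def band_power_fraction(h, f_lo, f_hi):
    """Fraction of total spectral power inside [f_lo, f_hi]."""
    spec = np.abs(np.fft.rfft(h)) ** 2
    freqs = np.fft.rfftfreq(len(h), 1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[mask].sum() / spec.sum()

# power within +-0.5 Hz of f0
print(band_power_fraction(h_cont, f0 - 0.5, f0 + 0.5))   # essentially all power in band
print(band_power_fraction(h_trans, f0 - 0.5, f0 + 0.5))  # a visible fraction leaks outside
```

So a generation band sized for the continuous-wave case will miss some of a transient's leaked power, which at least seems consistent with the band-dependence described above.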
But I now know that I can make every example case I've constructed so far converge by passing a wider band to the one-level-up XLALCWMakeFakeData(). In particular, I have a hacky patch on a branch that adds user options for this to CFSv2; XLALCreateFstatInput() then first loads that wider band from noiseSFTs, simulates the signal over the wider band, and only crops to the internally estimated "covering frequencies" after noise and signal have been added.
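As a toy model of why generating wide and cropping late helps (purely illustrative numpy, not the actual XLALGeneratePulsarSignal() internals, which I still haven't traced): sample an abruptly-starting heterodyned tone at several sampling rates, standing in for different generation bandwidths, and compare the in-band spectrum against a much wider-band reference. Both the aliased out-of-band leakage and the start-time discretisation error depend on the stand-in bandwidth:

```python
import numpy as np

def inband_spectrum(fs, T, t0, band, df=0.1):
    """Toy stand-in for band-limited signal generation: a complex tone at
    heterodyned frequency offset df that switches on abruptly at t0,
    sampled at rate fs; returns its spectrum restricted to |f| <= band."""
    n = int(T * fs)
    t = np.arange(n) / fs
    x = np.where(t >= t0, np.exp(2j * np.pi * df * t), 0.0)
    spec = np.fft.fft(x) / fs           # dt-normalized, so rates are comparable
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    mask = np.abs(freqs) <= band
    order = np.argsort(freqs[mask])     # same bin grid (spacing 1/T) for every fs
    return spec[mask][order]

T, t0, band = 64.0, 13.7, 1.0           # start time NOT on any sampling grid
s_ref = inband_spectrum(1024.0, T, t0, band)   # high-rate "truth"
errs = [np.max(np.abs(inband_spectrum(fs, T, t0, band) - s_ref))
        / np.max(np.abs(s_ref))
        for fs in (4.0, 16.0, 64.0)]
print(errs)   # relative in-band error for each stand-in generation bandwidth
```

The in-band error shrinks as the stand-in generation band widens, which at least qualitatively matches the observed convergence when a wider band is passed down to XLALCWMakeFakeData().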
I clearly need to investigate this further, but I should document what I've got so far.
cc @karl-wette