LensingFlow Development
As discussed today (13-07-23), we would like to use a combination of Asimov and CBCFlow to set up a fully automated framework for lensing analyses. Here you can find a more complete description of what we are aiming for.
A representation of the flow is given here:
Each pipeline would have its own specific input requirements and would output a different quantity. For the low-latency pipelines, the output would be whether the event (pair) is interesting or not and should be followed up according to that pipeline. The specific inputs for each pipeline would be a subset of those listed in the grey box.
For the strong lensing part of the flow (blue, green, and pinkish boxes), we were thinking of using the following logic:
- if two low-latency pipelines (light blue in the blue box) flag the event pair as interesting, it is passed to the high-latency pipelines
- if the condition above is met, for every additional LL pipeline flagging the event pair as interesting, job priority in HL is increased by one unit
- once the first condition is met, if the observed lensing parameters match the expected values for lensed events (dark blue in the blue box), the HL job priority is increased by one unit
- finally, the high-latency pipelines should say whether the event is lensed or not.
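The escalation logic above can be sketched as a small helper. This is only an illustration of the rules as written; the function name, inputs, and the base priority value are all hypothetical, not part of any pipeline's API:

```python
def hl_priority(ll_flags, params_match):
    """Decide whether an event pair goes to high-latency (HL) analysis
    and with what job priority.

    ll_flags     -- one boolean per low-latency (LL) pipeline, True if
                    that pipeline flagged the pair as interesting
    params_match -- True if the observed lensing parameters match the
                    expected values for lensed events

    Returns (run_hl, priority). All names and the base priority of 1
    are hypothetical choices for illustration.
    """
    n_flags = sum(ll_flags)
    if n_flags < 2:
        # fewer than two LL pipelines agree: no HL follow-up
        return False, 0
    priority = 1              # base priority once two LL pipelines agree
    priority += n_flags - 2   # +1 unit per additional flagging LL pipeline
    if params_match:
        priority += 1         # +1 unit for matching lensing parameters
    return True, priority
```

For example, three flagging LL pipelines plus a parameter match would give a priority two units above the base.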
Sub-threshold pipelines (for strong lensing) will look for possible sub-threshold lensed counterparts to observed strongly-lensed events. They should add interesting triggers to CBCflow (not entirely fixed yet, lower priority).
For microlensing and millilensing, the analyses are made by a single pipeline (Gravelamps) in different configuration modes. This part of the workflow is more straightforward:
- an initial point-mass microlens run is done for each completed unlensed run
- the results of the lensed and unlensed runs are compared to determine whether the event is interesting
- if so, the additional lens-model runs and the millilensing runs are started
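The comparison step above can be sketched as an evidence-based decision. The notes do not fix the actual criterion, so the log-Bayes-factor threshold and all names below are hypothetical placeholders:

```python
import math

# Hypothetical threshold: follow up if the lensed model is favoured by
# a Bayes factor of at least 10. The real criterion is not fixed yet.
LOG_BF_THRESHOLD = math.log(10)

def start_followup_runs(log_evidence_lensed, log_evidence_unlensed):
    """Compare the point-mass microlens run against the unlensed run and
    decide whether to launch the additional lens-model and millilensing
    runs. Inputs are log-evidences from the two completed runs."""
    log_bayes_factor = log_evidence_lensed - log_evidence_unlensed
    return log_bayes_factor > LOG_BF_THRESHOLD
```

Under this sketch, a pair of runs with log-evidences differing by more than ln(10) ≈ 2.3 in favour of the lensed model would trigger the extra Gravelamps configurations.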