
patches/additions to idq-report

Patrick Godwin requested to merge idq_report_work into master

This merge request is intended to cover two purposes:

  1. Fix idq-report so that it runs without errors after breaking changes elsewhere in the batch workflow. In particular, it updates idq-report to use factories throughout rather than config2reporter and config2classifier (see the sketch after this list).
  2. Update idq-report to use new information derived from CalibrationMap/idq-timeseries.
  • ROC plots are generated using results from idq-timeseries (using eff/FAP from CalibrationMap) rather than from quivers generated by idq-evaluate. This is the more correct way of doing things, and it also removes an sklearn dependency from plots.py.
  • Add plots for several timeseries (rank, log L) and generate p(glitch) timeseries using log L rather than uncalibrated results from classifiers. This also resolves the issue where classifiers had to be separated depending on whether or not they were trained on quivers.
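For readers unfamiliar with the factory changes mentioned in item 1, the sketch below shows the general pattern, assuming a flavor-keyed registry. The names here (REPORTERS, register, reporter_factory) are illustrative, not iDQ's actual API:

```python
# Illustrative factory pattern (names are hypothetical, not iDQ's actual API):
# a registry maps the 'flavor' string from a config section to the class that
# implements it, replacing one-off helpers like config2reporter/config2classifier.

REPORTERS = {}  # flavor name -> reporter class

def register(flavor):
    """Class decorator that records a reporter class in the registry."""
    def wrapper(cls):
        REPORTERS[flavor] = cls
        return cls
    return wrapper

def reporter_factory(flavor, *args, **kwargs):
    """Instantiate whichever reporter class is registered under this flavor."""
    try:
        cls = REPORTERS[flavor]
    except KeyError:
        raise KeyError('unknown reporter flavor: %s' % flavor)
    return cls(*args, **kwargs)
```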

A more comprehensive overhaul of idq-report, where we would handle uploading results to GraceDB, expose more detailed metrics, etc., is not within the scope of this merge request, but it would be good to get this merged so that basic results are set up. However, there are a few things I do want to add before we do this.

In particular,

  • There should be an easy way to gobble up results from round-robin batch jobs in addition to the manual workflow that idq-report already handles. Right now, idq-report takes in a rootdir and grabs the relevant directories to produce all its results. There are several ways to approach this in the round-robin workflow because results are stored within rootdir/batch-%nickname/gps-start_gps_end rather than directly under rootdir (see the sketch after this list for one way to glob these up). Should idq-report just 'do the right thing' and make it seamless for the user, or should this be specified somewhere, e.g. via a config or CLI option?
  • Changes in timeseries plotting. Right now, a single timeseries plot is produced per segment per metric, but if the segment is particularly long, that could make the plot unreadable. Maybe a limit on the length of timeseries plotted, and if a segment is too long, split it up into multiple plots? How has iDQ handled this in the past?
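As one possible answer to the first question, here's a minimal sketch of globbing up round-robin results under the layout described above. The helper name and pattern are assumptions, shown only to make the directory structure concrete, not existing iDQ code:

```python
import glob
import os

def find_stride_dirs(rootdir):
    """Collect per-stride result directories from a round-robin rootdir.

    Assumes the layout described above:
    rootdir/batch-<nickname>/<gps-start>_<gps-end>. Hypothetical helper.
    """
    pattern = os.path.join(rootdir, 'batch-*', '*_*')
    return sorted(path for path in glob.glob(pattern) if os.path.isdir(path))
```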

Update to the above:

  • I added a percentile plot to plots.py so that we can plot the latencies we measure at different stages of the pipeline. A simple way to measure latency is gps_now() - the timestamp of the data. We collect all these latency samples, and plots.percentile reads them in, measures a CDF, and makes a latency percentile plot from them (see the first sketch after this list).
  • I've decided not to change anything in the timeseries plotting for the time being. We can easily add a maximum length for plotted timeseries, but I'll leave it alone for now since I'd rather land this long-needed patch to idq-report, which finally makes meaningful plots, first.
  • I've also added a JSONReporter to push out JSON-formatted data (see the JSONReporter sketch after this list). The main reason is that many plotting libraries handle JSON fairly well, and this extends nicely if we find that Python-based plotting libraries don't meet our needs (e.g. for dynamic or streaming visualizations). There's also plotly, which now has a Python API and can take in JSON-formatted arrays when updating data.
  • Added an HTMLReport class to deal with making nicely formatted HTML reports (see the jinja2 sketch after this list). For now it uses jinja2's templating engine to make HTML pages; jinja2 is already a dependency of sphinx, so it won't introduce more unneeded dependencies. It produces a single page for the time being that just dumps the ROC curve, p(glitch), and log L plots, but it's a nice launching point that we can improve as we see fit, and it's set up to extend the summary to several cross-linked pages/tabs when we get ideas for that. The actual HTML templates live in etc/web.
  • Moved report() and the HTMLReport class into a new report.py, mimicking batch.py and friends, to keep the executables clean.
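To make the latency percentile plot concrete, here's a minimal sketch of the idea behind plots.percentile: sort the latency samples, form the empirical CDF, and plot it on percentile axes. The function name matches the description above, but the signature and styling are assumptions:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

def percentile(latencies, path):
    """Plot the empirical CDF of latency samples as a percentile curve.

    Each sample is assumed to be gps_now() - the timestamp of the data.
    Sketch only; the real plots.percentile signature may differ.
    """
    samples = np.sort(np.asarray(latencies, dtype=float))
    # empirical CDF: fraction of samples at or below each latency value
    cdf = np.arange(1, len(samples) + 1) / float(len(samples))
    fig, ax = plt.subplots()
    ax.plot(samples, 100 * cdf)
    ax.set_xlabel('latency [sec]')
    ax.set_ylabel('percentile')
    fig.savefig(path)
    plt.close(fig)
```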
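The JSONReporter boils down to serializing (time, value) series to disk as JSON so other tooling can pick them up. A minimal sketch of that idea, with an assumed interface that may not match the real class:

```python
import json
import os

class JSONReporter(object):
    """Write named (times, values) series to rootdir as JSON.

    Sketch of the idea only; the actual JSONReporter interface may differ.
    """

    def __init__(self, rootdir):
        self.rootdir = rootdir

    def report(self, name, times, values):
        path = os.path.join(self.rootdir, name + '.json')
        with open(path, 'w') as obj:
            json.dump({'name': name, 'times': list(times), 'values': list(values)}, obj)
        return path
```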
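And for HTMLReport, the jinja2 usage is essentially loading a template from a directory (e.g. etc/web) and rendering it with the figure paths. The template name and context keys below are hypothetical:

```python
from jinja2 import Environment, FileSystemLoader

class HTMLReport(object):
    """Render a single summary page from a jinja2 template.

    Sketch only: the template name and context keys are assumptions,
    not the actual layout under etc/web.
    """

    def __init__(self, template_dir):
        self.env = Environment(loader=FileSystemLoader(template_dir))

    def render(self, figures, path):
        template = self.env.get_template('report.html')  # hypothetical template name
        with open(path, 'w') as obj:
            obj.write(template.render(figures=figures))
```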
