IFOModel comparison for test suite
In light of #23 (closed), perhaps we should have the test suite kick up a fuss if the pygwinc aLIGO.yaml and matgwinc IFOModel.m don't match (within some suitable equivalence class).
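For illustration, a settings-level check might look roughly like the sketch below. This is only a sketch, not anything that exists in the repo: it assumes the matgwinc IFOModel struct has first been evaluated in MATLAB and saved to a `.mat` file, and the file paths, field layout, and tolerance are all placeholders.

```python
# Hedged sketch: compare scalar parameters from pygwinc's aLIGO.yaml against a
# matgwinc IFOModel struct exported to IFOModel.mat (file names are assumptions).
import numpy as np
import yaml
from scipy.io import loadmat


def mat_to_dict(obj):
    """Recursively convert a scipy mat_struct into a plain nested dict."""
    if hasattr(obj, '_fieldnames'):
        return {name: mat_to_dict(getattr(obj, name)) for name in obj._fieldnames}
    return obj


def flatten(d, prefix=''):
    """Flatten a nested dict into {'a.b.c': value} pairs."""
    out = {}
    for key, val in d.items():
        name = prefix + key
        if isinstance(val, dict):
            out.update(flatten(val, name + '.'))
        else:
            out[name] = val
    return out


def test_aligo_settings_match():
    # path to the pygwinc parameter file is a placeholder
    with open('gwinc/ifo/aLIGO.yaml') as f:
        pyg = flatten(yaml.safe_load(f))
    mat = loadmat('IFOModel.mat', squeeze_me=True, struct_as_record=False)['ifo']
    matg = flatten(mat_to_dict(mat))
    # only compare scalar numeric parameters that exist on both sides,
    # under the naive assumption that field names and units line up
    for name, value in pyg.items():
        if name in matg and isinstance(value, (int, float)):
            assert np.isclose(value, matg[name], rtol=1e-6), name
```

Whatever the details, this kind of check needs some mapping between the two parameter layouts, which is exactly the conversion the curve-level approach below avoids.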
Would an alternative be to have the aLIGO, A+, and any other test suites compare noise curves rather than settings: the curve matgwinc generates from the correspondingly named IFOModel.m against the curve pygwinc generates from the YAML?
This requires both the settings to be correct and the code to be equivalent (or a coincidence where the settings diverge but the code compensates), but it does not require converting between the pygwinc and matgwinc settings formats.
I guess what I'm trying to say is that the most appropriate equivalence class is "those settings which generate equivalent noise curves for their respective codes".
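To make that concrete, a curve-level test could look roughly like the sketch below. It is only a sketch under assumptions: the matgwinc reference file, its column layout, and the log-space tolerance are invented, and it assumes pygwinc's `load_budget()`/`run()` interface and the `trace.psd` attribute.

```python
# Hedged sketch: compare a pygwinc-computed aLIGO curve against a matgwinc
# reference curve exported to a text file (hypothetical file name).
import numpy as np
import gwinc


def test_aligo_curve_matches_matgwinc():
    # two columns assumed: frequency [Hz] and total strain noise PSD [1/Hz]
    ref = np.loadtxt('matgwinc_aLIGO_total.txt')
    freq, ref_psd = ref[:, 0], ref[:, 1]
    # pygwinc curve evaluated on the same frequency vector
    budget = gwinc.load_budget('aLIGO', freq=freq)
    trace = budget.run()
    # compare in log space so a few loud bins can't hide a broadband offset
    assert np.allclose(np.log10(trace.psd), np.log10(ref_psd), atol=0.01)
```

The regenerated matgwinc curve would need to be refreshed whenever IFOModel.m changes, but the test itself never has to translate between the two settings formats.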
Our test suite is no longer comparing against matgwinc, so I'm just going to close this.