Related to the lack of a `scale` command in Finesse 3, as highlighted in #167, we discussed the idea of having real units associated with Finesse 3 output.
Units are obviously important for Finesse output, especially for plotting, but also to help with error checking when performing operations on results (multiplying with other results, for instance). This will surely also be useful for post-processing of Finesse results, such as projecting noise around a separate control loop - for example, taking some quantum noise on a photodetector (e.g. W/sqrt(Hz)) and calculating its equivalent actuator noise (e.g. N/sqrt(Hz)) by multiplying by some N/W scaling. (Ideally Finesse would handle all of this inside the matrix, but we surely won't cover all bases and someone will eventually want to scale the data themselves.)
There was a discussion a long time ago that is partially documented on the wiki. Let's use this issue to track this feature from now on.
Packages providing units
There are a few existing examples of projects providing a mechanism to handle units of data. Feel free to update...
Astropy is a large package of utilities for astronomy, handling units (and unit conversion), display, plotting, I/O, etc. There are quite a lot of useful tools in this package we could use - such as its tables (a replacement for tabulate?) or its handling of uncertainties (hah, could we use this to handle noise in simulations?).
GWpy uses Astropy units in its core `Array` class, which subclasses `astropy.units.Quantity`. All operations on arrays via dunder methods like `__mul__` in GWpy have to be done with units, so `x * 2` would have to be `x * 2 * x.unit`. This might get a little irritating for end users who want to do something with the final data, but a workaround is to do `x.value * 2` instead, which performs the operation directly on the underlying numpy array. We might want to implement some helper methods on the data containers, like `scale(n)` which simply scales assuming the existing units, or overload the dunder methods to allow scaling without explicit units.
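For instance, such a `scale()` helper on a data container could look roughly like the sketch below. All names here are hypothetical, not real Finesse API; the unit is kept as a plain string for illustration but could equally be an astropy/pint unit object:

```python
class OutputData:
    """Hypothetical sketch of a unit-carrying data container.

    scale(n) rescales the values while keeping the existing unit, so
    users don't have to spell out units for a simple rescale.
    """

    def __init__(self, values, unit):
        self.values = list(values)
        self.unit = unit  # plain string here; could be an astropy/pint unit

    def scale(self, n):
        """Return a copy with all values multiplied by the factor n."""
        return OutputData([v * n for v in self.values], self.unit)
```

e.g. `OutputData([1.0, 2.0], "W").scale(3.0)` triples the values while keeping the W unit attached.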
The use of Astropy would allow us to use their unit functions, so we could e.g. rescale and operate on units. For instance, lengths can be converted into kilometres this way (see their docs):
```python
>>> import astropy.units as u
>>> x = 1.0 * u.parsec
>>> x.to(u.km)
<Quantity 30856775814671.914 km>
```
Units can also be projected:
```python
>>> 15.1 * u.meter / (32.0 * u.second)
<Quantity 0.471875 m / s>
>>> 3.0 * u.kilometer / (130.51 * u.meter / u.second)
<Quantity 0.022986744310780783 km s / m>
>>> (3.0 * u.kilometer / (130.51 * u.meter / u.second)).decompose()
<Quantity 22.986744310780782 s>
```
(The parentheses, though not strictly required, enforce precedence and avoid applying many successive operations to the leftmost quantity, which could be a huge array.)
However, whenever something becomes a quantity in Astropy, it subclasses from `numpy.ndarray`, so this would probably be overkill for single values.
Astropy is a moderately large dependency, so we should check whether it imports only what it needs when used as a library (unlike NetworkX).
xarray is another library based on numpy. I've not used it, but it purports to have lots of data analysis and serialisation features akin to Pandas. xarray does not enforce strict units but instead provides a labelling mechanism, so we'd have to implement our own unit-compatibility checking if we decide that's a requirement.
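If we went this route, a home-made compatibility check could be as simple as comparing the base dimensions behind each label. A rough sketch (the dimension table and function name here are invented, purely illustrative):

```python
# Map unit labels to SI base-dimension exponents, and refuse operations
# between incompatible labels. Hypothetical, not Finesse API.
BASE_DIMENSIONS = {
    "W": {"kg": 1, "m": 2, "s": -3},   # watt
    "N": {"kg": 1, "m": 1, "s": -2},   # newton
    "m": {"m": 1},
    "km": {"m": 1},                    # same dimension as metres
}

def compatible(unit_a, unit_b):
    """True if two unit labels share the same base dimensions."""
    return BASE_DIMENSIONS[unit_a] == BASE_DIMENSIONS[unit_b]
```

So `compatible("m", "km")` is true (addition is fine after rescaling) while `compatible("W", "N")` is false and should raise an error.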
Supports unit operations and works with numpy. Widely used. It seems similar to Astropy's units system, but without the overhead of subclassing everything from `numpy.ndarray` unless the quantity actually is an `ndarray`. Probably worth looking into.
I use this for Zero, not for handling output units but for parsing them. It intentionally doesn't support operations at all, but it is very lightweight, providing an object that holds units and SI scale factors (i.e. kilo, mega, etc.) and is a subclass of `float`. Very nice for displaying units. Relatively new and not widely used (yet).
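The float-subclass approach described there can be sketched roughly as below. This is a toy parser, not the library's actual API, and it deliberately ignores ambiguous inputs like a bare "m" (milli vs. metres):

```python
import re

class SIValue(float):
    """Toy sketch of a float subclass that parses an SI prefix and
    remembers its unit. All names here are made up for illustration."""

    _PREFIXES = {"n": 1e-9, "u": 1e-6, "m": 1e-3, "": 1.0,
                 "k": 1e3, "M": 1e6, "G": 1e9}

    def __new__(cls, text):
        # number, optional SI prefix, optional unit, e.g. "4.7kW"
        match = re.fullmatch(r"([0-9.eE+-]+)\s*([numkMG]?)([A-Za-z]*)",
                             text.strip())
        number, prefix, unit = match.groups()
        obj = super().__new__(cls, float(number) * cls._PREFIXES[prefix])
        obj.unit = unit
        return obj
```

The resulting object behaves like a plain float (`SIValue("4.7kW") * 2` works) while carrying its unit string for display.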
Other things to think about
Do we want to allow SI prefixes in Python constructors/setters (see #43)?
Should model nodes be aware of their own units? Then, when data is produced by a simulation, it would grab the units from the corresponding source node (for transfer functions) and sink node (for transfer functions and noise).
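As a sketch of that idea (class and function names invented, not Finesse API): a transfer function's unit would simply be the sink node's unit over the source node's unit:

```python
# Hypothetical unit-aware nodes: a detector output derives its unit
# from the source and sink nodes involved.
class Node:
    def __init__(self, name, unit):
        self.name = name
        self.unit = unit

def transfer_function_unit(source, sink):
    """Unit of a transfer function from source to sink, e.g. W/m for
    a photodetector-power response to mirror motion."""
    return f"{sink.unit}/{source.unit}"

mirror_z = Node("mirror.z", "m")   # mirror longitudinal motion
pd_out = Node("pd.out", "W")       # photodetector power output
```

Here `transfer_function_unit(mirror_z, pd_out)` gives `"W/m"`, which the output data container could then carry around.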
Do we want to remember data provenance, akin to LTPDA? Probably not such an important use case for Finesse users.
How would plotting be handled? I guess we'd have to update our plot handlers to look at the data they are given and grab its units to display on the axis labels and legend. Do we make the plotters "unit dumb", in the sense that they just display what they're given, with all rescaling happening to the data before the call to the plotter? Or should they choose an appropriate scale factor, like nanometres when the data all sit within one or two orders of magnitude of each other? What about dB? There we would want not only to rescale the y-axis but also to switch it from logarithmic to linear scaling.
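On the scale-factor question, choosing a prefix is mechanical; a plot handler might do something like the following (purely illustrative, not an existing Finesse function), picking a prefix so the plotted values fall in roughly [1, 1000):

```python
import math

# SI prefixes by decimal exponent, covering nano to giga for this sketch.
_PREFIXES = {-9: "n", -6: "u", -3: "m", 0: "", 3: "k", 6: "M", 9: "G"}

def si_prefix(max_abs_value):
    """Return (scale_factor, prefix) for axis labelling, e.g. a peak of
    2.5e-8 m gives (1e-9, "n") so the axis reads in nm."""
    if max_abs_value == 0:
        return 1.0, ""
    # Nearest lower multiple-of-three exponent, clamped to the table.
    exponent = 3 * math.floor(math.log10(abs(max_abs_value)) / 3)
    exponent = min(9, max(-9, exponent))
    return 10.0 ** exponent, _PREFIXES[exponent]
```

The handler would divide the data by the scale factor and prepend the prefix to the unit on the axis label. The dB case would need separate handling, as noted above, since it changes the axis scaling rather than just the unit.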