prior sampling slows down processing of large sample files
As just discussed with @charlie.hoy by chat: the reason it can take pesummary ages to process large sample files (i.e. with tens or hundreds of thousands of samples, as e.g. bilby likes to produce) is that after loading the file, but before doing any conversions, it draws an equal number of samples from the priors. There is no logging output making this clear. In one example I have (215k samples), just adding --disable_prior_sampling brings that initial step down from over 2 hours to 3 minutes.
Suggestions:
- add a logging line saying that it is doing this
- possibly flip the default to "don't sample the prior", with a command-line argument to activate the feature: either a binary flag --enable_prior_sampling or an integer --prior_samples
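For illustration, the proposed interface could look roughly like the following argparse sketch. The flag names are just the suggestions above, not pesummary's actual implementation; prior sampling would be off by default and only done when explicitly requested:

```python
import argparse

# Sketch of the suggested CLA (names hypothetical, not pesummary's real parser):
# prior sampling is disabled unless one of these options is given.
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument(
    "--enable_prior_sampling", action="store_true",
    help="draw as many prior samples as there are posterior samples",
)
group.add_argument(
    "--prior_samples", type=int, default=0,
    help="number of prior samples to draw (0 = skip prior sampling)",
)

args = parser.parse_args(["--prior_samples", "1000"])
print(args.prior_samples)
```

With no option given, args.prior_samples stays 0 and the expensive sampling step would simply be skipped.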