bilby issues
https://git.ligo.org/lscsoft/bilby/-/issues

---
Issue #257: Check for duplicate parameters
https://git.ligo.org/lscsoft/bilby/-/issues/257
Gregory Ashton (gregory.ashton@ligo.org), updated 2019-02-25. Milestone: Future. Assignee: Moritz Huebner.

Currently, if you pass in a prior on mass1, mass2, chirp_mass, and mass ratio, the sampler will sample in all of these parameters (it is unclear which prior is actually being used). It is ugly and unlikely to be completely safe, but we need some way to alert the user. At the basic level, a series of if statements will suffice. @eric.thrane had some ideas for common examples.

---
Issue #185: Formalise the "npoints", "nlive" etc. equivalent list
https://git.ligo.org/lscsoft/bilby/-/issues/185
Gregory Ashton (gregory.ashton@ligo.org), updated 2018-10-02.

Currently, each implemented sampler has its own way of dealing with the equivalent kwargs. As such, the list of equivalents is not the same between samplers. We should move this to the `Sampler` level and provide a universal way to handle it.

---
Issue #616: Bilby jobs hanging on reconstructing the marginalized parameters
https://git.ligo.org/lscsoft/bilby/-/issues/616
Gregory Ashton (gregory.ashton@ligo.org), updated 2022-02-25. Assignee: Gregory Ashton.

I've noticed a number of test jobs which get stuck after reconstructing the marginalized parameters; the logs look like this:
```
08:42 bilby INFO : Sampling time: 1:47:47.097487
08:42 bilby INFO : Reconstructing marginalised parameters.
08:42 bilby INFO : Using a pool with size 16 for nsamples=19747
11:54 bilby_pipe INFO : Running bilby_pipe version: 1.0.5: (CLEAN) e9a68f9 2022-02-01 14:40:55 +0000
11:54 bilby_pipe INFO : Running bilby: 1.1.5: release
```
The `.out` files show the `tqdm` progress bar getting to 100%, but then the job just seems to sit. This happens sporadically and, if you leave it long enough, it eventually completes. But that is rather wasteful.

---
Issue #396: Issue when sampling using BasicGravitationalWaveTransient
https://git.ligo.org/lscsoft/bilby/-/issues/396
Jonathan Davies, updated 2019-08-20. Milestone: Future. Assignee: Sylvia Biscoveanu.

I've been trying to modify the bilby code so that uncertainty in the form of the PSD can be folded into the analysis, i.e. multiple PSD files are loaded in and the indices of these are implemented as parameters to be fitted. The error shown below was thrown in run_sampler, as I am using BasicGravitationalWaveTransient rather than GravitationalWaveTransient in bilby/bilby/gw/likelihood.py. It gets called like this:
```python
result = bilby.run_sampler(
    likelihood, prior, outdir=outdir, label=label,
    sampler=sampler, npoints=npoints, use_ratio=False,
    injection_parameters=None,
    conversion_function=bilby.gw.conversion.generate_all_bbh_parameters,
    resume=False, n_check_point=5000)
```
It seems that the issue is that the generate_all_bbh_parameters function is expecting the likelihood to have the various marginalization flags as attributes, and it fails for the Basic version which doesn't have those options.
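One possible defensive fix (a sketch only, not bilby's actual code) would be for the conversion step to read these flags with `getattr` and a default of `False`, so that likelihood classes without them are handled gracefully:

```python
# Sketch only: read marginalization flags with a default of False so a
# likelihood without these attributes (e.g. BasicGravitationalWaveTransient)
# does not raise an AttributeError during posterior conversion.
class BasicLikelihood:
    """Stand-in for a likelihood class without marginalization flags."""

likelihood = BasicLikelihood()
flags = [
    getattr(likelihood, name, False)
    for name in ("phase_marginalization",
                 "distance_marginalization",
                 "time_marginalization")
]
needs_reconstruction = any(flags)  # False here, so reconstruction is skipped
```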
```pytb
Traceback (most recent call last):
  File "GW150914_advanced.py", line 130, in <module>
    conversion_function=bilby.gw.conversion.generate_all_bbh_parameters, resume=True, n_check_point=5000)
  File "/home/jonathan.davies/bilby/bilby/core/sampler/__init__.py", line 203, in run_sampler
    conversion_function=conversion_function)
  File "/home/jonathan.davies/bilby/bilby/core/result.py", line 1072, in samples_to_posterior
    data_frame = conversion_function(data_frame, likelihood, priors)
  File "/home/jonathan.davies/bilby/bilby/gw/conversion.py", line 703, in generate_all_bbh_parameters
    likelihood=likelihood, priors=priors)
  File "/home/jonathan.davies/bilby/bilby/gw/conversion.py", line 667, in _generate_all_cbc_parameters
    samples=output_sample, likelihood=likelihood)
  File "/home/jonathan.davies/bilby/bilby/gw/conversion.py", line 1015, in generate_posterior_samples_from_marginalized_likelihood
    if not any([likelihood.phase_marginalization,
AttributeError: 'BasicGravitationalWaveTransient' object has no attribute 'phase_marginalization'
```

---
Issue #359: Calibration names/prefix mismatch
https://git.ligo.org/lscsoft/bilby/-/issues/359
Gregory Ashton (gregory.ashton@ligo.org), updated 2019-05-27. Assignee: Gregory Ashton.

For reading in calibration files, there is some confusion. The calibration model `CubicSpline` takes `prefix`, but the `bilby.gw.prior.CalibrationPriorDict` takes `label`. To get this to work, I think you need `prefix="recalib_{}_".format(ifo.name)` and `label=ifo.name`. The `recalib` is hard-coded.
I think we should ideally just have them both take the same argument since they mean the same thing.

---
Issue #320: Add checking to the ROQ usage
https://git.ligo.org/lscsoft/bilby/-/issues/320
Gregory Ashton (gregory.ashton@ligo.org), updated 2019-11-21. Milestone: 1.0.0. Assignees: Carl-Johan Haster, Michael Puerrer.

The ROQ basis comes with a `params.dat` file specifying things like the minimum and maximum chirp mass and sampling frequency. We should add a check to the `ROQLikelihood` that makes sure the given prior/sampling frequency is suitable.
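Such a check might look like the following sketch (the `params.dat` key names and the tuple-based prior bounds are illustrative assumptions, not bilby's API):

```python
def check_roq_prior(prior, params):
    """Sketch of the proposed ROQ bounds check.

    prior: dict mapping parameter name -> (minimum, maximum); illustrative.
    params: bounds read from the basis params.dat (key names assumed).
    """
    basis_lo, basis_hi = params["chirp_mass_min"], params["chirp_mass_max"]
    if "chirp_mass" not in prior:
        # No prior given: adopt the basis bounds.
        prior["chirp_mass"] = (basis_lo, basis_hi)
        return prior
    lo, hi = prior["chirp_mass"]
    if lo >= basis_lo and hi <= basis_hi:
        print("prior okay")  # user prior is narrower: continue
        return prior
    raise ValueError("Prior is wider than the ROQ basis allows")
```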
Chatting with @eric.thrane and @cjhaster, it seems to make sense that: if the user-specified prior is narrower, we should print a statement saying "prior okay" and continue; if the prior is not given, we should set the bounds correctly; and if the user-specified prior is larger than the allowed range, `bilby` should raise an Error.

---
Issue #319: Make bilby (or bilby_pipe) default PSD creation equivalent to LALInference
https://git.ligo.org/lscsoft/bilby/-/issues/319
Gregory Ashton (gregory.ashton@ligo.org), updated 2019-03-05. Milestone: 1.0.0. Assignees: Vivien Raymond, Rhys Green, Charlie Hoy.

We want to (by default) recreate what happens in LALInference for the PSD creation. This might need a combination of changes between `bilby` and `bilby_pipe`.
### Notes
In !366 we allowed the PSD to overlap the analysis segment. This takes the PSD duration of data, cuts out the analysis segment, then creates a PSD from the remaining data. As such, if one requests 100 s of data for the PSD, one actually gets to use (100 - segment_length) s of data.
We could add a catch to actually fetch psd_duration + segment_length worth of data, which then gets reduced to the proper amount.
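The proposed catch amounts to the following arithmetic (a sketch with illustrative names, not bilby's code):

```python
def duration_to_request(psd_duration, segment_length):
    # Proposed catch: pad the request so that, after the analysis segment
    # is cut out, the full psd_duration remains for PSD estimation.
    return psd_duration + segment_length

def usable_psd_duration(requested, segment_length):
    # Current behaviour: the analysis segment is removed from the
    # requested stretch, so only part of it contributes to the PSD.
    return requested - segment_length
```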
---
Issue #244: import bilby fails out-of-the-box due to matplotlib backend issues
https://git.ligo.org/lscsoft/bilby/-/issues/244
Duncan Macleod (duncan.macleod@ligo.org), updated 2019-04-23. Milestone: Future.

To reproduce (on macOS):
```shell
python -m virtualenv bilbyenv
./bilbyenv/bin/python -m pip install bilby
./bilbyenv/bin/python -c "import bilby"
```
The error presents as
```pytb
/Users/duncan/tmp/./bilbyenv/bin/../lib/python3.6/site.py:165: DeprecationWarning: 'U' mode is deprecated
f = open(fullname, "rU")
13:10 bilby INFO : Running bilby version: 0.3.3:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/duncan/tmp/bilbyenv/lib/python3.6/site-packages/bilby/__init__.py", line 21, in <module>
from . import core, gw, hyper
File "/Users/duncan/tmp/bilbyenv/lib/python3.6/site-packages/bilby/core/__init__.py", line 2, in <module>
from . import likelihood, prior, result, sampler, utils
File "/Users/duncan/tmp/bilbyenv/lib/python3.6/site-packages/bilby/core/result.py", line 6, in <module>
import corner
File "/Users/duncan/tmp/bilbyenv/lib/python3.6/site-packages/corner/__init__.py", line 37, in <module>
from .corner import corner, hist2d, quantile
File "/Users/duncan/tmp/bilbyenv/lib/python3.6/site-packages/corner/corner.py", line 7, in <module>
import matplotlib.pyplot as pl
File "/Users/duncan/tmp/bilbyenv/lib/python3.6/site-packages/matplotlib/pyplot.py", line 2374, in <module>
switch_backend(rcParams["backend"])
...
File "/Users/duncan/tmp/bilbyenv/lib/python3.6/site-packages/matplotlib/backends/qt_compat.py", line 165, in <module>
raise ImportError("Failed to import any qt binding")
ImportError: Failed to import any qt binding
```
I think the fix is to update `bilby.core.result` to call `matplotlib.use('agg')` _before_ `import matplotlib.pyplot`:
```diff
diff --git a/bilby/core/result.py b/bilby/core/result.py
index df4374b..ac55d84 100644
--- a/bilby/core/result.py
+++ b/bilby/core/result.py
@@ -1,12 +1,16 @@
import os
from distutils.version import LooseVersion
+from collections import OrderedDict, namedtuple
+
import numpy as np
import deepdish
import pandas as pd
-import corner
+
import matplotlib
+matplotlib.use('agg')
+
+import corner
import matplotlib.pyplot as plt
-from collections import OrderedDict, namedtuple
from . import utils
from .utils import logger, infer_parameters_from_function
```
I can happily post this patch as a merge request.

---
Issue #197: default `plot_corner()` is broken
https://git.ligo.org/lscsoft/bilby/-/issues/197
Paul Lasky, updated 2018-10-01. Milestone: 0.3.

The refactoring in MR !202 seems to have broken the `plot_corner()` command when no arguments are passed into it. This is functionality that we still want/need, i.e., for `plot_corner()` to just work out of the box with a simple default to plot everything.

---
Issue #191: Pymultinest `outputfiles_basename` too long
https://git.ligo.org/lscsoft/bilby/-/issues/191
Sylvia Biscoveanu, updated 2018-10-02.

I'm getting this error associated with the usual nuisance of multinest cutting off the filenames if the `outputfiles_basename` is too long:
```
Traceback (most recent call last):
File "/home/sylvia.biscoveanu/GRB_git/call_tupak_grb.py", line 83, in <module>
main(int(sys.argv[1]), sys.argv[2], sys.argv[3], sys.argv[4])
File "/home/sylvia.biscoveanu/GRB_git/call_tupak_grb.py", line 72, in main
evidence_tolerance=0.05, outdir=outdir, label=label, clean=True)
File "/home/sylvia.biscoveanu/.local/lib/python2.7/site-packages/tupak-0.2.2-py2.7.egg/tupak/core/sampler/__init__.py", line 146, in run_sampler
result = sampler._run_external_sampler()
File "/home/sylvia.biscoveanu/.local/lib/python2.7/site-packages/tupak-0.2.2-py2.7.egg/tupak/core/sampler/pymultinest.py", line 61, in _run_external_sampler
n_dims=self.ndim, **self.kwargs)
File "/home/sylvia.biscoveanu/.local/lib/python2.7/site-packages/pymultinest/solve.py", line 74, in solve
samples = analyzer.get_equal_weighted_posterior()[:,:-1]
File "/home/sylvia.biscoveanu/.local/lib/python2.7/site-packages/pymultinest/analyse.py", line 66, in get_equal_weighted_posterior
self.equal_weighted_posterior = loadtxt2d(self.equal_weighted_file)
File "/home/sylvia.biscoveanu/.local/lib/python2.7/site-packages/pymultinest/analyse.py", line 10, in loadtxt2d
return numpy.loadtxt(intext)
File "/home/sylvia.biscoveanu/.local/lib/python2.7/site-packages/numpy/lib/npyio.py", line 896, in loadtxt
fh = iter(open(fname, 'U'))
IOError: [Errno 2] No such file or directory: u'/home/sylvia.biscoveanu/GRB_git/outdir/pymultinest_sgrb_0_sample_prior_topHatUniformElse/post_equal_weights.dat'
```
Can we workshop some way around this? I'm running on condor so I need to specify the full file path. I tried changing the call in `_run_external_sampler` to `pymultinest.run` instead of `pymultinest.solve` but I'm getting an error with the prior, which confuses me because the pymultinest source code seems to invoke the prior in the same way between the two functions.
```
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 314, in 'calling callback function'
File "/home/sylvia.biscoveanu/.local/lib/python2.7/site-packages/pymultinest/run.py", line 210, in loglike
Prior(cube, ndim, nparams)
TypeError: prior_transform() takes exactly 2 arguments (4 given)
```
If we fixed this error with the prior, we could write our own version of the multinest analyzer to load the posterior samples and the evidences. There has to be a way around this... it's frustrating not to be able to use a sampler because the file paths are too long. I think @colm.talbot was having issues with this earlier as well.

---
Issue #126: `tupak.utils.core.create_time_series` bug
https://git.ligo.org/lscsoft/bilby/-/issues/126
Moritz Huebner, updated 2018-06-27. Milestone: 0.3. Assignee: Moritz Huebner.

@paul-lasky, @gregory.ashton, @colm.talbot
The function returns incorrect time series for certain input values.
For example:
```
ts = tupak.utils.create_time_series(sampling_frequency=1000, duration=0.51, starting_time=-0.5)
```
returns
```
[-0.5 -0.499 -0.498 ... 0.507 0.508 0.509]
```
even though it should end with 0.01.
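For reference, a fixed implementation might look like this sketch (assuming an endpoint-exclusive convention, so the last sample falls one step before `starting_time + duration`):

```python
import numpy as np

def create_time_series(sampling_frequency, duration, starting_time=0.0):
    # Compute the number of samples explicitly so floating-point rounding
    # cannot extend the series past the requested duration.
    n_samples = int(np.round(duration * sampling_frequency))
    return starting_time + np.arange(n_samples) / sampling_frequency

# 510 samples from -0.5 up to (but not including) 0.01
ts = create_time_series(1000, 0.51, starting_time=-0.5)
```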
I will create a merge request to fix this today.

---
Issue #726: Linestyle bug in bilby.core.result.plot_multiple
https://git.ligo.org/lscsoft/bilby/-/issues/726
Michael Williams (michael.williams@ligo.org), updated 2024-03-15. Assignee: Ben Patterson.

When calling `bilby.core.result.plot_multiple` with multiple results, the following warning is printed:
```
/mnt/lustre/shared_conda/envs/mjwill/bilby/lib/python3.10/site-packages/corner/core.py:795: UserWarning: The following kwargs were not used by contour: 'linestyle'
ax.contour(X2, Y2, H2.T, V, **contour_kwargs)
```
This appears to happen because of the `linestyle` keyword argument that is added [on this line](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/result.py?ref_type=heads#L2060).
Based on the [corner documentation](https://corner.readthedocs.io/en/latest/api/#corner.hist2d), `contour_kwargs` is passed to the `contour` method, which I think refers to `matplotlib.pyplot.contour`. Looking at the [documentation for that function](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.contour.html), I think it should be `linestyles` instead of `linestyle`, but I have not tested this.

---
Issue #724: Missing import in bilby.gw.detector.load_data_from_cache_file
https://git.ligo.org/lscsoft/bilby/-/issues/724
Howard Deshong, updated 2024-03-15. Assignee: Sama Al-Shammari.

`bilby.gw.detector.load_data_from_cache_file` fails since it appears to be missing an `import lal`:
```
python bilby_run.py
Traceback (most recent call last):
File "/Users/howard/Desktop/output/2024-02-12/gw150914_bilby/bilby_run.py", line 49, in <module>
interferometer = bilby.gw.detector.load_data_from_cache_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/bilby/lib/python3.11/site-packages/bilby/gw/detector/__init__.py", line 254, in load_data_from_cache_file
cache = lal.utils.cache.CacheEntry(line)
^^^
NameError: name 'lal' is not defined
```
The line mentioned in the traceback is here: https://git.ligo.org/lscsoft/bilby/-/blob/f8b004a80fa2f735b171b144154d5563f569abdf/bilby/gw/detector/__init__.py#L254
Sure enough, that file does not `import lal`, so I believe adding it would fix this issue.

---
Issue #722: ptemcee parallelisation issue
https://git.ligo.org/lscsoft/bilby/-/issues/722
Colm Talbot (colm.talbot@ligo.org), updated 2024-01-29.

A downstream user reported an issue with ptemcee on macOS with parallelization.
I found that it works with the multiprocessing start method set to `fork`, which is a temporary workaround.
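That workaround can be applied from a user script before any pools are created; it is a stopgap only, since `fork` can be unsafe with some threaded libraries on macOS:

```python
import multiprocessing

if __name__ == "__main__":
    # Temporary workaround (macOS): force the "fork" start method before
    # the sampler creates its multiprocessing pool. Not a real fix.
    multiprocessing.set_start_method("fork", force=True)
```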
As a fix, we need to make sure the [`_sampling_convenience_dump`](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/sampler/base_sampler.py#L47) works with the [`LikePriorEvaluator`](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/sampler/ptemcee.py#L1406).
An initial solution that seemed to work was to update the `LikePriorEvaluator` as
```python
def __init__(self):
from .base_sampler import _sampling_convenience_dump
self.periodic_set = False
self.dump = _sampling_convenience_dump
```
and then using `self.dump` rather than `_sampling_convenience_dump` throughout.
This might cause issues if `emcee/kombine/zeus` pass the `LikePriorEvaluator` through the multiprocessing pool.
See https://github.com/bandframework/Taweret/issues/53

---
Issue #715: BUG: Waveform kwargs are silently ignored
https://git.ligo.org/lscsoft/bilby/-/issues/715
Rodrigo Tenorio, updated 2023-11-13.

Slightly related to #681.
`lal_binary_black_hole` (and similar functions in bilby/gw/source.py) *silently ignore* waveform arguments
if they are written using the wrong convention.
For example, if I were to use
```
waveform_arguments={"f_min": 10, "f_ref": 10}
```
the `lal_binary_black_hole` function would ignore it, as the actual kwargs are `minimum_frequency`
and `reference_frequency`. As a result, the inputs would become whatever default values are coded
inside the function (in this case 20 and 50).
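A stricter merge, sketched below with illustrative names (this is not bilby's code), would reject unknown keys instead of silently ignoring them:

```python
def merge_waveform_arguments(defaults, provided):
    # Unlike a plain dict.update, refuse keys that are not already known,
    # so a misspelled argument (e.g. "f_min" instead of "minimum_frequency")
    # raises instead of silently falling back to the default value.
    unknown = set(provided) - set(defaults)
    if unknown:
        raise KeyError(f"Unknown waveform arguments: {sorted(unknown)}")
    merged = dict(defaults)
    merged.update(provided)
    return merged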
The reason is that the default values are overwritten using the `update` method of a dictionary, which I would say is not the right tool for this purpose as it doesn't check the validity of the given keys.

---
Issue #711: Compatibility between sampling frequency and maximum frequency when using Multibanded likelihood
https://git.ligo.org/lscsoft/bilby/-/issues/711
Jacob Golomb, updated 2024-02-14.

When using a multibanding likelihood, if the sampling frequency is not the next power of two from 2 * maximum-frequency, bilby raises an error. For example, when setting a sampling rate of 4096 Hz and a maximum frequency of 800 Hz, I get `IndexError: boolean index did not match indexed array along dimension 0; dimension is 524289 but corresponding boolean dimension is 1048577`. This seems to be fixed by setting the sampling frequency to 2048 Hz. Is this intended?

---
Issue #710: Dynesty sampler unexpectedly closing all plots
https://git.ligo.org/lscsoft/bilby/-/issues/710
GitLab Support Bot, updated 2023-11-09.

I've noticed when I run bilby with either the dynesty or dynamic dynesty samplers, any plots I had open before calling run_sampler are closed. I think this may be caused by a few calls to matplotlib closing all figures (`plt.close('all')`) in the `plot_current_state()` function of the dynesty sampler. Would it be possible to change these so that they only close the figures created by running bilby, and not any other plots opened by other parts of a script?
Thanks!

---
Issue #702: Cannot save editable installs with hdf5 without conda
https://git.ligo.org/lscsoft/bilby/-/issues/702
Colm Talbot (colm.talbot@ligo.org), updated 2023-10-05.

When there is no conda available, [we take version information from `pip`](https://git.ligo.org/lscsoft/bilby/-/blob/23159d4553e00e560c08592b01a42faf4bccd731/bilby/core/utils/log.py#L116-L130).
If there are editable installed packages, this version information includes an `editable_package_location` flag for some packages and NaN for all others.
This is then not writeable as the first element is usually `np.nan` and so falls into the wrong branch of [this logic](https://git.ligo.org/lscsoft/bilby/-/blob/23159d4553e00e560c08592b01a42faf4bccd731/bilby/core/utils/io.py#L281-L295).
When this happens Bilby dumps the result to a pickle file.
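One possible mitigation, shown here as an illustrative sketch rather than bilby's implementation, is to coerce the mixed-type version list to plain strings before handing it to HDF5:

```python
import numpy as np

def sanitize_versions(entries):
    # Replace float NaN placeholders with empty strings so the whole list
    # has a single, HDF5-storable string dtype.
    return ["" if isinstance(entry, float) and np.isnan(entry) else str(entry)
            for entry in entries]
```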
Here is an example of the message that shows.
```python
19:43 bilby ERROR :
Saving the data has failed with the following message:
No conversion path for dtype: dtype('<U32')
Data has been dumped to result/label_result.pkl.
```

---
Issue #695: Possible bug in delta_lambda prior key in binary_neutron_star_example.py
https://git.ligo.org/lscsoft/bilby/-/issues/695
Kyle Wong, updated 2023-11-15.

@philippe.landry and I have been trying to run the [binary neutron star example](https://git.ligo.org/lscsoft/bilby/-/blob/master/examples/gw_examples/injection_examples/binary_neutron_star_example.py) and suspect there is a typo with the "delta_lambda" prior key.
When running the example as is, we are finding recovered lambda_1 and lambda_2 values which are not consistent with the injected values. This inconsistency vanishes if the two instances of "delta_lambda" in the prior keys are changed to "delta_lambda_tilde" in line 105.
In the plots below, the first two are the example run without changes, and the second two are the same with the aforementioned change in the prior key.
![L1L2_notilde](/uploads/a8ae496ea8de76b7c3705fa2e24d807c/L1L2_notilde.png)
![LtdLt_notilde](/uploads/7f63f931b06f697425d5c8b6a151d30b/LtdLt_notilde.png)
![L1L2_tilde](/uploads/49a8a10d6723e12d2a9858eb276a942d/L1L2_tilde.png)
![LtdLt_tilde](/uploads/f6b8ec0286bc35611b4558ea53f62e75/LtdLt_tilde.png)
In both cases, the "lambda_tilde" and "delta_lambda_tilde" recovery is as expected, but when using "delta_lambda" as the prior key, the mapping from the sampled parameters to "lambda_1" and "lambda_2" seems to be incorrect. We notice that "delta_lambda" does not appear in the prior dictionaries in priors.py or conversion.py, whereas "delta_lambda_tilde" does. We are unsure whether this is by design, but we are proposing to edit the example to use the "delta_lambda_tilde" prior key instead of "delta_lambda", and wanted to bring the issue of conversion.py compatibility with the "delta_lambda" key to your attention.
If desired, we can prepare a merge request on [our fork of bilby](https://git.ligo.org/kyle.wong/bilby) to fix binary_neutron_star_example.py.

---
Issue #693: Saving result file after sampling fails if lal dict is passed
https://git.ligo.org/lscsoft/bilby/-/issues/693
Aditya Vijaykumar (aditya.vijaykumar@ligo.org), updated 2023-12-20.

@ish.gupta recently found out that adding a lal_waveform_dictionary to the waveform_arguments works fine while sampling, but fails to save to file at the end of sampling, with the following error traceback:
```
File "/Users/aytida/work/rough/bilby/bilby/core/result.py", line 795, in save_to_file
json.dump(dictionary, file, indent=2, cls=BilbyJsonEncoder)
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 432, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 439, in _iterencode
o = _default(o)
^^^^^^^^^^^
File "/Users/aytida/work/rough/bilby/bilby/core/utils/io.py", line 87, in default
return json.JSONEncoder.default(self, obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Dict is not JSON serializable
<more stuff>
TypeError: cannot pickle 'lal.Dict' object
```
I [created a patch](https://git.ligo.org/aditya.vijaykumar/bilby/-/commit/c6506d2891857666a793429855f7e25c208ca538) that resolves this issue by ignoring the `lal_waveform_dictionary` altogether while saving the file, but I guess that isn't the nicest way to fix this. Happy to start an MR with any suggestions!
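A more general alternative, sketched here (this is not the patch above), would be to make the JSON encoder fall back to a string representation for objects it cannot serialize, rather than raising:

```python
import json

class SafeEncoder(json.JSONEncoder):
    # Fall back to repr() for objects that are not JSON serializable
    # (for example a SWIG-wrapped lal.Dict), instead of aborting the save.
    def default(self, obj):
        try:
            return json.JSONEncoder.default(self, obj)
        except TypeError:
            return repr(obj)

# A set is not JSON serializable, so it exercises the fallback:
encoded = json.dumps({"n_live": 1000, "extra": {1, 2}}, cls=SafeEncoder)
```

The trade-off is that the fallback keeps the save from failing but the `repr` string is not round-trippable back into the original object.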