bilby issues — https://git.ligo.org/lscsoft/bilby/-/issues (feed updated 2024-01-29)

---

Issue #722: ptemcee parallelisation issue
Colm Talbot (colm.talbot@ligo.org) · updated 2024-01-29
https://git.ligo.org/lscsoft/bilby/-/issues/722

A downstream user reported an issue with ptemcee on MacOS with parallelization.
I found that it works with the multiprocessing start method set to `fork`, which is a temporary workaround.
As a fix, we need to make sure the [`_sampling_convenience_dump`](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/sampler/base_sampler.py#L47) works with the [`LikePriorEvaluator`](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/sampler/ptemcee.py#L1406).
An initial solution that seemed to work was to update the `LikePriorEvaluator` as
```python
def __init__(self):
from .base_sampler import _sampling_convenience_dump
self.periodic_set = False
self.dump = _sampling_convenience_dump
```
and then using `self.dump` rather than `_sampling_convenience_dump` throughout.
This might cause issues if `emcee/kombine/zeus` pass the `LikePriorEvaluator` through the multiprocessing pool.
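In the meantime, the workaround can be applied from user code by forcing the start method before the pool is created; a sketch (note that `fork` is unavailable on Windows, and Python has defaulted to `spawn` on MacOS since 3.8):

```python
import multiprocessing

# "spawn" re-imports modules in each worker, so module-level state such as
# _sampling_convenience_dump is lost; "fork" lets workers inherit it instead.
multiprocessing.set_start_method("fork", force=True)

print(multiprocessing.get_start_method())  # fork
```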
See https://github.com/bandframework/Taweret/issues/53

---

Issue #715: BUG: Waveform kwargs are silently ignored
Rodrigo Tenorio · updated 2023-11-13
https://git.ligo.org/lscsoft/bilby/-/issues/715

Slightly related to #681.
`lal_binary_black_hole` (and similar functions in bilby/gw/source.py) *silently ignore* waveform arguments
if they are written using the wrong convention.
For example, if I were to use
```python
waveform_arguments={"f_min": 10, "f_ref": 10}
```
the `lal_binary_black_hole` function would ignore it, as the actual kwargs are `minimum_frequency`
and `reference_frequency`. As a result, the inputs would become whatever default values are coded
inside the function (in this case 20 and 50).
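The behaviour reduces to plain `dict` semantics; a minimal sketch using the names from the example above:

```python
# Defaults as coded inside the source function
waveform_kwargs = {"minimum_frequency": 20.0, "reference_frequency": 50.0}

# Mis-named user arguments are silently added as extra keys, not rejected
waveform_kwargs.update({"f_min": 10, "f_ref": 10})

# The function only reads the conventional names, so the defaults win
assert waveform_kwargs["minimum_frequency"] == 20.0

# dict.update performs no key validation; an explicit guard could catch this
unknown = set(waveform_kwargs) - {"minimum_frequency", "reference_frequency"}
assert unknown == {"f_min", "f_ref"}  # candidates for a warning or ValueError
```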
The reason is that the default values are overwritten using the `update` method on a dictionary,
which I would say is not the right tool for this purpose as it doesn't check the validity of the given keys.

---

Issue #710: Dynesty sampler unexpectedly closing all plots
GitLab Support Bot · updated 2023-11-09
https://git.ligo.org/lscsoft/bilby/-/issues/710

I've noticed when I run bilby with either the dynesty or dynamic dynesty samplers, any plots I had open before calling `run_sampler` are closed. I think this may be caused by a few calls to matplotlib closing all figures (`plt.close('all')`) in the `plot_current_state()` function of the dynesty sampler. Would it be possible to change these so that they only close the figures created by running bilby, and not any other plots opened by other parts of a script?
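One possible change is for `plot_current_state()` to keep a handle on the figure it creates and close only that; a sketch using the non-interactive Agg backend:

```python
import io

import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

user_fig = plt.figure()  # stand-in for a plot the user already has open

# checkpoint-style plot: create, save, and close only this one figure
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig(io.BytesIO(), format="png")
plt.close(fig)  # instead of plt.close('all')

assert plt.fignum_exists(user_fig.number)  # the user's figure survives
assert not plt.fignum_exists(fig.number)   # only the sampler's figure is gone
```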
Thanks!

---

Issue #693: Saving result file after sampling fails if lal dict is passed
Aditya Vijaykumar (aditya.vijaykumar@ligo.org) · updated 2023-12-20
https://git.ligo.org/lscsoft/bilby/-/issues/693

@ish.gupta recently found out that adding a `lal_waveform_dictionary` to the `waveform_arguments` works fine while sampling, but fails to save to file at the end of sampling with the following error traceback.
```
File "/Users/aytida/work/rough/bilby/bilby/core/result.py", line 795, in save_to_file
json.dump(dictionary, file, indent=2, cls=BilbyJsonEncoder)
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 432, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 439, in _iterencode
o = _default(o)
^^^^^^^^^^^
File "/Users/aytida/work/rough/bilby/bilby/core/utils/io.py", line 87, in default
return json.JSONEncoder.default(self, obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/aytida/miniconda3/envs/bilby2/lib/python3.11/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Dict is not JSON serializable
<more stuff>
TypeError: cannot pickle 'lal.Dict' object
```
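For reference, the usual escape hatch for this class of failure is a `default` hook that degrades unserialisable objects instead of raising; a sketch with a stand-in class (this is not bilby's actual `BilbyJsonEncoder`):

```python
import json

class Unserialisable:
    """Stand-in for an object, like lal.Dict, that json cannot encode."""

class LenientEncoder(json.JSONEncoder):
    def default(self, obj):
        try:
            return super().default(obj)
        except TypeError:
            # degrade gracefully: record the type name instead of failing
            return f"<unserialisable: {type(obj).__name__}>"

payload = {"waveform_arguments": {"lal_waveform_dictionary": Unserialisable()}}
print(json.dumps(payload, cls=LenientEncoder))
```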
I [created a patch](https://git.ligo.org/aditya.vijaykumar/bilby/-/commit/c6506d2891857666a793429855f7e25c208ca538) that resolves this issue by ignoring the `lal_waveform_dictionary` altogether while saving the file, but I guess that isn't the nicest way to fix this. Happy to start an MR with any suggestions!

---

Issue #683: RelativeBinning optimisation doesn't account for Constraint priors
Carl-Johan Haster · updated 2023-11-13
https://git.ligo.org/lscsoft/bilby/-/issues/683

When using the `RelativeBinningGravitationalWaveTransient` I've specified the fiducial waveform in terms of `{"chirp_mass": 3.163, "mass_ratio": 0.383, etc}`. This is an NSBH injection that I want to analyse using the `SEOBNRv4_ROM_NRTidalv2_NSBH` waveform.
When the likelihood is initialised, it by default does the [update_fiducial_parameters](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/gw/likelihood/relative.py#L155) step, and with the setup I used (optimising over the prior, since I hadn't specified any dedicated `parameter_bounds`) it "only" looks to optimise over `chirp_mass` and `mass_ratio`.
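For context, `mass_2` is fully determined by the two optimised parameters via the standard conversion, so in principle the constraint could be evaluated inside the objective; a sketch:

```python
def component_masses(chirp_mass, mass_ratio):
    """Standard conversion, with mass_ratio = mass_2 / mass_1 <= 1."""
    total_mass = chirp_mass * (1 + mass_ratio) ** 1.2 / mass_ratio ** 0.6
    mass_1 = total_mass / (1 + mass_ratio)
    return mass_1, mass_ratio * mass_1

mass_1, mass_2 = component_masses(chirp_mass=3.163, mass_ratio=0.383)
assert 1.0 < mass_2 < 3.0  # the fiducial point itself is fine (~2.3 Msun);
                           # nearby proposals in the optimisation may not be
```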
The problem is that the `SEOBNRv4_ROM_NRTidalv2_NSBH` model is only supported for `1 < mass_2 < 3`. I account for this in the actual Bilby configuration by having `mass_2 = Constraint(name='mass_2', minimum=1.0, maximum=3.0)` in my prior, but since `mass_2` isn't in the `self.parameters_to_be_updated` list (see [here](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/gw/likelihood/relative.py#L155)) that Constraint will not be accounted for, and the optimisation will fail since it'll try to evaluate `SEOBNRv4_ROM_NRTidalv2_NSBH` at a point that's nominally inside the `chirp_mass` and `mass_ratio` priors, but has a too-high NS mass.

---

Issue #680: Interaction between relative binning initial search and calibration parameter sampling
Aaron Zimmerman · updated 2023-11-13
https://git.ligo.org/lscsoft/bilby/-/issues/680

When I run `bilby_pipe` with dynesty with both `calibration-model = CubicSpline` (using a standard calibration envelope) and `likelihood-type=bilby.gw.likelihood.relative.RelativeBinningGravitationalWaveTransient`, the run fails when attempting to use the optimizer to find a fiducial point. @colm.talbot points out this is because the optimizer does not support domains with infinite prior support. I will check if specifying a fiducial point can allow sampling to begin and update this issue.
My environment info is:
```
bilby_pipe version: 1.0.8
Running bilby version: 2.0.1
```
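The scipy failure mode can be reproduced in isolation: `differential_evolution` rejects any non-finite bound, which is what an unbounded prior produces. A minimal sketch:

```python
import numpy as np
from scipy.optimize import differential_evolution

# second parameter stands in for one with infinite prior support
bounds = [(5.0, 50.0), (-np.inf, np.inf)]

try:
    differential_evolution(lambda x: float(np.sum(x ** 2)), bounds, maxiter=1)
except ValueError as exc:
    print(exc)  # scipy insists on real-valued (min, max) pairs
```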
The traceback of the failed run is in the .err file in `log_data_generation`, and is
```
Traceback (most recent call last):
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/bin/bilby_pipe_generation", line 10, in <module>
sys.exit(main())
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/lib/python3.9/site-packages/bilby_pipe/data_generation.py", line 1357, in main
data.save_data_dump()
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/lib/python3.9/site-packages/bilby_pipe/data_generation.py", line 1193, in save_data_dump
likelihood = self.likelihood
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/lib/python3.9/site-packages/bilby_pipe/input.py", line 1197, in likelihood
likelihood = Likelihood(**likelihood_kwargs)
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/lib/python3.9/site-packages/bilby/gw/likelihood/relative.py", line 166, in __init__
self.fiducial_parameters = self.find_maximum_likelihood_parameters(
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/lib/python3.9/site-packages/bilby/gw/likelihood/relative.py", line 270, in find_maximum_likelihood_parameters
output = differential_evolution(
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/lib/python3.9/site-packages/scipy/optimize/_differentialevolution.py", line 377, in differential_evolution
with DifferentialEvolutionSolver(func, bounds, args=args,
File "/home/sylvia.biscoveanu/.conda/envs/bilby_o4review_230314/lib/python3.9/site-packages/scipy/optimize/_differentialevolution.py", line 689, in __init__
raise ValueError('bounds should be a sequence containing '
ValueError: bounds should be a sequence containing real valued (min, max) pairs for each value in x
```

---

Issue #608: Systematic errors of sidereal time in time marginalization
Soichiro Morisaki · updated 2023-05-23
https://git.ligo.org/lscsoft/bilby/-/issues/608

For a PE run with time marginalization, `geocent_time` is fixed to the start time of data (see [this line](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/gw/likelihood.py#L162)). This means the sidereal time is calculated at that time rather than the coalescence time of the signal. It leads to systematic errors of `~2pi * duration / (24 * 60 * 60)` in right ascension.
This could be an issue for BNS, whose signal duration is long. Below is the p-p plot for BNS signals observed by `[H1, L1, V1]`, with a maximum luminosity distance of 100 Mpc and a median SNR of ~25.
![p-p-plot](/uploads/e7652d7f2d87a9670a9b8fca6018c4ec/p-p-plot.jpeg)
The bad p-p plot for right ascension is fixed after the right ascension is incremented by the expected systematic error as follows:
```python
correction = 2. * np.pi * duration / (24. * 60. * 60.)
samples["ra"] = np.fmod(samples["ra"] + correction, 2. * np.pi)
```
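To get a feel for the size of the effect, for a 128 s analysis segment (an example duration, not one quoted above) the shift is about half a degree:

```python
import numpy as np

duration = 128.0  # seconds of analysed data (example value)
correction = 2.0 * np.pi * duration / (24 * 60 * 60)

print(correction)              # ~0.0093 rad
print(np.degrees(correction))  # ~0.53 degrees in right ascension
```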
The plot attached below compares the p-p plots of right ascension before and after this correction.
![ra_corrected](/uploads/a63b2b52a4154b0b1ab95fccb170513f/ra_corrected.jpeg)
I should also note that the p-p runs were done with the ROQ likelihood in my local environment, into which I have implemented time marginalization. Since its implementation is almost the same as that for `GravitationalWaveTransient`, I think the same thing happens for `GravitationalWaveTransient`.

---

Issue #548: Issue with injection parameters in run_sampler
Gregory Ashton (gregory.ashton@ligo.org) · updated 2020-12-10
https://git.ligo.org/lscsoft/bilby/-/issues/548

Email from Stephen Feeney:
> It's nothing enormous, but it can result in the results.injection_parameter storing the wrong inclination angle, iota. This happens for me because I'm using a reference frequency that is different from bilby's default. In bilby/core/sampler/__init__.py, in run_sampler, when appending the injection_parameters to the result object, conversion_function is called without passing in the likelihood object (which in turn contains the user-specified minimum and reference frequencies). This means result.injection_parameters uses the default reference_frequency when converting theta_jn etc. into iota, and as a result my results objects' iotas don't match the ones actually used when injecting signals. Niche :) But I thought you'd want to know :)
Pointing to the code, [this line](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/sampler/__init__.py#L209) needs to also pass in the [likelihood](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/gw/conversion.py#L758).
However, this creates a bit of a problem: if users pass in their own conversion functions which don't accept the likelihood it will fall over.
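One backwards-compatible convention would be to inspect the user's function and only pass the likelihood when it is declared; a sketch (hypothetical helper, not an existing bilby API):

```python
import inspect

def call_conversion_function(conversion_function, sample, likelihood=None):
    """Pass `likelihood` only if the user's function accepts it."""
    parameters = inspect.signature(conversion_function).parameters
    if "likelihood" in parameters:
        return conversion_function(sample, likelihood=likelihood)
    return conversion_function(sample)

def old_style(sample):
    return sample

def new_style(sample, likelihood=None):
    return sample, likelihood

assert call_conversion_function(old_style, {"a": 1}) == {"a": 1}
assert call_conversion_function(new_style, {"a": 1}, likelihood="L") == ({"a": 1}, "L")
```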
Fundamentally, I think we are missing a standard interface for the "conversion" function. I think the resolution here is to set a convention for the conversion function. What do folks think?

---

Issue #537: Bugs in reweighing samples
Ka-Lok Lo · updated 2022-09-09
https://git.ligo.org/lscsoft/bilby/-/issues/537

Hi,
While trying to reweigh samples using the routine `bilby.core.result.reweight`, I encountered two bugs (in `bilby-v1.0.2`)
1. In `bilby.core.result.get_weights_for_reweighting`, it assumed that the label for each row in `result.posterior` ranges over [0, len(result.posterior)-1], which is _not necessarily the case_, especially if rejection sampling was used prior to calling this `get_weights_for_reweighting` function. This will cause an index-out-of-bound error. The simplest fix would be to change [L123 of bilby/core/result.py](https://git.ligo.org/lscsoft/bilby/-/blob/1.0.2/bilby/core/result.py#L123) from
```
for ii, sample in result.posterior.iterrows():
```
to
```
for ii, (_, sample) in enumerate(result.posterior.iterrows()):
```
and also changing [L174 of bilby/core/result.py](https://git.ligo.org/lscsoft/bilby/-/blob/1.0.2/bilby/core/result.py#L174)
from
```
return posterior[keep]
```
to
```
return posterior[keep].reset_index(drop=True)
```
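The indexing pitfall behind fix 1 is easy to see with a plain DataFrame; a sketch:

```python
import pandas as pd

posterior = pd.DataFrame({"log_likelihood": [0.1, 0.2, 0.3, 0.4]})
# rejection sampling keeps a subset but preserves the original labels 1..3
posterior = posterior[posterior["log_likelihood"] > 0.15]

labels = [ii for ii, _ in posterior.iterrows()]
positions = [ii for ii, _ in enumerate(posterior.iterrows())]

assert labels == [1, 2, 3]     # iterrows yields labels, not positions
assert positions == [0, 1, 2]  # the enumerate form gives safe positional indices
```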
2. It seems that the action of doing `convert_to_flat_in_component_mass_prior` does not commute with doing `bilby.core.result.get_weights_for_reweighting`. Doing reweighing before converting to flat in component mass prior yields no error but doing in the reverse order will yield an error saying that the key 'mass_1' is missing when the column actually exists. This is not desirable since `convert_to_flat_in_component_mass_prior` can be configured to run right after sampling, thus prohibiting reweighing of the samples in the future. This can be solved by properly assigning the component masses as one of the searched parameters and chirp mass and mass ratio being constrained in the result.search_parameter_keys and result.constraint_parameter_keys respectively. For example [starting from L67 of bilby/gw/prior.py](https://git.ligo.org/lscsoft/bilby/-/blob/1.0.2/bilby/gw/prior.py#L67) can be changed from
```
for key in ['chirp_mass', 'mass_ratio']:
priors[key] = Constraint(priors[key].minimum, priors[key].maximum, key, latex_label=priors[key].latex_label)
for key in ['mass_1', 'mass_2']:
priors[key] = Uniform(priors[key].minimum, priors[key].maximum, key, latex_label=priors[key].latex_label,
unit="$M_{\odot}$")
```
to
```
for key in ['chirp_mass', 'mass_ratio']:
priors[key] = Constraint(priors[key].minimum, priors[key].maximum, key, latex_label=priors[key].latex_label)
result.constraint_parameter_keys.append(key)
result.search_parameter_keys.remove(key)
for key in ['mass_1', 'mass_2']:
priors[key] = Uniform(priors[key].minimum, priors[key].maximum, key, latex_label=priors[key].latex_label,
unit="$M_{\odot}$")
result.search_parameter_keys.append(key)
result.constraint_parameter_keys.remove(key)
```
3.(?) Probably the function should be called `reweigh` instead of `reweight` but whatever
If people prefer, I can submit a merge request fixing issues 1 and 2, or it would be faster if the developers simply effect these simple changes.

---

Issue #524: Issue with caching in bilby.hyper.model
Shanika Galaudage · updated 2020-09-28
https://git.ligo.org/lscsoft/bilby/-/issues/524

The current method does not check if the input data is the same
e.g.
```python
prob1 = hyper_model.prob(data1)
prob2 = hyper_model.prob(data2)
```
will return the same probabilities.
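A minimal form of the missing check, sketched with a stand-in class rather than the actual `bilby.hyper.model` code:

```python
class CachedModel:
    """Re-evaluate whenever the data object changes instead of caching blindly."""

    def __init__(self, prob_fn):
        self.prob_fn = prob_fn
        self._cached_key = None
        self._cached_value = None

    def prob(self, data):
        key = id(data)  # cheap identity check; a content hash would be stricter
        if key != self._cached_key:
            self._cached_value = self.prob_fn(data)
            self._cached_key = key
        return self._cached_value

model = CachedModel(prob_fn=sum)
data1, data2 = [1, 2, 3], [4, 5, 6]
assert model.prob(data1) == 6
assert model.prob(data2) == 15  # a cache without the data check would return 6
```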
A data check is required [here](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/hyper/model.py#L28).

---

Issue #504: Joint distributions difficult to subclass.
Colm Talbot (colm.talbot@ligo.org) · updated 2021-08-27
https://git.ligo.org/lscsoft/bilby/-/issues/504

I'm trying to subclass the `GaussianMixture` and `GaussianMixtureDist` classes and ran into an issue with the way the underlying distribution is checked. I think that [this line](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/prior/joint.py#L669) should be changed to
```python
if not isinstance(dist, BaseJointPriorDist):
```
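since `isinstance` accepts subclasses while an enumerated or exact-type check does not; illustrated with stand-in classes:

```python
class BaseJointPriorDist:
    pass

class MultivariateGaussianDist(BaseJointPriorDist):
    pass

class MyCustomDist(MultivariateGaussianDist):  # hypothetical user subclass
    pass

dist = MyCustomDist()
assert isinstance(dist, BaseJointPriorDist)        # proposed check: passes
assert type(dist) is not MultivariateGaussianDist  # exact-type check would reject it
```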
Unless I'm missing something @matthew-pitkin @bruce.edelman, I think you're most familiar with this bit of the code?
EDIT:
I'm now having an issue reading the prior from a standard `.prior` file. There are specifically [enumerated names](https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/core/prior/dict.py#L217). We should also look for subclasses.

---

Issue #484: sys.exit in jupyter
Colm Talbot (colm.talbot@ligo.org) · updated 2022-09-29
https://git.ligo.org/lscsoft/bilby/-/issues/484

I'm running into issues stopping jobs in Jupyter. When I interrupt `dynesty` it triggers a call to `sys.exit`, which kills the kernel.
Maybe we should avoid calling `sys.exit` and instead raise a custom error that can be caught and then the exit can be called by downstream users.
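A sketch of that pattern, with a hypothetical exception name:

```python
class SamplingInterruptedError(RuntimeError):
    """Hypothetical error raised on interrupt instead of calling sys.exit."""

def checkpoint_and_stop():
    # ... write the resume/checkpoint file here ...
    raise SamplingInterruptedError("sampling interrupted; checkpoint written")

try:
    checkpoint_and_stop()
except SamplingInterruptedError as exc:
    print(exc)  # in a notebook: report and keep the kernel alive
    # a CLI entry point could call sys.exit(1) here instead
```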