Likelihood for chi_eff vs. q correlations
The likelihood used for the chi_eff vs. q correlation measurements is defined in spin_v_q_likelihood.py.
This analysis takes as input several dictionaries containing preprocessed or precomputed data.
- A dictionary containing found injections passing our FAR/SNR threshold, created by prep_injections.py; see #2 (closed). This dictionary contains the following info:
    - injectionDict['m1']: Source-frame primary masses of found injections
    - injectionDict['m2']: Source-frame secondary masses of found injections
    - injectionDict['s1z']: Primary spin z-components for found injections
    - injectionDict['s2z']: Secondary spin z-components for found injections
    - injectionDict['weights']: Precomputed factors of $(p_\mathrm{inj}(m_1,m_2,\chi_\mathrm{eff},z))^{-1}$ for each found injection, used to enable estimation of the detection efficiency for any proposed set of hyperparameters (see the sketch after this list).
- A dictionary of posterior samples for each event, created by the notebook preprocess_samples_conditionalEval.ipynb and under review in Issue #1 (closed). This dictionary stores samples and reweighting info in the following form:
    - sampleDict[event]['m1']: Array of source-frame primary mass samples
    - sampleDict[event]['m2']: Array of source-frame secondary mass samples
    - sampleDict[event]['Xeff']: Array of chi_effective samples
    - sampleDict[event]['z']: Array of redshift samples
    - sampleDict['Xeff_priors']: Marginal prior on chi_effective, given a prior that is uniform in component spin magnitude and isotropic in component spin orientation
    - sampleDict[event]['weights']: Precomputed factors to reweight masses and redshifts from the default PE prior $p(m_1,m_2,z) \propto (1+z)^2 D_L(z)^2$ to a prior $p(m_1,m_2,z) \propto \frac{1}{1+z} \frac{dV_c}{dz} (1+z)^2$ (these factors enter the per-event averages sketched after the likelihood equation below).
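For orientation, below is a minimal sketch (not the implementation in spin_v_q_likelihood.py) of how the injection weights above can be used to estimate the detection efficiency $\xi(\Lambda)$ by importance sampling. The callable pop_prob, the injectionDict['z'] field, and the total injection count nTrials are illustrative assumptions, not names taken from the repository.

```python
import numpy as np

def detection_efficiency(injectionDict, pop_prob, nTrials):
    """Sketch of a Monte Carlo estimate of xi(Lambda), the detection efficiency.

    pop_prob : hypothetical callable returning p(m1, m2, chi_eff, z | Lambda)
               for the proposed hyperparameters Lambda.
    nTrials  : assumed total number of injections generated (found or not).
    """
    m1 = np.asarray(injectionDict['m1'])
    m2 = np.asarray(injectionDict['m2'])
    s1z = np.asarray(injectionDict['s1z'])
    s2z = np.asarray(injectionDict['s2z'])
    z = np.asarray(injectionDict['z'])                # redshift field assumed to exist
    inv_p_inj = np.asarray(injectionDict['weights'])  # precomputed 1 / p_inj(m1, m2, chi_eff, z)

    # Effective spin of each found injection from the aligned spin components
    chi_eff = (m1 * s1z + m2 * s2z) / (m1 + m2)

    # Importance-sampling estimate of the detected fraction of the population;
    # injections that were generated but not found contribute zero to the sum.
    return np.sum(pop_prob(m1, m2, chi_eff, z) * inv_p_inj) / nTrials
```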
The models adopted for each parameter are the following:
- A power law + peak model for $p(m_1)$ (see the sketch after this list), with
    - power law index lmbda
    - upper power law cutoff mMax
    - peak location m0
    - peak width sigM
    - peak mixture fraction fPeak
    - lower power law cutoff fixed to mMin=5
- A power law for $p(m_2|m_1)$, with
    - power law index bq
- A redshift distribution growing as $p(z) \propto \frac{dV_c}{dz} (1+z)^{\kappa-1}$, with
    - evolution parameter kappa
- A Gaussian for $p(\chi_\mathrm{eff})$ with mass-ratio-dependent mean $\mu_\chi(q) = \mu_{\chi,0} + \alpha(q-0.5)$ and standard deviation $\log_{10}\sigma_\chi(q) = \log_{10}\sigma_{\chi,0} + \beta(q-0.5)$ (see the sketch after this list), with
    - mean "intercept" mu0
    - standard deviation "intercept" sigma0
    - slope of the mean with q, alpha
    - slope of the log-standard deviation with q, beta
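As a concrete illustration of two of these ingredients, here is a minimal sketch of a power-law + peak density for $p(m_1)$ and of the mass-ratio-dependent Gaussian for $p(\chi_\mathrm{eff})$, using the hyperparameter names listed above. The sign convention of lmbda, the absence of edge smoothing, and the use of an untruncated normal for $\chi_\mathrm{eff}$ are assumptions for illustration, not a description of the actual code.

```python
import numpy as np

def p_m1(m1, lmbda, mMax, m0, sigM, fPeak, mMin=5.0):
    """Power law + Gaussian peak for the primary mass (illustrative sketch only)."""
    m1 = np.asarray(m1, dtype=float)

    # Truncated, normalized power law on [mMin, mMax] (assumes lmbda != -1)
    pl_norm = (mMax**(lmbda + 1.0) - mMin**(lmbda + 1.0)) / (lmbda + 1.0)
    power_law = np.where((m1 >= mMin) & (m1 <= mMax), m1**lmbda / pl_norm, 0.0)

    # Gaussian peak at m0 with width sigM
    peak = np.exp(-0.5 * ((m1 - m0) / sigM)**2) / (np.sqrt(2.0 * np.pi) * sigM)

    # Mixture with fraction fPeak of events in the peak
    return (1.0 - fPeak) * power_law + fPeak * peak

def p_chi_eff(chi_eff, q, mu0, sigma0, alpha, beta):
    """Gaussian p(chi_eff | q) whose mean and log10 standard deviation vary
    linearly with mass ratio about q = 0.5 (untruncated normal for simplicity)."""
    mu_q = mu0 + alpha * (q - 0.5)
    sigma_q = 10.0**(np.log10(sigma0) + beta * (q - 0.5))
    return np.exp(-0.5 * ((chi_eff - mu_q) / sigma_q)**2) / (np.sqrt(2.0 * np.pi) * sigma_q)
```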
I implement the population likelihood marginalized over the total rate:

$$p(\{d\}|\Lambda) \propto \xi(\Lambda)^{-N} \prod_{\mathrm{Events}\,i=1}^N \Big\langle \frac{p(\theta_{ij}|\Lambda)}{p_\mathrm{prior}(\theta_{ij})}\Big\rangle_{\mathrm{Posterior\,samples}\,j},$$

where $\xi(\Lambda)$ is the detection efficiency estimated from the found injections described above.
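Schematically, the bracketed term for a single event is a weighted Monte Carlo average over that event's posterior samples. The sketch below assumes hypothetical pop_prob and prior_prob callables; in the actual code the role of prior_prob is played by the precomputed 'weights' and 'Xeff_priors' entries of sampleDict rather than an explicit density evaluation.

```python
import numpy as np

def event_average(event, pop_prob, prior_prob):
    """Sketch of one bracketed term: < p(theta | Lambda) / p_prior(theta) >,
    averaged over an event's posterior samples.

    event      : one entry of sampleDict, with 'm1', 'm2', 'Xeff', 'z' arrays.
    pop_prob   : hypothetical callable for the population density p(theta | Lambda).
    prior_prob : hypothetical callable for the sampling prior density p_prior(theta).
    """
    m1, m2 = np.asarray(event['m1']), np.asarray(event['m2'])
    xeff, z = np.asarray(event['Xeff']), np.asarray(event['z'])

    # Ratio of the proposed population density to the sampling prior, per sample
    ratios = pop_prob(m1, m2, xeff, z) / prior_prob(m1, m2, xeff, z)

    # The Monte Carlo average over posterior samples is the per-event likelihood factor
    return np.mean(ratios)
```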
The detection efficiency term is added to the log-likelihood in Line 127, while the sample average for each event is computed and added to the log-likelihood in Lines 164 and 167.
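Assembling these pieces, the log of the rate-marginalized likelihood above takes the form sketched below; xi and per_event_averages are placeholders for the detection efficiency and per-event averages computed at the cited lines, not the variable names used in spin_v_q_likelihood.py.

```python
import numpy as np

def log_likelihood(per_event_averages, xi):
    """Sketch of log p({d} | Lambda) up to an additive constant:
    -N * log(xi(Lambda)) plus the sum of the logged per-event averages.

    per_event_averages : array with one Monte Carlo average per event.
    xi                 : detection efficiency xi(Lambda) for the same hyperparameters.
    """
    per_event_averages = np.asarray(per_event_averages)
    N = per_event_averages.size
    return -N * np.log(xi) + np.sum(np.log(per_event_averages))
```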