Update Additional Review (marginalization over joint population+EoS uncertainty) authored by Reed Essick
```math
w_i = \frac{\sum_p p(\theta_i|\Lambda_p)}{p(\theta_i|\mathrm{default\ PE})}
```
This implies that the total sum would be
```math
S_\mathrm{old} = \sum_i w_i \Theta_i = \sum_i \left(\frac{\sum_p p(\theta_i|\Lambda_p)}{p(\theta_i|\mathrm{default\ PE})}\right) \Theta_i = \sum_p \sum_i \frac{p(\theta_i|\Lambda_p)}{p(\theta_i|\mathrm{default\ PE})} \Theta_i
```
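The interchange of the $p$- and $i$-sums in the last equality can be checked numerically. A minimal sketch, using illustrative Gaussian stand-ins for the population models $\Lambda_p$ and the default-PE prior (all densities and the statistic $\Theta$ below are hypothetical, not the analysis's actual models):

```python
import numpy as np

rng = np.random.default_rng(42)

# samples theta_i drawn under a "default PE" prior (standard normal stand-in)
theta = rng.normal(0.0, 1.0, size=10_000)

def p_default(theta):
    # illustrative default-PE prior density: N(0, 1)
    return np.exp(-0.5 * theta**2) / np.sqrt(2.0 * np.pi)

def p_pop(theta, mu):
    # illustrative population model p(theta|Lambda_p): N(mu, 1)
    return np.exp(-0.5 * (theta - mu) ** 2) / np.sqrt(2.0 * np.pi)

mus = [-0.5, 0.0, 0.5]              # stand-ins for the population models Lambda_p
Theta = (theta > 0).astype(float)   # stand-in statistic Theta_i

# w_i = sum_p p(theta_i|Lambda_p) / p(theta_i|default PE)
w = sum(p_pop(theta, mu) for mu in mus) / p_default(theta)

# the total sum computed two ways: weights first, or with the p- and i-sums swapped
S_old = np.sum(w * Theta)
S_swapped = sum(np.sum(p_pop(theta, mu) / p_default(theta) * Theta) for mu in mus)

assert np.allclose(S_old, S_swapped)  # the interchange of sums is exact
```

The equality holds exactly for any finite set of samples and models; it is simple reordering of a finite double sum, not an approximation.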
It is straightforward to show that
```math
\sum_i \frac{p(\theta_i|\Lambda)}{p(\theta_i|\mathrm{default\ PE})} \Theta_i \approx \left(\frac{p(\mathrm{data}|\Lambda)}{p(\mathrm{data}|\mathrm{default\ PE})}\right) \int d\theta\, p(\theta|\mathrm{data},\Lambda) \Theta(\theta)
```
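This approximation can be sanity-checked in a conjugate-normal toy model where both sides are available in closed form. The prior choices, likelihood, and statistic $\Theta$ below are illustrative assumptions; the document's $\sum_i$ corresponds to $N$ times the per-sample average used here:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def npdf(x, mu, sd):
    # normal density N(x; mu, sd^2)
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# hypothetical toy model (all choices illustrative):
#   default PE prior: theta ~ N(0, 1);  alternative model Lambda: theta ~ N(m, 1)
#   likelihood: data d | theta ~ N(theta, s^2)
m, s, d = 0.8, 0.5, 0.3

# posterior under the default prior is N(mu0, tau2); draw samples theta_i from it
tau2 = 1.0 / (1.0 + 1.0 / s**2)
mu0 = tau2 * d / s**2
theta = rng.normal(mu0, math.sqrt(tau2), size=200_000)
Theta = (theta > 0).astype(float)  # stand-in statistic Theta(theta) = 1[theta > 0]

# left-hand side: prior-ratio-weighted average over default-PE posterior samples
lhs = np.mean(npdf(theta, m, 1.0) / npdf(theta, 0.0, 1.0) * Theta)

# right-hand side: evidence ratio times the posterior expectation of Theta under Lambda
evidence_ratio = npdf(d, m, math.sqrt(1 + s**2)) / npdf(d, 0.0, math.sqrt(1 + s**2))
mu_lam = tau2 * (d / s**2 + m)                     # posterior mean under Lambda
prob_pos = 0.5 * (1.0 + math.erf(mu_lam / math.sqrt(2.0 * tau2)))  # P(theta > 0)
rhs = evidence_ratio * prob_pos

assert abs(lhs - rhs) < 0.01 * rhs  # the two sides agree to Monte Carlo accuracy
```

The key step is that $p(\theta|\mathrm{data},\mathrm{default})\, p(\theta|\Lambda)/p(\theta|\mathrm{default}) = \mathcal{L}(\theta)\, p(\theta|\Lambda)/p(\mathrm{data}|\mathrm{default})$, which is the $\Lambda$-posterior rescaled by the evidence ratio.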
which implies the total sum would approximate
...