|
The above plot shows the sorted eigenvalues of the covariance matrix. Several eigenvalues are close to zero, indicating that the matrix is ill-conditioned. Moreover, the eigenvalues oscillate from roughly index 400 onwards (where they fall below ~1e-11), which is a symptom of numerical inaccuracies. These near-zero eigenvalues destabilize the inverse, since the inverse scales with the reciprocal of the determinant. We therefore impose a threshold on the eigenvalues when computing the inverse via the `pinv` method: the SVD-based inverse is built from a reduced (low-rank) matrix that captures the dominant features of the covariance while excluding the numerically unstable directions. The parameter `rcond` of the `numpy.linalg.pinv` function controls this cut-off, such that singular values smaller than `rcond * largest_singular_value` are set to zero.
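As a minimal sketch of this thresholding, the snippet below builds a hypothetical ill-conditioned covariance (a low-rank matrix plus a tiny perturbation, standing in for the actual PE covariance) and inverts it with `numpy.linalg.pinv`, using `rcond` to discard the numerically unstable directions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the covariance matrix: a rank-50 part plus a
# tiny perturbation, so most eigenvalues are near zero (ill-conditioned).
n = 500
A = rng.standard_normal((n, 50))
cov = A @ A.T + 1e-14 * np.eye(n)

# Sorted eigenvalue spectrum: only the top ~50 eigenvalues are significant.
eigvals = np.linalg.eigvalsh(cov)[::-1]

# pinv zeroes every singular value below rcond * largest_singular_value
# before inverting, so the unstable directions never enter the inverse.
cov_inv = np.linalg.pinv(cov, rcond=1e-11)
```

A direct `np.linalg.inv(cov)` here would be dominated by noise from the near-zero eigenvalues, whereas the truncated pseudo-inverse still satisfies `cov @ cov_inv @ cov ≈ cov` on the retained subspace.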
|
|
|
|
|
|
|
|
# Choice of the rank of the covariance matrix
|
|
|
|
|
|
## GW190814
|
|
First, we use the GW190814 event to demonstrate how to find an appropriate cut-off for the low-rank approximation of the covariance matrix, and its effect on $`\beta`$. We choose the top 5 PE samples and plot $`\beta`$ as a function of the rank of the covariance matrix, together with the spectrum of the eigenvalues.
|
|
|
|
|
|
|
|
<img src="uploads/c93fa55e8c322397cdd710513d862054/beta_lambda_all.png" width="440" >
|
|
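A rank scan like the one plotted above can be sketched as follows. The helper `lowrank_pinv` and the toy data are illustrative (the exact definition of $`\beta`$ is not given in this section; here it is assumed to be a quadratic form in the inverse covariance, which is only a placeholder):

```python
import numpy as np

def lowrank_pinv(cov, rank):
    """Pseudo-inverse of `cov` keeping only its top `rank` eigen-directions."""
    U, s, _ = np.linalg.svd(cov, hermitian=True)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    # For a symmetric PSD matrix the left and right singular vectors agree.
    return (U * s_inv) @ U.T

# Toy stand-in data: the real analysis would use the PE covariance matrix
# and whatever data vector enters the definition of beta.
rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, 20))
cov = A @ A.T + 1e-12 * np.eye(n)
d = rng.standard_normal(n)

# Scan beta (sketched here as the quadratic form d^T C_k^+ d) versus rank k.
betas = [d @ lowrank_pinv(cov, k) @ d for k in range(1, n + 1)]
```

Plotting `betas` against the rank shows where the statistic stabilizes before the numerically unstable directions (the near-zero eigenvalues) start to dominate, which is the criterion used above to pick the cut-off.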
|