Commit 1cc5fc49, authored 6 years ago by Gregory Ashton

    Clean up of example and add explanation

Parent: 7bf26c23
Merge request: !86 "Add Occam factor"
Showing 2 changed files, with 46 additions and 9 deletions:

    examples/other_examples/occam_factor_example.py  (+43, −7)
    tupak/core/result.py                             (+3, −2)
examples/other_examples/occam_factor_example.py  +43 −7
#!/bin/python
"""
As part of the :code:`tupak.result.Result` object, we provide a method to
calculate the Occam factor (cf. Chapter 28, `Mackay "Information Theory,
Inference, and Learning Algorithms"
<http://www.inference.org.uk/itprnn/book.html>`). This is an approximate
estimate based on the posterior samples, and assumes the posteriors are well
approximated by a Gaussian.

The Occam factor penalizes models with larger numbers of parameters (or,
equivalently, a larger "prior volume"). This example won't try to explain the
meaning of this, or how it is calculated (those details are sufficiently well
done in Mackay's book linked above). Instead, it demonstrates how to calculate
the Occam factor in :code:`tupak` and shows an example of it working in
practice.

If you have a :code:`result` object, the Occam factor can be calculated simply
from :code:`result.occam_factor(priors)`, where :code:`priors` is the
dictionary of priors used during the model fitting. These priors should be
uniform priors only; other priors may cause unexpected behaviour.

In this example, we generate a data set which contains Gaussian noise added to
a quadratic function. We then fit polynomials of differing degree. The final
plot shows that the largest evidence favours the quadratic polynomial (as
expected) and that, as the degree of polynomial increases, the evidence falls
off in line with the increasing (negative) Occam factor.

Note: the code uses a coarse 100-point estimate for speed; results can be
improved by increasing this to, say, 500 or 1000.
"""
from __future__ import division
import tupak

...
@@ -70,24 +98,32 @@ def fit(n):
     result = tupak.run_sampler(
         likelihood=likelihood, priors=priors, npoints=100, outdir=outdir,
         label=label)
-    return result.log_evidence, result.log_evidence_err, np.log(
-        result.occam_factor(priors))
+    return (result.log_evidence, result.log_evidence_err,
+            np.log(result.occam_factor(priors)))
-fig, ax = plt.subplots()
+fig, ax1 = plt.subplots()
+ax2 = ax1.twinx()
 log_evidences = []
 log_evidences_err = []
 log_occam_factors = []
-ns = range(1, 10)
+ns = range(1, 11)
 for l in ns:
     e, e_err, o = fit(l)
     log_evidences.append(e)
     log_evidences_err.append(e_err)
     log_occam_factors.append(o)

-ax.errorbar(ns, log_evidences - np.max(log_evidences),
-            yerr=log_evidences_err, fmt='-o', color='C0')
-ax.plot(ns, log_occam_factors - np.max(log_occam_factors), '-o',
-        color='C1', alpha=0.5)
+ax1.errorbar(ns, log_evidences, yerr=log_evidences_err, fmt='-o', color='C0')
+ax1.set_ylabel("Unnormalized log evidence", color='C0')
+ax1.tick_params('y', colors='C0')
+ax2.plot(ns, log_occam_factors, '-o', color='C1', alpha=0.5)
+ax2.tick_params('y', colors='C1')
+ax2.set_ylabel('Occam factor', color='C1')
+ax1.set_xlabel('Degree of polynomial')
 fig.savefig('{}/{}_test'.format(outdir, label))
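Once a run like the one above has finished, the Occam factor can be used to
separate how much of the evidence comes from the quality of the fit and how
much is the parameter-volume penalty (MacKay's decomposition: evidence ≈
best-fit likelihood × Occam factor). A minimal sketch, assuming a `result`
and `priors` pair produced by a `tupak.run_sampler` call as in this example
(the printed labels are illustrative, not tupak output):

    import numpy as np

    # In log space the best-fit-likelihood term can be estimated by
    # subtracting the log Occam factor from the log evidence.
    log_occam = np.log(result.occam_factor(priors))
    print("log evidence:        {:.2f} +/- {:.2f}".format(
        result.log_evidence, result.log_evidence_err))
    print("log Occam factor:    {:.2f}".format(log_occam))
    print("log best-fit factor: {:.2f}".format(result.log_evidence - log_occam))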
tupak/core/result.py  +3 −2
...

@@ -210,8 +210,9 @@ class Result(dict):
     def occam_factor(self, priors):
         """ The Occam factor,

-        See Mackay "Information Theor, Inference and Learning Algorithms,
-        Cambridge University Press (2003) Eq. (28.10)
+        See Chapter 28, `Mackay "Information Theory, Inference, and Learning
+        Algorithms" <http://www.inference.org.uk/itprnn/book.html>`_ Cambridge
+        University Press (2003).
         """
         return self.posterior_volume / self.prior_volume(priors)
...
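For intuition about the two quantities this method divides: under the Gaussian
approximation described in the example's docstring, the posterior volume is
the volume of the Gaussian bubble occupied by the samples, and the prior
volume is the product of the uniform prior widths. Below is a minimal sketch
of that arithmetic (cf. MacKay Eq. (28.10), cited in the old docstring),
assuming an `N x k` array of posterior `samples` and prior objects with
`minimum`/`maximum` attributes; the helper names are illustrative, not the
tupak implementation itself:

    import numpy as np

    def gaussian_posterior_volume(samples):
        # (2*pi)^(k/2) * sqrt(det(C)): the volume of a Gaussian with
        # sample covariance C in k dimensions.
        k = samples.shape[1]
        C = np.atleast_2d(np.cov(samples, rowvar=False))
        return (2 * np.pi) ** (k / 2.0) * np.sqrt(np.linalg.det(C))

    def uniform_prior_volume(priors):
        # Product of the widths of the (assumed uniform) priors.
        volume = 1.0
        for key in priors:
            volume *= priors[key].maximum - priors[key].minimum
        return volume

    # The Occam factor is then the ratio, as in the diff above:
    # gaussian_posterior_volume(samples) / uniform_prior_volume(priors)

This is why the docstrings insist on uniform priors: for any other prior
shape, the prior "volume" is not simply the product of the bounds' widths.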