diff --git a/docs/basics-of-parameter-estimation.txt b/docs/basics-of-parameter-estimation.txt
new file mode 100644
index 0000000000000000000000000000000000000000..a278d40ef593000ffe443f9a474ad875df48c925
--- /dev/null
+++ b/docs/basics-of-parameter-estimation.txt
@@ -0,0 +1,113 @@
+==============================
+Basics of parameter estimation
+==============================
+
+In this example, we'll go into some of the basics of parameter estimation and
+how they are implemented in `tupak`.
+
+Firstly, consider a situation where you have discrete data :math:`\{y_1,
+y_2, \ldots, y_n\}` taken at a set of times :math:`\{t_1, t_2, \ldots, t_n\}`.
+Further, we know that this data is generated by a process which can be modelled
+by a linear function of the form :math:`y(t) = m t + c`. We will refer to
+this model as :math:`H`. Given a set of data,
+how can we figure out the coefficients :math:`m` and :math:`c`?
+
+This is a well-studied problem known as `linear regression
+<https://en.wikipedia.org/wiki/Linear_regression>`_ and there already exist
+many ways to estimate the coefficients (you may, for example, already be
+familiar with a least-squares estimator).
+
+Here, we will describe a Bayesian approach using `nested sampling
+<https://en.wikipedia.org/wiki/Nested_sampling_algorithm>`_ which might feel
+like overkill for this problem. However, it is a useful way to introduce some
+of the basic features of `tupak` before seeing them in more complicated
+settings.
+
+The maths
+---------
+
+Given the data, the posterior distribution for the model parameters is given
+by
+
+.. math::
+
+   P(m, c| \{y_i, t_i\}, H) \propto P(\{y_i, t_i\}| m, c, H) \times P(m, c| H)
+
+where the first term on the right-hand side is the *likelihood* while the
+second is the *prior*. Under the model :math:`H`, we define the likelihood of
+a single data point :math:`(y_i, t_i)`, given values of the coefficients, to
+be Gaussian with known standard deviation :math:`\sigma`:
+
+.. math::
+
+   P(y_i, t_i| m, c, H) = \frac{1}{\sqrt{2\pi\sigma^2}}
+                         \exp\left(-\frac{(y_i - (m t_i + c))^2}{2\sigma^2}\right)
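+
+As a quick check of the notation, this density is straightforward to
+transcribe into code (a sketch for illustration only; the function name and
+argument order here are our own and not part of `tupak`):
+
+.. code-block:: python
+
+   import numpy as np
+
+   def single_point_likelihood(y_i, t_i, m, c, sigma):
+       """Gaussian likelihood of one data point under y(t) = m*t + c."""
+       residual = y_i - (m * t_i + c)
+       return np.exp(-residual**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)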
+
+Next, we assume that all data points are independent. As such,
+
+.. math::
+
+   P(\{y_i, t_i\}| m, c, H) = \prod_{i=1}^n P(y_i, t_i| m, c, H)
+
+When solving problems on a computer, it is often convenient to work with
+the log-likelihood. Indeed, a `tupak` *likelihood* must have a `log_likelihood()`
+method. For the normal distribution, the log-likelihood for *n* data points is
+
+.. math::
+
+   \log(P(\{y_i, t_i\}| m, c, H)) = -\frac{1}{2}\left[
+       \sum_{i=1}^n \left(\frac{y_i - (m t_i + c)}{\sigma}\right)^2
+       + n\log\left(2\pi\sigma^2\right)\right]
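+
+This expression maps directly onto a `log_likelihood()` method. Below is a
+minimal sketch of such a class; it deliberately skips the `tupak` base-class
+machinery (the full version appears in the example script in the next
+section) and keeps only the two pieces the sampler relies on: a `parameters`
+dictionary and a `log_likelihood()` method:
+
+.. code-block:: python
+
+   import numpy as np
+
+   class GaussianLikelihood(object):
+
+       def __init__(self, time, data, sigma, model):
+           self.time = time      # the times t_i
+           self.data = data      # the observations y_i
+           self.sigma = sigma    # known noise standard deviation
+           self.model = model    # e.g. a function implementing y(t) = m*t + c
+           self.parameters = dict(m=None, c=None)  # filled in by the sampler
+
+       def log_likelihood(self):
+           # Residuals between the data and the model prediction
+           residual = self.data - self.model(self.time, **self.parameters)
+           n = len(self.data)
+           return -0.5 * (np.sum((residual / self.sigma)**2)
+                          + n * np.log(2 * np.pi * self.sigma**2))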
+
+Finally, we need to specify a *prior*. In this case we will use uncorrelated
+uniform priors
+
+.. math::
+
+  P(m, c| H) = P(m| H) \times P(c| H) = \textrm{Unif}(0, 5) \times \textrm{Unif}(-2, 2)
+
+that is, the gradient :math:`m` is assumed to lie in :math:`[0, 5]` and the
+intercept :math:`c` in :math:`[-2, 2]`. In general, the choice of prior should
+be guided by physical knowledge about the system and not by the data in
+question.
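+
+In code, priors like these are collected in a dictionary keyed by parameter
+name. A sketch of what this might look like (the class path
+`tupak.prior.Uniform` is an assumption on our part; the example script in
+the next section shows the working form):
+
+.. code-block:: python
+
+   import tupak
+
+   # Hypothetical prior declaration: the keys must match the model's
+   # argument names; the ranges are the Unif bounds from the maths above.
+   priors = dict(m=tupak.prior.Uniform(0, 5, 'm'),
+                 c=tupak.prior.Uniform(-2, 2, 'c'))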
+
+The key point to take away here is that the **likelihood** and **prior** are
+the inputs to figuring out the **posterior**. There are many ways to go about
+this; we will now show how to do so in `tupak`. In this case, we explicitly
+write out the `GaussianLikelihood` so that one can see how the maths above
+gets implemented. The prior is specified implicitly, through the naming of
+the prior dictionary keys, which must match the arguments of the model
+function.
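+
+Schematically, the pieces combine as follows (a sketch only; the keyword
+names and sampler choice here are assumptions, and the complete, runnable
+version follows below):
+
+.. code-block:: python
+
+   # Hypothetical end-to-end call: likelihood and priors in, posterior out.
+   likelihood = GaussianLikelihood(time, data, sigma, model)
+   result = tupak.run_sampler(likelihood=likelihood, priors=priors,
+                              sampler='dynesty', outdir='outdir',
+                              label='linear_regression')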
+
+The code
+--------
+
+In the following example, also available under
+`examples/other_examples/linear_regression.py
+<https://git.ligo.org/Monash/tupak/tree/master/examples/other_examples/linear_regression.py>`_,
+we will step through the process of generating some simulated data, writing
+a likelihood and prior, and running nested sampling using `tupak`.
+
+.. literalinclude:: /../examples/other_examples/linear_regression.py
+   :language: python
+   :emphasize-lines: 12,15-18
+   :linenos:
+
+Running the script above produces a few images. Firstly, the plot of the data:
+
+.. image:: images/linear-regression_data.png
+
+The dashed red line here shows the simulated signal.
+
+Secondly, because we used the `plot=True` argument in `run_sampler`, we also
+generate a corner plot:
+
+.. image:: images/linear-regression_corner.png
+
+The solid lines indicate the simulated values, which are recovered quite
+easily. Note that you can also make a corner plot with `result.plot_corner()`.
+
+Final thoughts
+--------------
+
+While this example is somewhat trivial, hopefully you can see how this script
+can easily be modified to perform parameter estimation for almost any
+time-domain data where you can model the background noise as Gaussian and
+write the signal model as a Python function (i.e., replacing `model`).
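+
+As a concrete illustration, here is a sketch of swapping in a sinusoidal
+signal model; the function and prior ranges are hypothetical, chosen only to
+show the pattern, and the prior keys must again match the function's
+arguments:
+
+.. code-block:: python
+
+   import numpy as np
+
+   # Hypothetical replacement model: the likelihood class is unchanged,
+   # only the model function and the matching priors are swapped out.
+   def model(time, amplitude, frequency):
+       return amplitude * np.sin(2 * np.pi * frequency * time)
+
+   priors = dict(amplitude=tupak.prior.Uniform(0, 5, 'amplitude'),
+                 frequency=tupak.prior.Uniform(0, 2, 'frequency'))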
diff --git a/docs/images/linear-regression_corner.png b/docs/images/linear-regression_corner.png
new file mode 100644
index 0000000000000000000000000000000000000000..1ea7c2abb14a0f031b97bc6879e249105e84f23e
Binary files /dev/null and b/docs/images/linear-regression_corner.png differ
diff --git a/docs/images/linear-regression_data.png b/docs/images/linear-regression_data.png
new file mode 100644
index 0000000000000000000000000000000000000000..6c1acb5e472da473c1967d21e6e29eabf3b4134f
Binary files /dev/null and b/docs/images/linear-regression_data.png differ
diff --git a/docs/index.txt b/docs/index.txt
index dc2ce14609fcaa5f8d8f47bf1454e9a68b5d79c8..f77c3fccebe84f323a41742c9d415396ffd736f6 100644
--- a/docs/index.txt
+++ b/docs/index.txt
@@ -8,6 +8,7 @@ Welcome to tupak's documentation!
    :maxdepth: 3
    :caption: Contents:
 
+   basics-of-parameter-estimation
    likelihood
    samplers
    writing-documentation
diff --git a/examples/other_examples/linear_regression.py b/examples/other_examples/linear_regression.py
index 115eb1bb137becf4a356c256b7d6d083469bb37e..1aee5f0243bdee5180bc384c09aa70cfe83a950e 100644
--- a/examples/other_examples/linear_regression.py
+++ b/examples/other_examples/linear_regression.py
@@ -48,7 +48,7 @@ ax.plot(time, model(time, **injection_parameters), '--r', label='signal')
 ax.set_xlabel('time')
 ax.set_ylabel('y')
 ax.legend()
-fig.savefig('{}/data.png'.format(outdir))
+fig.savefig('{}/{}_data.png'.format(outdir, label))
 
 
 # Parameter estimation: we now define a Gaussian Likelihood class relevant for