# Bayesian update with continuous prior and likelihood

## Prior
Density function (normal): $$f_0(\theta) = P(\Theta=\theta) = {\frac {1}{\sigma_0 {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2}}\left({\frac {\theta-\mu_0}{\sigma_0}}\right)^{2} \right)$$
Density function (log-normal): $$f_0(\theta) = P(\Theta=\theta) = {\frac {1}{\theta\sigma_0 {\sqrt {2\pi }}}}\exp \left(-{\frac {\left(\ln \theta-\mu_0 \right)^{2}}{2\sigma_0^{2}}}\right)$$
Density function (Beta): $$f_0(\theta) = P(\Theta=\theta) = \frac {\theta^{\alpha_0 -1}(1-\theta)^{\beta_0 -1}}{\mathrm {B} (\alpha_0 ,\beta_0 )}$$ (See Wikipedia for the definition of the function $$\mathrm {B}$$ in the normalization constant.)
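As a concrete illustration (my own sketch, not taken from the tool itself), the three prior density families above can be evaluated directly from their formulas; the names `theta`, `mu0`, `sigma0`, `alpha0`, and `beta0` stand for the symbols in the equations.

```python
import math

def normal_pdf(theta, mu0, sigma0):
    """Normal prior density f0(theta) with mean mu0 and standard deviation sigma0."""
    z = (theta - mu0) / sigma0
    return math.exp(-0.5 * z * z) / (sigma0 * math.sqrt(2 * math.pi))

def lognormal_pdf(theta, mu0, sigma0):
    """Log-normal prior density f0(theta); mu0 and sigma0 describe ln(theta)."""
    z = (math.log(theta) - mu0) / sigma0
    return math.exp(-0.5 * z * z) / (theta * sigma0 * math.sqrt(2 * math.pi))

def beta_pdf(theta, alpha0, beta0):
    """Beta prior density f0(theta); B(a, b) = Gamma(a)Gamma(b) / Gamma(a + b)."""
    log_b = math.lgamma(alpha0) + math.lgamma(beta0) - math.lgamma(alpha0 + beta0)
    return theta ** (alpha0 - 1) * (1 - theta) ** (beta0 - 1) / math.exp(log_b)
```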

The distribution of log(R), where R is the ratio of two quantities, each with its own Beta distribution (one for the numerator and one for the denominator). Useful when R is a risk ratio and the study provides a point estimate and standard error for the log risk ratio.

The distribution is calculated using Monte Carlo simulation (with 10,000 samples) followed by kernel density estimation. This is a slight approximation, which grows as you go out towards the tails of the distribution. It is also a little slower than the other distribution families, which have closed-form densities (though Monte Carlo simulation is the fastest method I know of for this distribution).
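A minimal sketch of this Monte Carlo + KDE procedure (the Beta parameters and library choices below are illustrative assumptions of mine, not the tool's actual code):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 10_000  # the sample count mentioned above

# R = X / Y with X, Y independent Beta-distributed risks (made-up parameters)
x = rng.beta(3, 10, size=n)   # numerator risk
y = rng.beta(4, 12, size=n)   # denominator risk
log_r = np.log(x) - np.log(y)

# Kernel density estimate of the density of log(R); least accurate in the tails
log_r_density = gaussian_kde(log_r)
```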

## Likelihood
Likelihood function (normal): $$f_1(\theta) = P(E \mid \Theta=\theta) = {\frac {1}{\sigma_1 {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2}}\left({\frac {\theta-\mu_1}{\sigma_1}}\right)^{2} \right)$$
Likelihood function (log-normal): $$f_1(\theta) = P(E \mid \Theta=\theta) = {\frac {1}{\theta\sigma_1 {\sqrt {2\pi }}}}\exp \left(-{\frac {\left(\ln \theta-\mu_1 \right)^{2}}{2\sigma_1^{2}}}\right)$$
Likelihood function (Beta): $$f_1(\theta) = P(E \mid \Theta=\theta) = \frac {\theta^{\alpha_1 -1}(1-\theta)^{\beta_1 -1}}{\mathrm {B} (\alpha_1 ,\beta_1 )}$$ (See Wikipedia for the definition of the function $$\mathrm {B}$$ in the normalization constant.)
Likelihood function (binomial, with s successes and f failures): $$f_1(\theta) = P(E \mid \Theta=\theta) = \binom{s+f}{s} \theta^{s}(1-\theta)^{f}$$
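To make the update itself concrete, here is a hedged sketch (my own illustration, not the tool's implementation) that multiplies a Beta(2, 5) prior by a binomial likelihood with s = 7 and f = 3 on a grid and normalizes numerically. Conjugacy says the exact posterior is Beta(2 + 7, 5 + 3) = Beta(9, 8), which provides a check on the grid result.

```python
import numpy as np
from scipy.stats import beta, binom

theta = np.linspace(0.001, 0.999, 999)   # grid over the parameter
prior = beta.pdf(theta, 2, 5)            # f0: Beta(2, 5) prior
likelihood = binom.pmf(7, 10, theta)     # f1: s = 7 successes, f = 3 failures

unnorm = prior * likelihood              # posterior is proportional to this
posterior = unnorm / (unnorm.sum() * (theta[1] - theta[0]))  # normalize numerically
```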
One use case that may be of particular interest is updating a prior on a parameter B based on b, a statistical estimate of B (for example, from a study you conducted or are reading about).
• If b is a mean or a difference in means (such as a treatment effect), the likelihood distribution will be a normal distribution centered around b with a standard deviation equal to the standard error of b. The log-normal distribution may be a good choice of prior for positive quantities.
Quick link: Update from statistical estimate of a mean or treatment effect
• If b is a ratio, its error distribution converges to normality slowly. In the case of a risk ratio, both risks are positive, so the error distribution of log(b), which converges faster, is often used instead. In this case, you can take logs of both the prior and the likelihood, so that the likelihood becomes a normal distribution. A good choice of prior over a risk ratio is a ratio of Beta distributions, whose log is a difference of logs of Beta distributions.
Quick link: Update from statistical estimate of a risk ratio (log space)
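For the first bullet above, the update has a simple closed form worth knowing: with a normal prior and a normal likelihood, the posterior is also normal, its precision (inverse variance) is the sum of the two precisions, and its mean is the precision-weighted average of the prior mean and the estimate. A sketch with made-up numbers (for the risk-ratio case in the second bullet, the same likelihood applies to log(b) rather than to b):

```python
import math

mu0, sigma0 = 0.0, 1.0   # normal prior on the parameter B (illustrative values)
b, se = 0.4, 0.2         # study estimate of B and its standard error

prec0, prec1 = 1 / sigma0 ** 2, 1 / se ** 2            # precisions add
mu_post = (mu0 * prec0 + b * prec1) / (prec0 + prec1)  # precision-weighted mean
sigma_post = math.sqrt(1 / (prec0 + prec1))            # posterior standard deviation
```

Note that the posterior standard deviation is always smaller than both the prior's and the estimate's: the two sources of information reinforce each other.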
This tool may be helpful for converting between 95% confidence intervals, standard errors, and p-values.
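Under the usual normal approximation, those conversions are straightforward; a short sketch (the function names are mine, not the linked tool's):

```python
import math

def se_from_ci95(lower, upper):
    """Standard error implied by a 95% confidence interval (normal approximation)."""
    return (upper - lower) / (2 * 1.959964)

def p_from_estimate(estimate, se):
    """Two-sided p-value for estimate / se, using the normal CDF via erf."""
    z = abs(estimate) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
```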