# Bayesian update with continuous prior and likelihood

## Prior
Density function (normal prior): $$f_0(\theta) = P(\Theta=\theta) = \frac{1}{\sigma_0\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{\theta-\mu_0}{\sigma_0}\right)^{2}\right)$$
Density function (lognormal prior): $$f_0(\theta) = P(\Theta=\theta) = \frac{1}{\theta\sigma_0\sqrt{2\pi}}\exp\left(-\frac{(\ln\theta-\mu_0)^{2}}{2\sigma_0^{2}}\right)$$
Density function (beta prior): $$f_0(\theta) = P(\Theta=\theta) = \frac{\theta^{\alpha_0-1}(1-\theta)^{\beta_0-1}}{\mathrm{B}(\alpha_0,\beta_0)}$$ where $$\mathrm{B}(\alpha_0,\beta_0)$$ is the Beta function in the normalization constant (see Wikipedia for its definition).
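These three prior families are all available off the shelf in `scipy.stats`. As a sketch (the parameter values and the evaluation point `theta` below are made-up examples, not values from this page), note in particular that SciPy parametrizes the lognormal by `s` = $\sigma_0$ and `scale` = $e^{\mu_0}$:

```python
import numpy as np
from scipy.stats import norm, lognorm, beta

# Hypothetical example parameters.
mu0, sigma0 = 0.0, 1.0   # normal / lognormal parameters
a0, b0 = 2.0, 3.0        # beta shape parameters alpha_0, beta_0
theta = 0.4              # point at which to evaluate each density

# Normal prior: matches the first density formula directly.
normal_pdf = norm.pdf(theta, loc=mu0, scale=sigma0)

# Lognormal prior: SciPy uses s = sigma_0 and scale = exp(mu_0).
lognormal_pdf = lognorm.pdf(theta, s=sigma0, scale=np.exp(mu0))

# Beta prior: alpha_0 and beta_0 are the two shape parameters.
beta_pdf = beta.pdf(theta, a0, b0)
```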

The distribution of log(R), where R is the ratio of two Beta-distributed quantities (one Beta distribution in the numerator, another in the denominator). Useful when R is a risk ratio and the study provides a point estimate and standard error for the log risk ratio.

The distribution is calculated using Monte Carlo simulation (with 10,000 samples) followed by kernel density estimation. This introduces a slight approximation, which grows as you move out toward the tails of the distribution. It is also somewhat slower to compute than the closed-form distribution families, though Monte Carlo simulation is the fastest method I know of for this particular distribution.

The distribution of the ratio of two Beta-distributed quantities. Useful as a prior over a risk ratio.

The distribution is calculated using Monte Carlo simulation (with 10,000 samples) followed by kernel density estimation. This introduces a slight approximation, which grows as you move out toward the tails of the distribution. It is also somewhat slower to compute than the closed-form distribution families, though Monte Carlo simulation is the fastest method I know of for this particular distribution.
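The Monte Carlo procedure described above can be sketched as follows (the Beta shape parameters here are made-up examples, not values used by the calculator):

```python
import numpy as np
from scipy.stats import beta, gaussian_kde

rng = np.random.default_rng(seed=0)
n_samples = 10_000  # matches the sample count described above

# Hypothetical shape parameters for the numerator and denominator Betas.
numerator = beta.rvs(3, 7, size=n_samples, random_state=rng)
denominator = beta.rvs(4, 6, size=n_samples, random_state=rng)
ratio = numerator / denominator

# Kernel density estimate of the ratio's distribution;
# density(x) approximates the pdf of the ratio at x.
density = gaussian_kde(ratio)
```

For the log(R) variant, the same procedure applies with `np.log(ratio)` in place of `ratio`.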

## Likelihood
Likelihood function (normal): $$f_1(\theta) = P(E \mid \Theta=\theta) = \frac{1}{\sigma_1\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{\theta-\mu_1}{\sigma_1}\right)^{2}\right)$$
Likelihood function (lognormal): $$f_1(\theta) = P(E \mid \Theta=\theta) = \frac{1}{\theta\sigma_1\sqrt{2\pi}}\exp\left(-\frac{(\ln\theta-\mu_1)^{2}}{2\sigma_1^{2}}\right)$$
Likelihood function (beta): $$f_1(\theta) = P(E \mid \Theta=\theta) = \frac{\theta^{\alpha_1-1}(1-\theta)^{\beta_1-1}}{\mathrm{B}(\alpha_1,\beta_1)}$$ where $$\mathrm{B}(\alpha_1,\beta_1)$$ is the Beta function in the normalization constant (see Wikipedia for its definition).
Likelihood function (binomial): $$f_1(\theta) = P(E \mid \Theta=\theta) = \binom{s+f}{s}\theta^{s}(1-\theta)^{f}$$ where $$s$$ is the number of observed successes and $$f$$ the number of failures.