Bayesian update with continuous prior and likelihood

Prior
\( \mu_0\)
\( \sigma_0\)
Density function: $$ f_0(\theta) = P(\Theta=\theta) = \frac{1}{\sigma_0\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{\theta-\mu_0}{\sigma_0}\right)^{2}\right) $$
\( \mu_0\)
\( \sigma_0\)
Density function: $$ f_0(\theta) = P(\Theta=\theta) = \frac{1}{\theta\sigma_0\sqrt{2\pi}}\exp\left(-\frac{\left(\ln\theta-\mu_0\right)^{2}}{2\sigma_0^{2}}\right) $$
\( \alpha_0\)
\( \beta_0\)
Density function: $$ f_0(\theta) = P(\Theta=\theta) = \frac{\theta^{\alpha_0-1}(1-\theta)^{\beta_0-1}}{\mathrm{B}(\alpha_0,\beta_0)} $$ (See Wikipedia for the definition of the Beta function \( \mathrm{B} \) in the normalization constant.)
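These three densities all have standard closed forms. As a quick sanity check, here is a minimal sketch of evaluating each of them (assuming Python with SciPy, which this page does not itself specify; the parameter values are purely illustrative):

```python
# A sketch of evaluating the three closed-form prior densities above.
# SciPy is an assumption; the parameter values are illustrative.
import numpy as np
from scipy import stats

theta = 0.3

# Normal prior with mean mu_0 and standard deviation sigma_0.
mu0, sigma0 = 0.25, 0.1
print(stats.norm.pdf(theta, loc=mu0, scale=sigma0))

# Lognormal prior: mu_0 and sigma_0 parameterize log(theta),
# so SciPy's scale parameter is exp(mu_0).
print(stats.lognorm.pdf(theta, s=sigma0, scale=np.exp(mu0)))

# Beta prior with shape parameters alpha_0 and beta_0.
alpha0, beta0 = 2.0, 5.0
print(stats.beta.pdf(theta, a=alpha0, b=beta0))
```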
Numerator Beta, parameter \( a_1 \)
Numerator Beta, parameter \( b_1 \)
Denominator Beta, parameter \( a_2 \)
Denominator Beta, parameter \( b_2 \)

The distribution of \( \log(R) \), where \( R \) is the ratio of a Beta-distributed numerator to a Beta-distributed denominator. Useful when \( R \) is a risk ratio and the study provides a point estimate and standard error for the log risk ratio.

The distribution is computed by Monte Carlo simulation (with 10,000 samples) followed by kernel density estimation. This is a slight approximation, and the error grows as you go out towards the tails of the distribution. It's also a little slower than the other distribution families, though Monte Carlo simulation is the fastest method I know of for this distribution.
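For concreteness, here is a minimal sketch of that procedure (assuming Python with NumPy and SciPy; the Beta parameters are illustrative, not defaults of the tool):

```python
# Monte Carlo + kernel density estimation for the log Beta-ratio prior,
# as described above. NumPy/SciPy are assumptions; parameters are
# illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a1, b1 = 12, 88   # numerator Beta parameters
a2, b2 = 20, 80   # denominator Beta parameters

num = rng.beta(a1, b1, size=10_000)   # samples from the numerator Beta
den = rng.beta(a2, b2, size=10_000)   # samples from the denominator Beta
log_r = np.log(num / den)             # 10,000 samples of log(R)

f0 = stats.gaussian_kde(log_r)        # smooth density estimate of f_0
print(f0(0.0))                        # approximate density at log(R) = 0
```

The KDE has few samples to work with far from the center, which is why the approximation is least accurate in the tails.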

Likelihood
\( \mu_1\)
\( \sigma_1\)
2.5% quantile
97.5% quantile
Likelihood function: $$ f_1(\theta) = P(E \mid \Theta=\theta) = \frac{1}{\sigma_1\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{\theta-\mu_1}{\sigma_1}\right)^{2}\right) $$
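If the 2.5% and 97.5% fields specify the likelihood by its central 95% interval (my reading of those inputs, so treat this as an assumption), the conversion back to \( \mu_1 \) and \( \sigma_1 \) is, as a sketch:

```python
# Recovering (mu_1, sigma_1) from the 2.5% and 97.5% quantiles of a
# normal. That this is what the fields mean is an assumption; the
# numbers are illustrative.
from scipy import stats

q_lo, q_hi = -0.5, 1.3            # 2.5% and 97.5% quantile inputs
z = stats.norm.ppf(0.975)         # ~1.96
mu1 = (q_lo + q_hi) / 2           # the normal is symmetric about its mean
sigma1 = (q_hi - q_lo) / (2 * z)  # interval half-width is 1.96 * sigma
print(mu1, sigma1)
```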
\( \mu_1\)
\( \sigma_1\)
Likelihood function: $$ f_1(\theta) = P(E \mid \Theta=\theta) = \frac{1}{\theta\sigma_1\sqrt{2\pi}}\exp\left(-\frac{\left(\ln\theta-\mu_1\right)^{2}}{2\sigma_1^{2}}\right) $$
\( \alpha_1\)
\( \beta_1\)
Likelihood function: $$ f_1(\theta) = P(E \mid \Theta=\theta) = \frac{\theta^{\alpha_1-1}(1-\theta)^{\beta_1-1}}{\mathrm{B}(\alpha_1,\beta_1)} $$ (See Wikipedia for the definition of the Beta function \( \mathrm{B} \) in the normalization constant.)
successes \(s\)
failures \(f\)
Likelihood function: $$ f_1(\theta) = P(E \mid \Theta=\theta) = \binom{s+f}{s} \theta^{s}(1-\theta)^{f} $$
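With a Beta prior, this binomial likelihood gives the posterior a closed form, \( \mathrm{Beta}(\alpha_0+s, \beta_0+f) \). A minimal grid-approximation sketch of the update, checked against that closed form (SciPy assumed, numbers illustrative):

```python
# Grid approximation of posterior ∝ prior × likelihood for a Beta prior
# and binomial likelihood, checked against the conjugate closed form.
# SciPy is an assumption; the numbers are illustrative.
import numpy as np
from scipy import stats

alpha0, beta0 = 2.0, 2.0    # prior parameters
s, f = 7, 3                 # observed successes and failures

theta = np.linspace(1e-6, 1 - 1e-6, 2001)
prior = stats.beta.pdf(theta, alpha0, beta0)
likelihood = stats.binom.pmf(s, s + f, theta)

posterior = prior * likelihood
posterior /= np.trapz(posterior, theta)   # normalize numerically

exact = stats.beta.pdf(theta, alpha0 + s, beta0 + f)
print(np.max(np.abs(posterior - exact)))  # small grid error
```

The same grid recipe (multiply prior by likelihood pointwise, then normalize) works for any of the prior/likelihood pairings above, including the non-conjugate ones.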
From
To
One use case that may be of particular interest is updating a prior on a parameter B based on b, a statistical estimate of B (for example, from a study you conducted or are reading about). This tool may be helpful for converting between 95% confidence intervals, standard errors, and p-values.
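As a sketch of those conversions for an estimate b such as a log risk ratio (assuming a symmetric 95% interval on the normal scale; the numbers are illustrative):

```python
# Converting between a 95% CI, a standard error, and a two-sided
# p-value for an estimate b, assuming normality. Numbers illustrative.
from scipy import stats

lo, hi = -0.10, 0.62             # reported 95% CI for b = log(R)
b = (lo + hi) / 2                # point estimate (midpoint of the CI)
se = (hi - lo) / (2 * 1.959964)  # CI half-width is 1.96 standard errors
z = b / se                       # z statistic
p = 2 * stats.norm.sf(abs(z))    # two-sided p-value
print(b, se, z, p)
```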