TransportMaps.Likelihoods.LikelihoodBase
Module Contents¶
Classes¶
- LogLikelihood: Abstract class for log-likelihood \(\log \pi({\bf y} \vert {\bf x})\)
- AdditiveLogLikelihood: Log-likelihood \(\log \pi({\bf y} \vert {\bf x})=\log\pi({\bf y} - T({\bf x}))\)
- AdditiveNormalLogLikelihood: Define the log-likelihood for the additive Gaussian model
- AdditiveLinearNormalLogLikelihood: Define the log-likelihood for the additive linear Gaussian model
- AdditiveConditionallyLinearNormalLogLikelihood: Define the log-likelihood for the additive conditionally linear Gaussian model
- LogisticLogLikelihood: Class for log-likelihood of \({\bf y}\), where \(y_i \vert {\bf x}; {\bf f}_i \sim \mathrm{Bernoulli}(\mathrm{logit}^{-1}({\bf f}_i^\top {\bf x}))\)
- TwoParametersLogisticLogLikelihood: See Stan example
- IndependentLogLikelihood: Handling likelihoods in the form \(\pi({\bf y}\vert{\bf x}) = \prod_{i=1}^{n}\pi_i({\bf y}_i\vert{\bf x}_i)\)
Attributes¶
- class TransportMaps.Likelihoods.LikelihoodBase.LogLikelihood(y, dim)[source]¶
Bases:
TransportMaps.Maps.Functionals.Functional
Abstract class for log-likelihood \(\log \pi({\bf y} \vert {\bf x})\)
Note that \(\log\pi:\mathbb{R}^d \rightarrow \mathbb{R}\) is considered a function of \({\bf x}\), while the data \({\bf y}\) is fixed.
- abstract evaluate(x, *args, **kwargs)[source]¶
[Abstract] Evaluate \(\log\pi({\bf y} \vert {\bf x})\).
- abstract grad_x(x, *args, **kwargs)[source]¶
[Abstract] Evaluate \(\nabla_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
- tuple_grad_x(x, cache=None, **kwargs)[source]¶
Evaluate \(\left(\log\pi({\bf y} \vert {\bf x}),\nabla_{\bf x}\log\pi({\bf y} \vert {\bf x})\right)\).
- abstract hess_x(x, *args, **kwargs)[source]¶
[Abstract] Evaluate \(\nabla^2_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
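The abstract interface above can be illustrated outside the library. The following standalone sketch (plain Python, not the TransportMaps classes themselves) implements `evaluate`, `grad_x`, `tuple_grad_x` and `hess_x` for the scalar model \(y \vert x \sim \mathcal{N}(x, \sigma^2)\), with the data \(y\) held fixed and \(x\) the variable:

```python
import math

# Standalone analogue of the LogLikelihood interface for the hypothetical
# scalar model y | x ~ N(x, sigma^2); y is fixed data, x is the variable.
class GaussianLogLikelihood:
    def __init__(self, y, sigma):
        self.y = y
        self.sigma = sigma

    def evaluate(self, x):
        # log pi(y | x) = -0.5*((y - x)/sigma)^2 - log(sigma*sqrt(2*pi))
        r = (self.y - x) / self.sigma
        return -0.5 * r * r - math.log(self.sigma * math.sqrt(2.0 * math.pi))

    def grad_x(self, x):
        # d/dx log pi(y | x) = (y - x) / sigma^2
        return (self.y - x) / self.sigma**2

    def tuple_grad_x(self, x):
        # (value, gradient) pair, mirroring tuple_grad_x above
        return self.evaluate(x), self.grad_x(x)

    def hess_x(self, x):
        # second derivative is the constant -1/sigma^2
        return -1.0 / self.sigma**2
```

Since this log-density is quadratic in \(x\), `grad_x` vanishes at \(x = y\) and `hess_x` is constant.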
- class TransportMaps.Likelihoods.LikelihoodBase.AdditiveLogLikelihood(y, pi, T)[source]¶
Bases:
LogLikelihood
Log-likelihood \(\log \pi({\bf y} \vert {\bf x})=\log\pi({\bf y} - T({\bf x}))\)
- Parameters:
  - y (ndarray [\(d_y\)]) – data
  - pi (Distribution) – distribution \(\nu_\pi\)
  - T (Map) – map \(T:\mathbb{R}^d\rightarrow\mathbb{R}^{d_y}\)
- evaluate(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\log\pi({\bf y} \vert {\bf x})\).
- grad_x(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\nabla_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
- Parameters:
- Returns:
(ndarray [\(m,1,d\)]) – gradient evaluations
Todo
caching is not implemented
- tuple_grad_x(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\log\pi({\bf y} \vert {\bf x}), \nabla_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
- Parameters:
- Returns:
(tuple) – function and gradient evaluations
Todo
caching is not implemented
- hess_x(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\nabla^2_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
- Parameters:
- Returns:
(ndarray [\(m,1,d,d\)]) – Hessian evaluations
Todo
caching is not implemented
- class TransportMaps.Likelihoods.LikelihoodBase.AdditiveNormalLogLikelihood(y, T, mu, covariance=None, precision=None)[source]¶
Bases:
AdditiveLogLikelihood
Define the log-likelihood for the additive Gaussian model
The model is
\[{\bf y} = {\bf T}({\bf x}) + \varepsilon \;, \quad \varepsilon \sim \mathcal{N}(\mu, \Sigma)\]
where \(T:\mathbb{R}^{d_x}\rightarrow\mathbb{R}^{d_y}\), \(\mu \in \mathbb{R}^{d_y}\) and \(\Sigma \in \mathbb{R}^{d_y \times d_y}\) is symmetric positive definite.
- __init__(y, T, mu, covariance=None, precision=None)[source]¶
- Parameters:
- y (ndarray [\(d_y\)]) – data
- T (Map) – map \(T:\mathbb{R}^d\rightarrow\mathbb{R}^{d_y}\)
- mu (ndarray [\(d_y\)] or Map) – noise mean or parametric map returning the mean
- covariance (ndarray [\(d_y,d_y\)] or Map) – noise covariance or parametric map returning the covariance
- precision (ndarray [\(d_y,d_y\)] or Map) – noise precision matrix or parametric map returning the precision matrix
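As a sanity check of this additive construction, here is a minimal plain-Python sketch for the scalar case (the forward map \(T(x) = x^2\) and all parameter values are hypothetical choices, not part of the library): the log-likelihood is simply the noise log-density evaluated at the residual \(y - T(x)\).

```python
import math

# Scalar additive-Gaussian model: y = T(x) + eps, eps ~ N(mu, sigma^2),
# so log pi(y | x) is the noise log-density at the residual y - T(x).
def normal_logpdf(z, mu, sigma):
    r = (z - mu) / sigma
    return -0.5 * r * r - math.log(sigma * math.sqrt(2.0 * math.pi))

def additive_normal_loglik(y, T, mu, sigma):
    # Returns log pi(y | x) as a function of x.
    return lambda x: normal_logpdf(y - T(x), mu, sigma)

# Illustrative nonlinear forward map (hypothetical choice).
T = lambda x: x * x
loglik = additive_normal_loglik(y=4.0, T=T, mu=0.0, sigma=1.0)
```

At \(x = 2\) the residual is zero, so `loglik(2.0)` attains the maximum value \(-\tfrac{1}{2}\log(2\pi)\).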
- class TransportMaps.Likelihoods.LikelihoodBase.AdditiveLinearNormalLogLikelihood(y, c, T, mu, covariance=None, precision=None)[source]¶
Bases:
AdditiveNormalLogLikelihood
Define the log-likelihood for the additive linear Gaussian model
The model is
\[{\bf y} = {\bf c} + {\bf T}{\bf x} + \varepsilon \;, \quad \varepsilon \sim \mathcal{N}(\mu, \Sigma)\]
where \(T \in \mathbb{R}^{d_y \times d_x}\), \(\mu \in \mathbb{R}^{d_y}\) and \(\Sigma \in \mathbb{R}^{d_y \times d_y}\) is symmetric positive definite.
- Parameters:
- y (ndarray [\(d_y\)]) – data
- c (ndarray [\(d_y\)] or Map) – system constant or parametric map returning the constant
- T (ndarray [\(d_y,d_x\)] or Map) – system matrix or parametric map returning the system matrix
- mu (ndarray [\(d_y\)] or Map) – noise mean or parametric map returning the mean
- covariance (ndarray [\(d_y,d_y\)] or Map) – noise covariance or parametric map returning the covariance
- precision (ndarray [\(d_y,d_y\)] or Map) – noise precision matrix or parametric map returning the precision matrix
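In the linear case the log-likelihood is quadratic in \({\bf x}\), so the gradient has the closed form \(T^\top \Sigma^{-1}({\bf y} - {\bf c} - T{\bf x} - \mu)\). A scalar plain-Python sketch (hypothetical parameter values, not library calls), with a finite-difference check of the gradient:

```python
import math

# Scalar linear-Gaussian model: y = c + T*x + eps, eps ~ N(mu, sigma^2).
def loglik(x, y, c, T, mu, sigma):
    r = (y - c - T * x - mu) / sigma
    return -0.5 * r * r - math.log(sigma * math.sqrt(2.0 * math.pi))

def grad_loglik(x, y, c, T, mu, sigma):
    # Closed-form gradient: T * (y - c - T*x - mu) / sigma^2.
    return T * (y - c - T * x - mu) / sigma**2

# Hypothetical parameter values for illustration.
params = dict(y=3.0, c=1.0, T=2.0, mu=0.0, sigma=0.5)
```

The gradient vanishes at the least-squares point \(x = (y - c - \mu)/T\).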
- class TransportMaps.Likelihoods.LikelihoodBase.AdditiveConditionallyLinearNormalLogLikelihood(y, c, T, mu, covariance=None, precision=None, active_vars_system=[], active_vars_distribution=[], coeffs=None)[source]¶
Bases:
AdditiveLogLikelihood
Define the log-likelihood for the additive conditionally linear Gaussian model
The model is
\[{\bf y} = {\bf c}(\theta) + {\bf T}(\theta){\bf x} + \varepsilon \;, \quad \varepsilon \sim \mathcal{N}(\mu(\theta), \Sigma(\theta))\]
where \(T(\theta) \in \mathbb{R}^{d_y \times d_x}\), \(\mu(\theta) \in \mathbb{R}^{d_y}\) and \(\Sigma(\theta) \in \mathbb{R}^{d_y \times d_y}\) is symmetric positive definite.
- Parameters:
- y (ndarray [\(d_y\)]) – data
- c (ndarray [\(d_y\)] or Map) – system constant or parametric map returning the constant
- T (ndarray [\(d_y,d_x\)] or Map) – system matrix or parametric map returning the system matrix
- mu (ndarray [\(d_y\)] or Map) – noise mean or parametric map returning the mean
- covariance (ndarray [\(d_y,d_y\)] or Map) – noise covariance or parametric map returning the covariance
- precision (ndarray [\(d_y,d_y\)] or Map) – noise precision matrix or parametric map returning the precision matrix
- active_vars_system (list of int) – active variables identifying the parameters for \(c(\theta), T(\theta)\)
- active_vars_distribution (list of int) – active variables identifying the parameters for \(\mu(\theta), \Sigma(\theta)\)
- coeffs (ndarray) – initialization coefficients
- class TransportMaps.Likelihoods.LikelihoodBase.LogisticLogLikelihood(y, F)[source]¶
Bases:
LogLikelihood
Class for log-likelihood of \({\bf y}\), where \(y_i \vert {\bf x}; {\bf f}_i \sim \mathrm{Bernoulli}(\mathrm{logit}^{-1}({\bf f}_i^\top {\bf x}))\)
This implements the vectorized log-likelihood:
\[\log\pi({\bf y}\vert{\bf x}; {\bf f}_{1:N}) = \sum_{i=1}^N y_i \log\frac{1}{1+\exp(-{\bf f}_i^\top {\bf x})} + (1 - y_i) \log\left[ 1 - \frac{1}{1+\exp(-{\bf f}_i^\top {\bf x})} \right]\]
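A plain-Python sketch of this Bernoulli log-likelihood (not the library class; `F` and `y` below are illustrative inputs), summing \(y_i\log p_i + (1-y_i)\log(1-p_i)\) with \(p_i = \mathrm{logit}^{-1}({\bf f}_i^\top {\bf x})\):

```python
import math

def sigmoid(t):
    # Inverse logit: 1 / (1 + exp(-t)).
    return 1.0 / (1.0 + math.exp(-t))

def logistic_loglik(x, F, y):
    # F: list of feature vectors f_i; y: list of 0/1 labels; x: parameter vector.
    total = 0.0
    for f_i, y_i in zip(F, y):
        p = sigmoid(sum(fj * xj for fj, xj in zip(f_i, x)))
        total += y_i * math.log(p) + (1.0 - y_i) * math.log(1.0 - p)
    return total
```

At \({\bf x} = 0\) every \(p_i = 1/2\), so the log-likelihood reduces to \(N\log\tfrac{1}{2}\) regardless of the labels.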
- class TransportMaps.Likelihoods.LikelihoodBase.TwoParametersLogisticLogLikelihood(y)[source]¶
Bases:
LogLikelihood
See Stan example
\(I\): number of words, \(J\): number of people
- class TransportMaps.Likelihoods.LikelihoodBase.IndependentLogLikelihood(factors)[source]¶
Bases:
TransportMaps.Maps.Functionals.Functional
Handling likelihoods in the form \(\pi({\bf y}\vert{\bf x}) = \prod_{i=1}^{n}\pi_i({\bf y}_i\vert{\bf x}_i)\)
- Parameters:
factors (list of tuple) – each tuple contains a log-likelihood (LogLikelihood) and the subset of variables of \({\bf x}\) on which it acts.
Example
Let \(\pi(y_0,y_2\vert x_0,x_1,x_2)= \pi_0(y_0\vert x_0) \pi_2(y_2\vert x_2)\).
>>> factors = [(ll0, [0]),
>>>            (ll2, [2])]
>>> ll = IndependentLogLikelihood(factors)
- append(factor)[source]¶
Add a new factor to the likelihood
- Parameters:
factor (tuple) – tuple containing a log-likelihood (LogLikelihood) and the subset of variables of \({\bf x}\) on which it acts.
Example
Let \(\pi(y_0,y_2\vert x_0,x_1,x_2)= \pi_0(y_0\vert x_0) \pi_2(y_2\vert x_2)\) and let’s add the factor \(\pi_1(y_1\vert x_1)\), obtaining:
\[\pi(y_0,y_1,y_2\vert x_0,x_1,x_2)= \pi_0(y_0\vert x_0) \pi_1(y_1\vert x_1) \pi_2(y_2\vert x_2)\]
>>> factor = (ll1, [1])
>>> ll.append(factor)
- evaluate(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\log\pi({\bf y} \vert {\bf x})\).
- grad_x(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\nabla_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
- tuple_grad_x(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\log\pi({\bf y} \vert {\bf x}), \nabla_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
- hess_x(x, idxs_slice=slice(None, None, None), cache=None, **kwargs)[source]¶
Evaluate \(\nabla^2_{\bf x}\log\pi({\bf y} \vert {\bf x})\).
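The aggregation pattern above can be sketched in plain Python (hypothetical factor callables, not the library objects): each factor sees only its active sub-vector of \({\bf x}\), the total log-likelihood is the sum over factors, and each factor's gradient is scattered back into the full-dimensional gradient.

```python
# factors: list of (loglik_fn, active_vars) pairs; each loglik_fn takes the
# sub-vector of x selected by its active variables.
def independent_evaluate(factors, x):
    return sum(fn([x[i] for i in av]) for fn, av in factors)

def independent_grad(factors, x, grad_fns):
    # grad_fns[k] returns the gradient of factor k w.r.t. its sub-vector;
    # entries of x outside every active set get zero gradient.
    g = [0.0] * len(x)
    for (fn, av), gfn in zip(factors, grad_fns):
        sub_g = gfn([x[i] for i in av])
        for j, i in enumerate(av):
            g[i] += sub_g[j]
    return g
```

This mirrors the example above: with factors on variables 0 and 2, the gradient component for the inactive variable 1 stays zero.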