TransportMaps.Maps.Decomposable.SequentialInferenceMaps

Module Contents

Classes

- LiftedTransportMap – Given a map \(T\) of dimension \(d_\theta + 2 d_{\bf x}\), where \(d_\theta\) is the number of hyper-parameters and \(d_{\bf x}\) is the state dimension, lift it to a map over a \(\hat{d}\)-dimensional state.
- SequentialMarkovChainTransportMap – Compose the lower triangular 1-lag smoothing maps into the smoothing map.
- class TransportMaps.Maps.Decomposable.SequentialInferenceMaps.LiftedTransportMap(idx, tm, dim, hyper_dim)[source]

  Bases: TransportMaps.Maps.TransportMap

  Given a map \(T\) of dimension \(d_\theta + 2 d_{\bf x}\), where \(d_\theta\) is the number of hyper-parameters and \(d_{\bf x}\) is the state dimension, lift it to a map over a \(\hat{d}\)-dimensional state.
  Let

  \[\begin{split}T(\Theta, {\bf x}) = \begin{bmatrix} T^{(0)}(\Theta) \\ T^{(1)}(\Theta, {\bf x}) \end{bmatrix}\end{split}\]

  be the map to be lifted at index \(i\) into a \(\hat{d}\)-dimensional map. The lifted map is then

  \[\begin{split}T_{\rm lift}(\Theta, {\bf x}) = \left[ \begin{array}{c} T^{(0)}(\Theta) \\ x_{1} \\ \vdots \\ x_{i-1} \\ T^{(1)}(\Theta, x_{i}, \ldots, x_{i+2 d_{\bf x}}) \\ x_{i+2d_{\bf x}+1} \\ \vdots \\ x_{\hat{d}} \end{array} \right]\end{split}\]

  - Parameters:
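The lifting above can be sketched in plain NumPy. This is a minimal illustration, not the library's API: `lift_map` is a hypothetical helper, \(T\) is assumed to be given as two callables `T0` (acting on the hyper-parameters) and `T1` (acting on the local \(2 d_{\bf x}\)-wide state window), and 0-based indexing is used for the window position:

```python
import numpy as np

def lift_map(T0, T1, idx, d_hat, hyper_dim, d_x=1):
    """Pad a local map with the identity to act on a d_hat-dimensional state.

    T0 : callable on the hyper-parameters theta, shape (hyper_dim,)
    T1 : callable on (theta, window) with window of length 2*d_x
    idx: 0-based index i at which the local block acts
    """
    def T_lift(z):
        theta = z[:hyper_dim]
        x = z[hyper_dim:]
        assert x.shape[0] == d_hat
        out = x.copy()
        # the 2*d_x-wide window starting at idx is transformed by T1 ...
        out[idx:idx + 2 * d_x] = T1(theta, x[idx:idx + 2 * d_x])
        # ... while every other state component passes through unchanged
        return np.concatenate([T0(theta), out])
    return T_lift
```

Because the map is the identity outside the window, its Jacobian keeps the lower triangular structure of the original \(T\).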
- property n_coeffs[source]

  Returns the total number of coefficients.

  - Returns: (int) – total number \(N\) of coefficients characterizing the map.
- property coeffs[source]

  Returns the actual value of the coefficients.

  - Returns: (ndarray [\(N\)]) – coefficients.
- evaluate(x, precomp=None, idxs_slice=slice(None), cache=None)[source]

  [Abstract] Evaluate the map \(T\) at the points \({\bf x} \in \mathbb{R}^{m \times d_x}\).

  - Parameters:
  - Returns: (ndarray [\(m,d_y\)]) – transformed points
  - Raises: NotImplementedError – to be implemented in sub-classes
- inverse(x, *args, **kwargs)[source]

  [Abstract] Compute \(T^{-1}({\bf x})\).

  - Parameters:
  - Returns: (ndarray [\(m,d\)]) – \(T^{-1}({\bf x})\) for every evaluation point
- grad_x(x, precomp=None, idxs_slice=slice(None), cache=None)[source]

  [Abstract] Evaluate the gradient \(\nabla_{\bf x}T\) at the points \({\bf x} \in \mathbb{R}^{m \times d_x}\).

  - Parameters:
  - Returns: (ndarray [\(m,d_y,d_x\)]) – gradient \(\nabla_{\bf x}T\) at every evaluation point
  - Raises: NotImplementedError – to be implemented in sub-classes
- hess_x(x, precomp=None, idxs_slice=slice(None), cache=None)[source]

  [Abstract] Evaluate the Hessian \(\nabla^2_{\bf x}T\) at the points \({\bf x} \in \mathbb{R}^{m \times d_x}\).

  - Parameters:
  - Returns: (ndarray [\(m,d_y,d_x,d_x\)]) – Hessian \(\nabla^2_{\bf x}T\) at every evaluation point
  - Raises: NotImplementedError – to be implemented in sub-classes
- grad_x_inverse(x, *args, **kwargs)[source]

  [Abstract] Compute \(\nabla_{\bf x} T^{-1}({\bf x})\).

  - Parameters:
  - Returns: (ndarray [\(m,d,d\)]) – gradient matrices for every evaluation point.
  - Raises: NotImplementedError – to be implemented in sub-classes
- hess_x_inverse(x, *args, **kwargs)[source]

  [Abstract] Compute \(\nabla_{\bf x}^2 T^{-1}({\bf x})\).

  - Parameters:
  - Returns: (ndarray [\(m,d,d\)]) – Hessian tensors for every evaluation point.
  - Raises: NotImplementedError – to be implemented in sub-classes
- log_det_grad_x(x, precomp=None, idxs_slice=slice(None), cache=None)[source]

  [Abstract] Compute \(\log \det \nabla_{\bf x} T({\bf x}, {\bf a})\).

  - Parameters:
  - Returns: (ndarray [\(m\)]) – \(\log \det \nabla_{\bf x} T({\bf x}, {\bf a})\) at every evaluation point
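For a lower triangular map the Jacobian \(\nabla_{\bf x} T\) is lower triangular, so its log-determinant reduces to the sum of the logs of the diagonal partial derivatives. A minimal sketch with a linear triangular map (an illustration of the quantity being computed, not the library's implementation):

```python
import numpy as np

# Linear lower triangular map T(x) = L x: the Jacobian is L at every point,
# and for a triangular matrix the determinant is the product of the
# (here positive) diagonal entries.
L = np.array([[2.0, 0.0, 0.0],
              [0.5, 1.5, 0.0],
              [0.3, 0.2, 3.0]])

def log_det_grad_x(x):
    # One value per evaluation point; constant for a linear map.
    m = x.shape[0]
    return np.full(m, np.log(np.diag(L)).sum())

x = np.random.randn(5, 3)
# Cross-check against a dense log-determinant computation
sign, logdet = np.linalg.slogdet(L)
assert np.allclose(log_det_grad_x(x), logdet)
```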
- grad_x_log_det_grad_x(x, precomp=None, idxs_slice=slice(None), cache=None)[source]

  [Abstract] Compute \(\nabla_{\bf x} \log \det \nabla_{\bf x} T({\bf x}, {\bf a})\).

  - Parameters:
  - Returns: (ndarray [\(m,d\)]) – \(\nabla_{\bf x} \log \det \nabla_{\bf x} T({\bf x}, {\bf a})\) at every evaluation point
- hess_x_log_det_grad_x(x, precomp=None, idxs_slice=slice(None), cache=None)[source]

  [Abstract] Compute \(\nabla^2_{\bf x} \log \det \nabla_{\bf x} T({\bf x}, {\bf a})\).

  - Parameters:
  - Returns: (ndarray [\(m,d,d\)]) – \(\nabla^2_{\bf x} \log \det \nabla_{\bf x} T({\bf x}, {\bf a})\) at every evaluation point
- log_det_grad_x_inverse(x, *args, **kwargs)[source]

  [Abstract] Compute \(\log \det \nabla_{\bf x} T^{-1}({\bf x}, {\bf a})\).

  - Parameters:
  - Returns: (ndarray [\(m\)]) – \(\log \det \nabla_{\bf x} T^{-1}({\bf x}, {\bf a})\) at every evaluation point
- grad_x_log_det_grad_x_inverse(x, *args, **kwargs)[source]

  [Abstract] Compute \(\nabla_{\bf x} \log \det \nabla_{\bf x} T^{-1}({\bf x}, {\bf a})\).

  - Parameters:
  - Returns: (ndarray [\(m,d\)]) – \(\nabla_{\bf x} \log \det \nabla_{\bf x} T^{-1}({\bf x}, {\bf a})\) at every evaluation point
- hess_x_log_det_grad_x_inverse(x, *args, **kwargs)[source]

  [Abstract] Compute \(\nabla^2_{\bf x} \log \det \nabla_{\bf x} T^{-1}({\bf x}, {\bf a})\).

  - Parameters:
  - Returns: (ndarray [\(m,d,d\)]) – \(\nabla^2_{\bf x} \log \det \nabla_{\bf x} T^{-1}({\bf x}, {\bf a})\) at every evaluation point
- class TransportMaps.Maps.Decomposable.SequentialInferenceMaps.SequentialMarkovChainTransportMap(tm_list, hyper_dim)[source]

  Bases: TransportMaps.Maps.ListCompositeTransportMap

  Compose the lower triangular 1-lag smoothing maps into the smoothing map.

  - Parameters:

  Warning: this works only for one-dimensional states! It will be extended to higher-dimensional states in the future.
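The composition itself can be sketched generically. This is a hypothetical `compose` helper, not the `ListCompositeTransportMap` API, and the application order (innermost map last in the list) is an assumption made for the illustration:

```python
import numpy as np

def compose(tm_list):
    """Compose maps so that compose([T0, T1])(x) == T0(T1(x))."""
    def T(x):
        for tm in reversed(tm_list):  # apply the innermost map first
            x = tm(x)
        return x
    return T

# Affine stand-ins for the lifted 1-lag smoothing maps
f = lambda x: 2.0 * x
g = lambda x: x + 1.0
T = compose([f, g])  # T(x) = f(g(x)) = 2 * (x + 1)
assert np.isclose(T(3.0), 8.0)
```

By the chain rule, the log-determinant of the composed Jacobian is the sum of the components' log-determinants evaluated along the chain, which is what makes composed lower triangular maps convenient for smoothing densities.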