TransportMaps.Algorithms.SequentialInference.SequentialInferenceBase
¶
Module Contents¶
Classes¶
Filter — Perform the on-line filtering of a sequential hidden Markov chain.
Smoother — Perform the on-line smoothing and filtering of a sequential hidden Markov chain.
- class TransportMaps.Algorithms.SequentialInference.SequentialInferenceBase.Filter(pi_hyper=None, **kwargs)[source]¶
Bases:
TransportMaps.ObjectBase.TMO
Perform the on-line filtering of a sequential hidden Markov chain.
Given the prior distribution on the hyper-parameters \(\pi(\Theta)\), provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)) and to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).
For more details see also [TM3] and the tutorial.
- Parameters:
pi_hyper (Distribution) – prior distribution on the hyper-parameters \(\pi(\Theta)\)
Note
This is a super-class; some of its methods must be implemented by sub-classes.
- property filtering_map_list[source]¶
Returns the maps \(\{ \widetilde{\mathfrak{M}}_i({\bf x}_\theta, {\bf x}_{i+1}) \}_{i=0}^{k-1}\) pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).
The maps \(\widetilde{\mathfrak{M}}_k({\bf x}_\theta, {\bf x}_{k+1})\) are defined as follows:
\[\begin{split}\widetilde{\mathfrak{M}}_k({\bf x}_\theta, {\bf x}_{k+1}) = \left[\begin{array}{l} \mathfrak{M}_0^\Theta \circ \cdots \circ \mathfrak{M}_{k}^\Theta ({\bf x}_\theta) \\ \mathfrak{M}_k^1\left({\bf x}_\theta, {\bf x}_{k+1}\right) \end{array}\right] = \left[\begin{array}{l} \mathfrak{H}_{k}({\bf x}_\theta) \\ \mathfrak{M}_k^1\left({\bf x}_\theta, {\bf x}_{k+1}\right) \end{array}\right]\end{split}\]
- Returns:
(list of TransportMap) – list of transport maps \(\widetilde{\mathfrak{M}}_k({\bf x}_\theta, {\bf x}_{k+1})\)
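As a toy illustration of what "pushing forward \(\mathcal{N}(0,{\bf I})\)" means in the simplest (affine) case, the following self-contained NumPy sketch draws reference samples and maps them to a target Gaussian. The coefficients are made up for illustration; this is not the TransportMaps API.

```python
import numpy as np

# For an affine map M(x) = A x + b, samples of the target distribution
# are obtained by drawing x ~ N(0, I) and evaluating M(x).  The
# pushforward of N(0, I) under M is N(b, A A^T).
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.0], [1.0, 0.5]])   # hypothetical map coefficients
b = np.array([1.0, -1.0])

x = rng.standard_normal((100_000, 2))    # reference samples ~ N(0, I)
z = x @ A.T + b                          # pushforward samples

print(np.allclose(z.mean(axis=0), b, atol=0.1))        # sample mean ≈ b
print(np.allclose(np.cov(z.T), A @ A.T, atol=0.1))     # sample cov ≈ A A^T
```

The same sampling pattern applies to the (nonlinear, triangular) maps returned by this property: evaluate the map on standard-normal samples to obtain samples of the filtering/forecast distribution.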
- assimilate(pi, ll, *args, **kwargs)[source]¶
Assimilate one piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\).
Given the new piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\), determine the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).
- Parameters:
pi (Distribution) – transition distribution \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\)
ll (LogLikelihood) – log-likelihood \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\). The value None stands for a missing observation.
*args – arguments required by the particular sub-class implementations of _assimilation_step().
**kwargs – arguments required by the particular sub-class implementations of _assimilation_step().
Note
This method requires the implementation of _assimilation_step() in sub-classes.
- abstract _assimilation_step(*args, **kwargs)[source]¶
[Abstract] Implements the map approximation for one step in the sequential inference.
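The delegation pattern described here (assimilate() forwards to the sub-class's _assimilation_step()) can be sketched schematically. All names other than assimilate() and _assimilation_step() are hypothetical stand-ins; this is not the TransportMaps implementation.

```python
# Schematic of the super-class pattern: the base class exposes
# assimilate(), which delegates to _assimilation_step() implemented
# by sub-classes.
class ToyFilter:
    def __init__(self):
        self.map_list = []

    def assimilate(self, pi, ll, *args, **kwargs):
        # pi: transition density; ll: log-likelihood (None = missing data)
        self._assimilation_step(pi, ll, *args, **kwargs)

    def _assimilation_step(self, *args, **kwargs):
        raise NotImplementedError("to be implemented by sub-classes")


class ToyRecordingFilter(ToyFilter):
    def _assimilation_step(self, pi, ll, *args, **kwargs):
        # A real sub-class would construct the transport map for this
        # step; here we only record the assimilated pair.
        self.map_list.append((pi, ll))


f = ToyRecordingFilter()
f.assimilate("pi_1", "ll_1")
f.assimilate("pi_2", None)   # None marks a missing observation
print(len(f.map_list))       # 2
```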
- get_filtering_map_list()[source]¶
Deprecated: use filtering_map_list instead.
- class TransportMaps.Algorithms.SequentialInference.SequentialInferenceBase.Smoother(pi_hyper=None, **kwargs)[source]¶
Bases:
Filter
Perform the on-line smoothing and filtering of a sequential hidden Markov chain.
Given the prior distribution on the hyper-parameters \(\pi(\Theta)\), provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)), to return the map pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi \right)\), and to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).
For more details see also [TM3] and the tutorial.
- Parameters:
pi_hyper (Distribution) – prior distribution on the hyper-parameters \(\pi(\Theta)\)
Note
This is a super-class; some of its methods must be implemented by sub-classes.
- property smoothing_map[source]¶
Returns the map \(\mathfrak{T}\) pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi\right)\).
The map \(\mathfrak{T}\) is given by the composition \(T_0 \circ \cdots \circ T_{k-1}\) of the maps constructed in the \(k\) assimilation steps.
- Returns:
(TransportMap) – the map \(\mathfrak{T}\)
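In the affine case the composition \(T_0 \circ \cdots \circ T_{k-1}\) can be carried out explicitly, which makes the idea concrete. The following self-contained NumPy sketch (made-up random coefficients, not the TransportMaps API) checks that composing the coefficients agrees with evaluating the per-step maps in sequence.

```python
import numpy as np

rng = np.random.default_rng(1)

def compose(affine_maps):
    """Compose affine maps given as (A, b) pairs, outermost first."""
    A, b = np.eye(2), np.zeros(2)
    for Ai, bi in affine_maps:
        # current composition followed by the next map:
        # x -> A (Ai x + bi) + b
        A, b = A @ Ai, A @ bi + b
    return A, b

# three made-up per-step maps T_0, T_1, T_2
maps = [(rng.standard_normal((2, 2)), rng.standard_normal(2))
        for _ in range(3)]
A, b = compose(maps)

# evaluating T_0(T_1(T_2(x))) step by step agrees with the composed map
x = rng.standard_normal(2)
y = x
for Ai, bi in reversed(maps):
    y = Ai @ y + bi
print(np.allclose(A @ x + b, y))   # True
```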
- assimilate(pi, ll, *args, **kwargs)[source]¶
Assimilate one piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\).
Given the new piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\), retrieve the \(k\)-th Markov component \(\pi^k\) of \(\pi\), determine the transport map
\[\begin{split}\mathfrak{M}_k({\boldsymbol \theta}, {\bf z}_k, {\bf z}_{k+1}) = \left[ \begin{array}{l} \mathfrak{M}^\Theta_k({\boldsymbol \theta}) \\ \mathfrak{M}^0_k({\boldsymbol \theta}, {\bf z}_k, {\bf z}_{k+1}) \\ \mathfrak{M}^1_k({\boldsymbol \theta}, {\bf z}_{k+1}) \end{array} \right] = Q \circ R_k \circ Q\end{split}\]
that pushes forward \(\mathcal{N}(0,{\bf I})\) to \(\pi^k\), and embed it into the linear map which removes the desired conditional dependencies from \(\pi\).
- Parameters:
pi (Distribution) – transition distribution \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\)
ll (LogLikelihood) – log-likelihood \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\). The value None stands for a missing observation.
*args – arguments required by the particular sub-class implementations of _assimilation_step().
**kwargs – arguments required by the particular sub-class implementations of _assimilation_step().
Note
This method requires the implementation of _assimilation_step() in sub-classes.
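The embedding step above can be illustrated with a self-contained sketch: a map acting only on the coordinates \(({\boldsymbol \theta}, {\bf z}_k, {\bf z}_{k+1})\) is extended to the full state by acting as the identity on all remaining coordinates. Dimensions, indices, and the map itself are made up for illustration; this is not the library's implementation.

```python
import numpy as np

def embed(M_k, dim, block):
    """Embed map M_k (acting on `block` coordinates) into an
    identity map on R^dim."""
    def T(x):
        y = x.copy()
        y[block] = M_k(x[block])   # transform only the active block
        return y
    return T

def M_k(v):
    # a made-up affine map for the active coordinates
    return 2.0 * v + 1.0

# step-k map acts on coordinates 1 and 2 of a 5-dimensional state
T = embed(M_k, dim=5, block=slice(1, 3))

x = np.arange(5, dtype=float)
y = T(x)
print(y)   # → [0. 3. 5. 3. 4.]  (coordinates 1, 2 transformed, rest untouched)
```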