TransportMaps.Algorithms.SequentialInference

Classes

Inheritance diagram of TransportMaps.Algorithms.SequentialInference.Filter, TransportMaps.Algorithms.SequentialInference.Smoother, TransportMaps.Algorithms.SequentialInference.LinearFilter, TransportMaps.Algorithms.SequentialInference.LinearSmoother, TransportMaps.Algorithms.SequentialInference.TransportMapsSmoother
Class Description
Filter Perform the on-line filtering of a sequential Hidden Markov chain.
Smoother Perform the on-line smoothing and filtering of a sequential Hidden Markov chain.
LinearFilter Perform the on-line filtering of a sequential linear Gaussian Hidden Markov chain.
LinearSmoother Perform the on-line assimilation of a sequential linear Gaussian Hidden Markov chain.
TransportMapsSmoother Perform the on-line assimilation of a sequential Hidden Markov chain.

Documentation

class TransportMaps.Algorithms.SequentialInference.Filter(pi_hyper=None)[source]

Perform the on-line filtering of a sequential Hidden Markov chain.

Given the prior distribution on the hyper-parameters \(\pi(\Theta)\), this class provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)), to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

For more details see also [TM4] and the tutorial.

Parameters:pi_hyper (Distribution) – prior distribution on the hyper-parameters \(\pi(\Theta)\)

Note

This is a super-class. Some of its methods must be implemented by sub-classes.

assimilate(pi, ll, *args, **kwargs)[source]

Assimilate one piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\).

Given the new piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\), determine the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

Parameters:
  • pi (Distribution) – transition distribution \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\)
  • ll (LogLikelihood) – log-likelihood \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\). The value None stands for a missing observation.
  • *args, **kwargs – additional arguments required by the particular sub-class implementations of _assimilation_step().

Note

This method requires the implementation of the function _assimilation_step() in sub-classes.
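
A minimal sketch of a concrete sub-class is shown below. The body of _assimilation_step() is a hypothetical placeholder: only the hook's name and the (pi, ll, *args, **kwargs) calling pattern are taken from this page, while its exact contract is defined by the library's concrete filters.

    from TransportMaps.Algorithms.SequentialInference import Filter

    class MyFilter(Filter):
        # Hypothetical sub-class: assimilate() dispatches to this hook.
        def _assimilation_step(self, pi, ll, *args, **kwargs):
            # pi: transition distribution pi(Z_{k+1} | Z_k, Theta)
            # ll: log-likelihood log L(y_{k+1} | Z_{k+1}, Theta),
            #     or None for a missing observation
            raise NotImplementedError  # placeholder body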

filtering_map_list

Returns the maps \(\{ \widetilde{\mathfrak{M}}_k({\bf x}_\theta, {\bf x}_{k+1}) \}_k\) pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

The maps \(\widetilde{\mathfrak{M}}_k({\bf x}_\theta, {\bf x}_{k+1})\) are defined as follows:

\[\begin{split}\widetilde{\mathfrak{M}}_k({\bf x}_\theta, {\bf x}_{k+1}) = \left[\begin{array}{l} \mathfrak{M}_0^\Theta \circ \cdots \circ \mathfrak{M}_{k}^\Theta ({\bf x}_\theta) \\ \mathfrak{M}_k^1\left({\bf x}_\theta, {\bf x}_{k+1}\right) \end{array}\right] = \left[\begin{array}{l} \mathfrak{H}_{k}({\bf x}_\theta) \\ \mathfrak{M}_k^1\left({\bf x}_\theta, {\bf x}_{k+1}\right) \end{array}\right]\end{split}\]
Returns:(list of TransportMap) – list of transport maps \(\widetilde{\mathfrak{M}}_k({\bf x}_\theta, {\bf x}_{k+1})\)
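
As an illustration, one can approximate the current filtering distribution by pushing standard normal samples through the last map in the list. This is a minimal sketch, assuming the returned TransportMap objects expose a dim attribute and an evaluate(x) method; my_filter stands for an already-assimilated Filter instance.

    import numpy as np

    tmap = my_filter.filtering_map_list[-1]  # map for the current step k
    x = np.random.randn(1000, tmap.dim)      # reference samples from N(0, I)
    samples = tmap.evaluate(x)               # approx. samples from pi(Theta, Z_k | y_{0:k})
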
get_filtering_map_list()[source]

Deprecated: use filtering_map_list instead.

class TransportMaps.Algorithms.SequentialInference.Smoother(pi_hyper=None)[source]

Perform the on-line smoothing and filtering of a sequential Hidden Markov chain.

Given the prior distribution on the hyper-parameters \(\pi(\Theta)\), this class provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)), to return the map pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi \right)\) and to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

For more details see also [TM4] and the tutorial.

Parameters:pi_hyper (Distribution) – prior distribution on the hyper-parameters \(\pi(\Theta)\)

Note

This is a super-class. Some of its methods must be implemented by sub-classes.

assimilate(pi, ll, *args, **kwargs)[source]

Assimilate one piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\).

Given the new piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\), retrieve the \(k\)-th Markov component \(\pi^k\) of \(\pi\), determine the transport map

\[\begin{split}\mathfrak{M}_k({\boldsymbol \theta}, {\bf z}_k, {\bf z}_{k+1}) = \left[ \begin{array}{l} \mathfrak{M}^\Theta_k({\boldsymbol \theta}) \\ \mathfrak{M}^0_k({\boldsymbol \theta}, {\bf z}_k, {\bf z}_{k+1}) \\ \mathfrak{M}^1_k({\boldsymbol \theta}, {\bf z}_{k+1}) \end{array} \right] = Q \circ R_k \circ Q\end{split}\]

that pushes forward \(\mathcal{N}(0,{\bf I})\) to \(\pi^k\), and embed it into the linear map which will remove the desired conditional dependencies from \(\pi\).

Parameters:
  • pi (Distribution) – transition distribution \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\)
  • ll (LogLikelihood) – log-likelihood \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\). The value None stands for a missing observation.
  • *args, **kwargs – additional arguments required by the particular sub-class implementations of _assimilation_step().

Note

This method requires the implementation of the function _assimilation_step() in sub-classes.
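
In practice assimilation is driven by a loop over the data stream, one transition/log-likelihood pair per step. A minimal sketch, where transitions and log_likelihoods are hypothetical user-provided sequences and entries of log_likelihoods may be None for missing observations:

    for pi_k, ll_k in zip(transitions, log_likelihoods):
        smoother.assimilate(pi_k, ll_k)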

get_smoothing_map()[source]

Deprecated: use smoothing_map instead.

smoothing_map

Returns the map \(\mathfrak{T}\) pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi\right)\).

The map \(\mathfrak{T}\) is given by the composition \(T_0 \circ \cdots \circ T_{k-1}\) of the maps constructed in the \(k\) assimilation steps.

Returns:(TransportMap) – the map \(\mathfrak{T}\)
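
A sketch of sampling the smoothing distribution through the composed map, assuming the returned TransportMap exposes a dim attribute and an evaluate(x) method, with smoother an already-assimilated Smoother instance:

    import numpy as np

    T = smoother.smoothing_map
    x = np.random.randn(5000, T.dim)       # reference samples from N(0, I)
    samples = T.evaluate(x)                # approx. samples from pi(Theta, Z_Lambda | y_Xi)
    posterior_mean = samples.mean(axis=0)  # Monte Carlo estimate of the smoothing mean
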
class TransportMaps.Algorithms.SequentialInference.LinearFilter(ders=0, pi_hyper=None)[source]

Perform the on-line filtering of a sequential linear Gaussian Hidden Markov chain.

Also known as the Kalman filter.

If the linear state-space model is parametric, i.e.

\[\begin{split}{\bf Z}_{k+1} = {\bf c}_k(\theta) + {\bf F}_k(\theta){\bf Z}_k + {\bf w}_k(\theta) \\ {\bf Y}_{k} = {\bf H}_k(\theta){\bf Z}_k + {\bf v}_k(\theta)\end{split}\]

then one can optionally compute the gradient (with respect to the parameters) of the filter.

Parameters:
  • ders (int) – 0: no gradient is computed; 1: the gradient is computed
  • pi_hyper (Distribution) – prior distribution on the hyper-parameters \(\pi(\Theta)\)

Todo

Square-root filter
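
A sketch of a Kalman-filtering loop follows. The sequences transitions and log_likelihoods are hypothetical stand-ins for the linear Gaussian transition distributions and log-likelihoods; the accessors used at the end are the ones documented below.

    from TransportMaps.Algorithms.SequentialInference import LinearFilter

    kf = LinearFilter(ders=0)              # ders=1 would also track gradients
    for pi_k, ll_k in zip(transitions, log_likelihoods):
        kf.assimilate(pi_k, ll_k)          # ll_k may be None (missing data)

    means = kf.filtering_mean_list         # E[Z_k | Y_{Xi <= k}] for each k
    covs = kf.filtering_covariance_list    # Cov[Z_k | Y_{Xi <= k}] for each k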

filtering_covariance_list

Returns the covariances of all the filtering distributions

Returns:(list of ndarray) – covariances of \(\pi\left({\bf Z}_k\middle\vert{\bf Y}_{\Xi\leq k}\right)\) for \(k\in \Lambda=0,\ldots,n\).
filtering_grad_covariance_list

Returns the gradient of the covariances of all the filtering distributions

Returns:(list of ndarray) – gradient of the covariances of \(\pi\left({\bf Z}_k\middle\vert{\bf Y}_{\Xi\leq k}\right)\) for \(k\in \Lambda=0,\ldots,n\).
filtering_grad_mean_list

Returns the gradient of the means of all the filtering distributions

Returns:(list of float) – gradient of the means of \(\pi\left({\bf Z}_k\middle\vert{\bf Y}_{\Xi\leq k}\right)\) for \(k\in \Lambda=0,\ldots,n\).
filtering_mean_list

Returns the means of all the filtering distributions

Returns:(list of float) – means of \(\pi\left({\bf Z}_k\middle\vert{\bf Y}_{\Xi\leq k}\right)\) for \(k\in \Lambda=0,\ldots,n\).
grad_marginal_log_likelihood

Returns the gradient of the marginal log-likelihood \(\nabla_\theta\log\pi\left({\bf Y}_{\Xi\leq k}\right)\)

Returns:(float) – current gradient of the marginal log-likelihood
marginal_log_likelihood

Returns the marginal log-likelihood \(\log\pi\left({\bf Y}_{\Xi\leq k}\right)\)

Returns:(float) – current marginal log-likelihood
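
Because the filter accumulates the marginal log-likelihood of the data, it can drive maximum-likelihood estimation of the model parameters \(\theta\). A sketch, assuming ders=1 so that the gradient is tracked; the assimilation inputs, the parameter vector theta, and the fixed step size are hypothetical:

    kf = LinearFilter(ders=1)
    for pi_k, ll_k in zip(transitions, log_likelihoods):
        kf.assimilate(pi_k, ll_k)

    logL = kf.marginal_log_likelihood        # log pi(Y_{Xi <= k})
    grad = kf.grad_marginal_log_likelihood   # gradient with respect to theta
    theta = theta + 1e-3 * grad              # one hypothetical ascent step
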
class TransportMaps.Algorithms.SequentialInference.LinearSmoother(lag=None)[source]

Perform the on-line assimilation of a sequential linear Gaussian Hidden Markov chain.

Parameters:lag (numpy.float) – lag to be used in the backward updates of smoothing means and covariances. The default value None indicates infinite lag.

Todo

No hyper-parameters are admitted right now.

offline_smoothing_mean_covariance_lists(lag=None)[source]

Compute the means and covariances with a fixed lag for a pre-assimilated density.
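
A sketch of fixed-lag usage: with lag=5 each backward update revisits only the last five states, while lag=None (the default) performs full smoothing. It assumes, as the method name suggests, that offline_smoothing_mean_covariance_lists() returns the lists of smoothing means and covariances; the assimilation inputs are hypothetical, as above.

    from TransportMaps.Algorithms.SequentialInference import LinearSmoother

    ls = LinearSmoother(lag=5)             # fixed-lag smoother
    for pi_k, ll_k in zip(transitions, log_likelihoods):
        ls.assimilate(pi_k, ll_k)

    # Recompute the smoothing moments offline with a different lag
    means, covs = ls.offline_smoothing_mean_covariance_lists(lag=10)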

class TransportMaps.Algorithms.SequentialInference.TransportMapsSmoother(*args, **kwargs)[source]

Perform the on-line assimilation of a sequential Hidden Markov chain.

Given the prior distribution on the hyper-parameters \(\pi(\Theta)\), this class provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)), to return the map pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi \right)\) and to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

For more details see also [TM4] and the tutorial.

Parameters:pi_hyper (Distribution) – prior distribution on the hyper-parameters \(\pi(\Theta)\)

trim(ntrim)[source]

Trim the integrator to ntrim assimilation steps.
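
A sketch of typical usage: assimilate a sequence and then trim back to an earlier step, e.g. before re-assimilating revised data. The constructor is called without arguments only for illustration, the assimilation inputs are hypothetical, concrete set-ups may require additional arguments to assimilate(), and ntrim=10 is an arbitrary example value.

    from TransportMaps.Algorithms.SequentialInference import TransportMapsSmoother

    tms = TransportMapsSmoother()
    for pi_k, ll_k in zip(transitions, log_likelihoods):
        tms.assimilate(pi_k, ll_k)
    tms.trim(10)                           # keep only the first 10 assimilation steps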