TransportMaps.Algorithms.SequentialInference.NonLinearSequentialInference

Module Contents

Classes

TransportMapsSmoother

Perform the on-line assimilation of a sequential Hidden Markov chain.

FilteringPreconditionedTransportMapsSmoother

Perform the on-line assimilation of a sequential Hidden Markov chain.

LowRankTransportMapsSmoother

Perform the on-line assimilation of a sequential Hidden Markov chain using low-rank information to precondition each assimilation problem.

LowRankFilteringPreconditionedTransportMapSmoother

Perform the on-line assimilation of a sequential Hidden Markov chain.

class TransportMaps.Algorithms.SequentialInference.NonLinearSequentialInference.TransportMapsSmoother(*args, **kwargs)[source]

Bases: TransportMaps.Algorithms.SequentialInference.SequentialInferenceBase.Smoother

Perform the on-line assimilation of a sequential Hidden Markov chain.

Given the prior distribution \(\pi(\Theta)\) on the hyper-parameters, this class provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)), to return the map pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi \right)\), and to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

For more details see also [TM3] and the tutorial.

Optional Args:
pi_hyper (Distribution):

prior distribution on the hyper-parameters \(\pi(\Theta)\)
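The core idea underlying all the smoothers in this module can be illustrated with a minimal sketch (plain NumPy, not the TransportMaps API; the map, target, and sample sizes below are illustrative assumptions): a lower-triangular map pushing forward \(\mathcal{N}(0,{\bf I})\) to a target distribution, here a correlated 2-D Gaussian.

```python
import numpy as np

# Hypothetical illustration (NOT the TransportMaps API): a lower-triangular
# map T pushing forward N(0, I) to a correlated 2-D Gaussian target,
# mirroring the role played by the maps returned by the smoother.
rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])
L = np.array([[2.0, 0.0],
              [1.5, 0.5]])   # Cholesky factor of the target covariance

def T(x):
    """Lower-triangular map: component k depends only on x[..., :k+1]."""
    return mu + x @ L.T

x = rng.standard_normal((200_000, 2))  # reference samples from N(0, I)
z = T(x)                               # pushforward samples

print(np.round(z.mean(axis=0), 1))     # close to mu
print(np.round(np.cov(z.T), 1))        # close to L @ L.T
```

The smoothers below play the same role for the (non-Gaussian) smoothing and filtering distributions, with the triangular structure enforced component by component.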

property var_diag_convergence[source]
property regression_convergence[source]
property markov_component_nevals[source]
_terminate_kl(log, continue_on_error)[source]
_permuting_map(pi)[source]
_preconditioning_map(pi, mpi_pool=None)[source]

Returns the preconditioning map as well as the sub-components: the hyper-parameters and filtering preconditioning maps.

_learn_map(rho, pi, tm, solve_params, builder_extra_kwargs, builder_class, continue_on_error, mpi_pool=None, **kwargs)[source]

Returns the transport map found and the preconditioning maps.

_assimilation_step(tm, solve_params, builder_extra_kwargs={}, builder_class=None, var_diag_convergence_params=None, hyper_tm=None, regression_params=None, regression_builder=None, regression_convergence_params=None, continue_on_error=True, learn_map_extra_kwargs={}, mpi_pool=None)[source]

Assimilate one piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\).

Given the new piece of data \(\left( \pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right), \log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right) \right)\), retrieve the \(k\)-th Markov component \(\pi^k\) of \(\pi\), determine the transport map

\[\begin{split}\mathfrak{M}_k({\boldsymbol \theta}, {\bf z}_k, {\bf z}_{k+1}) = \left[ \begin{array}{l} \mathfrak{M}^\Theta_k({\boldsymbol \theta}) \\ \mathfrak{M}^0_k({\boldsymbol \theta}, {\bf z}_k, {\bf z}_{k+1}) \\ \mathfrak{M}^1_k({\boldsymbol \theta}, {\bf z}_{k+1}) \end{array} \right] = Q \circ R_k \circ Q\end{split}\]

that pushes forward \(\mathcal{N}(0,{\bf I})\) to \(\pi^k\), and embed it into the linear map which will remove the desired conditional dependencies from \(\pi\).

Optionally, it will also compress the maps \(\mathfrak{M}_{0}^\Theta \circ \ldots \circ \mathfrak{M}_{k-1}^\Theta\) into the map \(\mathfrak{H}_{k-1}\) in order to speed up the evaluation of the \(k\)-th Markov component \(\pi^k\).
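The compression of the composed hyper-parameter maps into a single map \(\mathfrak{H}_{k-1}\) by regression can be sketched as follows (plain NumPy least squares; the scalar affine maps are an assumption for illustration, not the library's map classes):

```python
import numpy as np

# Hedged sketch (NOT the library's regression API): compress a composition
# of hyper-parameter maps into a single map H by least-squares regression.
maps = [(1.1, 0.3), (0.9, -0.2), (1.05, 0.1)]   # (slope, intercept) pairs

def compose(theta):
    # Apply the maps in sequence: M_{k-1} o ... o M_0 on the input points
    for a, b in maps:
        theta = a * theta + b
    return theta

# Regression points drawn from the reference N(0, 1)
rng = np.random.default_rng(1)
theta = rng.standard_normal(1000)
target = compose(theta)

# Fit H(theta) = c1 * theta + c0 by ordinary least squares
c1, c0 = np.polyfit(theta, target, deg=1)

# Since all maps are affine, the compressed map reproduces the composition
# exactly; its slope is the product of the individual slopes.
```

Evaluating the single fitted map \(\mathfrak{H}_{k-1}\) is then much cheaper than evaluating the full composition, which is the point of the compression step.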

Parameters:
  • tm (TransportMap) – transport map \(R_k\)

  • builder_extra_kwargs (dict) – parameters to be passed to the builder's minimize_kl_divergence()

  • solve_params (dict) – dictionary of options to be passed to minimize_kl_divergence().

  • builder_class (class) – sub-class of KullbackLeiblerBuilder describing the particular builder used for the minimization of the kl-divergence. Default is KullbackLeiblerBuilder itself.

  • hyper_tm (TransportMap) – transport map \(\mathfrak{H}_{k-1}\)

  • regression_params (dict) – parameters to be passed to regression during the determination of \(\mathfrak{H}_{k-1}\)

  • regression_builder (L2RegressionBuilder) – builder for the regression of the hyper-parameters map.

  • var_diag_convergence_params (dict) – parameters to be used to monitor the convergence of the map approximation. If None the convergence is not monitored.

  • regression_convergence_params (dict) – parameters to be used to monitor the convergence of the regression step on the hyper-parameters map. If None the convergence is not monitored.

  • continue_on_error (bool) – whether to continue with back-up plans when the KL-minimization step or the regression step fails

  • learn_map_extra_kwargs (dict) – extra keyword arguments to be passed to the _learn_map().

  • mpi_pool (mpi_map.MPI_Pool) – pool of processes to be used for additional evaluations

Raises:

RuntimeError – a convergence error occurred during the assimilation

trim(ntrim)[source]

Trim the integrator to ntrim

class TransportMaps.Algorithms.SequentialInference.NonLinearSequentialInference.FilteringPreconditionedTransportMapsSmoother(**kwargs)[source]

Bases: TransportMapsSmoother

Perform the on-line assimilation of a sequential Hidden Markov chain.

Given the prior distribution \(\pi(\Theta)\) on the hyper-parameters, this class provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)), to return the map pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi \right)\), and to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

For more details see also [TM3] and the tutorial.

Optional Args:
pi_hyper (Distribution):

prior distribution on the hyper-parameters \(\pi(\Theta)\)

property precondition_regression_convergence[source]
_learn_map(rho, pi, tm, solve_params, builder_extra_kwargs, builder_class, continue_on_error, mpi_pool=None, filt_prec_regression_builder=None, filt_prec_regression_tm=None, filt_prec_regression_params={}, filt_prec_regression_convergence_params=None)[source]

Returns the transport map found and the preconditioning maps.

This routine first preconditions the target with the previous filtering map along the marginal of the next step. The map to be found is therefore a perturbation of the identity, representing the update of the previous filtering: had the transition operator been the identity (and had there been no observation at this step), the identity map itself would be suitable.
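A minimal numerical sketch of this preconditioning, assuming a 1-D Gaussian filtering distribution and identity dynamics (plain NumPy, not the library's API):

```python
import numpy as np

# Hedged sketch: precondition with the previous filtering map F. If the
# transition is the identity and there is no new observation, F already
# pushes N(0, 1) to the new marginal, so the residual map is the identity.
mu, sigma = 2.0, 0.5                      # previous filtering N(mu, sigma^2)
F = lambda x: mu + sigma * x              # previous filtering map

rng = np.random.default_rng(2)

# Target samples under identity dynamics: same distribution as F(x)
z = F(rng.standard_normal(100_000))

# Pull the target back through F: the preconditioned target is ~ N(0, 1),
# so the map still to be learned is (approximately) the identity.
u = (z - mu) / sigma
print(round(u.mean(), 2), round(u.std(), 2))   # close to 0.0 and 1.0
```

With non-trivial dynamics and observations, the residual map is no longer exactly the identity, but it remains a small perturbation of it, which is what makes the preconditioned problem easier to solve.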

class TransportMaps.Algorithms.SequentialInference.NonLinearSequentialInference.LowRankTransportMapsSmoother(**kwargs)[source]

Bases: TransportMapsSmoother

Perform the on-line assimilation of a sequential Hidden Markov chain using low-rank information to precondition each assimilation problem.

At each assimilation/prediction step computes the matrix

\[H_x = \frac{1}{m-1} \sum_{k=1}^m \nabla_x \log \pi^i(z_i^{(k)},z_{i+1}^{(k)}) \otimes \nabla_x \log \pi^i(z_i^{(k)},z_{i+1}^{(k)})\]

and extracts the rank-\(r\) sub-spaces of \(H_{z_i}\) and \(H_{z_{i+1}}\) such that

\[\sum_{j=0}^r \lambda_j > \alpha \sum_{j=0}^d \lambda_j \;.\]

These are used to precondition the problem through non-symmetric square roots.
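The estimation of \(H\) and the rank truncation can be sketched as follows (plain NumPy; the gradient samples are synthetic stand-ins for \(\nabla \log \pi^i\), and the dimensions are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: estimate H from m gradient samples and keep the smallest
# rank r whose leading eigenvalues capture a fraction alpha of the spectrum.
rng = np.random.default_rng(3)
d, m, alpha = 6, 500, 0.9

# Synthetic gradients supported (up to small noise) on a 2-D subspace
U2, _ = np.linalg.qr(rng.standard_normal((d, 2)))
scales = np.array([2.0, 1.0])
grads = (rng.standard_normal((m, 2)) * scales) @ U2.T \
        + 1e-3 * rng.standard_normal((m, d))

H = grads.T @ grads / (m - 1)          # H = 1/(m-1) sum_k g_k (outer) g_k

lam, V = np.linalg.eigh(H)             # eigenvalues in ascending order
lam = lam[::-1]                        # sort descending
V = V[:, ::-1]

# Smallest r whose cumulative eigenvalue sum exceeds alpha * total
r = int(np.searchsorted(np.cumsum(lam), alpha * lam.sum())) + 1
Ur = V[:, :r]                          # basis of the dominant subspace
print(r)                               # recovers the two dominant directions
```

The resulting basis defines the low-rank directions along which the assimilation problem is informed; the remaining directions can be handled (approximately) by the identity.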

Parameters:
  • m (int) – number of samples to be used in the estimation of \(H\) (must be provided)

  • alpha (float) – truncation parameter \(\alpha\) (default: 0.9)

  • max_rank (int) – maximum rank allowed (default: \(\infty\))

Optional Kwargs:
pi_hyper (Distribution):

prior distribution on the hyper-parameters \(\pi(\Theta)\)

_compute_sqrt(H)[source]
_preconditioning_map(pi, mpi_pool=None)[source]

Returns the preconditioning map as well as the sub-components: the hyper-parameters and filtering preconditioning maps.

class TransportMaps.Algorithms.SequentialInference.NonLinearSequentialInference.LowRankFilteringPreconditionedTransportMapSmoother(**kwargs)[source]

Bases: FilteringPreconditionedTransportMapsSmoother, LowRankTransportMapsSmoother

Perform the on-line assimilation of a sequential Hidden Markov chain.

Given the prior distribution \(\pi(\Theta)\) on the hyper-parameters, this class provides the functions necessary to assimilate new pieces of data or missing data (defined in terms of transition densities \(\pi\left({\bf Z}_{k+1} \middle\vert {\bf Z}_k, \Theta \right)\) and log-likelihoods \(\log \mathcal{L}\left({\bf y}_{k+1}\middle\vert {\bf Z}_{k+1}, \Theta\right)\)), to return the map pushing forward \(\mathcal{N}(0,{\bf I})\) to the smoothing distribution \(\pi\left(\Theta, {\bf Z}_\Lambda \middle\vert {\bf y}_\Xi \right)\), and to return the maps pushing forward \(\mathcal{N}(0,{\bf I})\) to the filtering/forecast distributions \(\{\pi\left(\Theta, {\bf Z}_k \middle\vert {\bf y}_{0:k} \right)\}_k\).

For more details see also [TM3] and the tutorial.

Optional Args:
pi_hyper (Distribution):

prior distribution on the hyper-parameters \(\pi(\Theta)\)