# TransportMaps.Distributions.Decomposable

## Classes

| Class | Description |
|---|---|
| AR1TransitionDistribution | Transition probability for an auto-regressive (1) process (possibly with hyper-parameters) |
| MarkovChainDistribution | Distribution of a Markov process (optionally with hyper-parameters) |
| SequentialHiddenMarkovChainDistribution | Distribution of a sequential Hidden Markov chain model (optionally with hyper-parameters) |
| MarkovComponentDistribution | $$i$$-th Markov component of a SequentialHiddenMarkovChainDistribution |

## Documentation

class TransportMaps.Distributions.Decomposable.AR1TransitionDistribution(pi, T)[source]

Transition probability for an auto-regressive (1) process (possibly with hyper-parameters)

Defines the probability distribution $$\pi({\bf Z}_{k+1}\vert {\bf Z}_{k}, \Theta)=\pi({\bf Z}_{k+1} - T({\bf Z}_{k},\Theta) \vert \Theta)$$ for the auto-regressive (1) process

${\bf Z}_{k+1} = T({\bf Z}_k, \Theta) + \varepsilon \;, \quad \varepsilon \sim \nu_\pi$
Parameters:
- pi (Distribution) – distribution $$\nu_\pi$$ of the noise term $$\varepsilon$$
- T – transition map $$T({\bf Z}_k, \Theta)$$
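As a quick illustration of the process above, the following minimal sketch simulates $${\bf Z}_{k+1} = T({\bf Z}_k) + \varepsilon$$ with plain NumPy and evaluates the induced transition log-density $$\log\pi(z_{k+1}\vert z_k) = \log\nu_\pi(z_{k+1} - T(z_k))$$. The linear map and the Gaussian noise are illustrative assumptions, not part of the library's API.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
T = lambda z: 0.9 * z          # linear transition map (assumed for illustration)

def log_nu(e):
    # Log-density of the noise nu_pi = N(0, sigma^2)
    return -0.5 * (e / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2)

def log_transition(z_next, z):
    # log pi(z_next | z) = log nu_pi(z_next - T(z))
    return log_nu(z_next - T(z))

# Simulate a short trajectory of the AR(1) process
z = np.zeros(11)
for k in range(10):
    z[k + 1] = T(z[k]) + sigma * rng.standard_normal()
```

Evaluating `log_transition` at `z_next = T(z)` recovers the mode of the noise density, matching the definition above.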
grad_x_log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]

Evaluate $$\nabla_{\bf x,y} \log \pi({\bf x}\vert{\bf y})$$

Parameters:
- x (ndarray [$$m,d$$]) – evaluation points
- y (ndarray [$$m,d_y$$]) – conditioning values $${\bf Y}={\bf y}$$
- params (dict) – parameters
- idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].
- cache (dict) – cache

Returns: (ndarray [$$m,d$$]) – values of $$\nabla_x\log\pi$$ at the x points.
hess_x_log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]

Evaluate $$\nabla^2_{\bf x,y} \log \pi({\bf x}\vert{\bf y})$$

Parameters:
- x (ndarray [$$m,d$$]) – evaluation points
- y (ndarray [$$m,d_y$$]) – conditioning values $${\bf Y}={\bf y}$$
- params (dict) – parameters
- idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].
- cache (dict) – cache

Returns: (ndarray [$$m,d,d$$]) – values of $$\nabla^2_x\log\pi$$ at the x points.
log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]

Evaluate $$\log \pi({\bf x}\vert{\bf y})$$

Parameters:
- x (ndarray [$$m,d$$]) – evaluation points
- y (ndarray [$$m,d_y$$]) – conditioning values $${\bf Y}={\bf y}$$
- params (dict) – parameters
- idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].
- cache (dict) – cache

Returns: (ndarray [$$m$$]) – values of $$\log\pi$$ at the x points.
rvs(m, y, *args, **kwargs)[source]

[Abstract] Generate $$m$$ samples from the distribution.

Parameters:
- m (int) – number of samples to generate
- y (ndarray [$$d_y$$]) – conditioning values $${\bf Y}={\bf y}$$

Returns: (ndarray [$$m,d$$]) – $$m$$ $$d$$-dimensional samples
class TransportMaps.Distributions.Decomposable.MarkovChainDistribution(pi_list, pi_hyper=None)[source]

Distribution of a Markov process (optionally with hyper-parameters)

For the index set $$A=[t_0,\ldots,t_k]$$ with $$t_0<t_1<\ldots<t_k$$, and the user-defined distributions $$\pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta)$$, $$\pi({\bf Z}_{t_0} \vert \Theta)$$ and $$\pi(\Theta)$$, this class defines the distribution

$\pi(\Theta, {\bf Z}_A) = \left( \prod_{i=1}^k \pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta) \right) \pi({\bf Z}_{t_0} \vert \Theta) \pi(\Theta)$

associated to the process $${\bf Z}_A$$.

Parameters:
- pi_list (list of ConditionalDistribution) – list of transition distributions $$\{\pi({\bf Z}_{t_0} \vert \Theta), \pi({\bf Z}_{t_1}\vert {\bf Z}_{t_{0}},\Theta), \ldots \}$$
- pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$
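The factorization above can be evaluated directly: the joint log-density is the log of the initial distribution plus the sum of the transition log-densities along the chain. The following sketch does this for a scalar Gaussian chain (hyper-parameters omitted); the particular densities are illustrative assumptions, not the library's internals.

```python
import numpy as np
from scipy.stats import norm

a, s = 0.9, 0.5                      # transition coefficient and noise std (assumed)
z = np.array([0.1, 0.3, 0.2, -0.1])  # one realization of Z_{t0}, ..., Z_{t3}

# log pi(Z_A) = log pi(Z_{t0}) + sum_i log pi(Z_{ti} | Z_{t_{i-1}})
log_joint = norm.logpdf(z[0], 0.0, 1.0)              # initial distribution pi(Z_{t0})
for i in range(1, len(z)):
    log_joint += norm.logpdf(z[i], a * z[i - 1], s)  # transition pi(Z_{ti} | Z_{t_{i-1}})
```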
append(pi)[source]

Append a new transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}},\Theta)$$

Parameters:
- pi (Distribution or ConditionalDistribution) – transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}},\Theta)$$
nsteps

Returns the number of steps (time indices) $$\sharp A$$.

rvs(m, *args, **kwargs)[source]

Generate $$m$$ samples from the distribution.

Parameters:
- m (int) – number of samples to generate

Returns: (ndarray [$$m,d$$]) – $$m$$ $$d$$-dimensional samples
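Conceptually, sampling a Markov chain proceeds ancestrally: draw $${\bf Z}_{t_0}$$ from the initial distribution, then each $${\bf Z}_{t_i}$$ from the transition given $${\bf Z}_{t_{i-1}}$$. A minimal sketch for a scalar Gaussian chain (the densities are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
m, nsteps, a, s = 1000, 5, 0.9, 0.5      # sample count, chain length, model params (assumed)

samples = np.empty((m, nsteps))
samples[:, 0] = rng.standard_normal(m)   # Z_{t0} ~ N(0, 1)
for i in range(1, nsteps):
    # Z_{ti} | Z_{t_{i-1}} ~ N(a * Z_{t_{i-1}}, s^2)
    samples[:, i] = a * samples[:, i - 1] + s * rng.standard_normal(m)
```

Each row of `samples` is one $$d$$-dimensional draw of the whole process, matching the $$[m, d]$$ return shape documented above.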
class TransportMaps.Distributions.Decomposable.SequentialHiddenMarkovChainDistribution(pi_list, ll_list, pi_hyper=None)[source]

Distribution of a sequential Hidden Markov chain model (optionally with hyper-parameters)

For the index sets $$A=[t_0,\ldots,t_k]$$ with $$t_0<t_1<\ldots<t_k$$ and $$B \subseteq A$$, the user-defined transition densities (Distribution) $$\{\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots \}$$, the prior $$\pi(\Theta)$$ and the log-likelihoods (LogLikelihood) $$\{\log\mathcal{L}({\bf y}_t \vert{\bf Z}_t,\Theta)\}_{t\in B}$$, this class defines the distribution

$\pi(\Theta, {\bf Z}_A \vert {\bf y}_B) = \left( \prod_{t\in B} \mathcal{L}({\bf y}_t \vert {\bf Z}_t, \Theta) \right) \left( \prod_{i=1}^k \pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta)$

associated to the process $${\bf Z}_A$$

Note

Each log-likelihood already embeds its own data $${\bf y}_t$$. The list of log-likelihoods must have the same length as the list of transitions. Missing data are represented by setting the corresponding entry in the list of log-likelihoods to None.

Parameters:
- pi_list (list of ConditionalDistribution) – list of transition densities $$[\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots ]$$
- ll_list (list of LogLikelihood) – list of log-likelihoods $$\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}$$
- pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$
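The posterior factorization above adds, on top of the Markov chain factors, one likelihood term per observed time step. The sketch below evaluates it for a scalar Gaussian state-space model, encoding missing observations as None in the same way as the ll_list convention described in the note; the transition and observation densities are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

a, s, r = 0.9, 0.5, 0.3              # transition coeff, process noise, obs noise (assumed)
z = np.array([0.1, 0.3, 0.2, -0.1])  # one realization of Z_{t0}, ..., Z_{t3}
y = [0.0, None, 0.25, None]          # None marks a time step with no data

# Chain factors: log pi(Z_{t0}) + sum_i log pi(Z_{ti} | Z_{t_{i-1}})
log_post = norm.logpdf(z[0], 0.0, 1.0)
for i in range(1, len(z)):
    log_post += norm.logpdf(z[i], a * z[i - 1], s)

# Likelihood factors: sum over t in B of log L(y_t | Z_t)
for z_t, y_t in zip(z, y):
    if y_t is not None:              # skip missing data, as with ll_list entries set to None
        log_post += norm.logpdf(y_t, z_t, r)
```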
append(pi, ll=None)[source]

Append a new transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})$$ and the corresponding log-likelihood $$\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})$$ if any.

Parameters:
- pi (ConditionalDistribution) – transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})$$
- ll (LogLikelihood) – log-likelihood $$\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})$$. Missing data are represented by None.
get_MarkovComponent(i, n=1, state_map=None, hyper_map=None)[source]

Extract the $$i$$-th Markov component ($$n\geq 1$$ steps) from the distribution

If $$i=0$$ the Markov component is given by

$\pi^{0:n}(\Theta, {\bf Z}_{t_0}, \ldots, {\bf Z}_{t_n}) := \left( \prod_{t \in \{t_0,\ldots,t_n\} \cap B} \mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t) \right) \left( \prod_{i=1}^n \pi({\bf Z}_{t_i}\vert \Theta, {\bf Z}_{t_{i-1}}) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta) \;.$

If $$i>0$$ then the Markov component is

$\pi^{i:i+n}\left(\Theta, {\bf Z}_{t_i}, \ldots, {\bf Z}_{t_{i+n}}\right) := \eta(\Theta, {\bf Z}_{t_i}) \left( \prod_{t \in \left\{t_{i+1},\ldots,t_{i+n}\right\} \cap B} \mathcal{L}\left({\bf y}_t \vert \mathfrak{T}_{i-1}^{\Theta}(\Theta), {\bf Z}_t\right) \right) \left( \prod_{k=i+1}^{i+n-1} \pi\left({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}}, \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \right) \pi\left({\bf Z}_{t_{i+1}} \vert \mathfrak{M}_{i-1}^{1}(\Theta, {\bf Z}_{t_i}), \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \;,$

where $$\mathfrak{T}_{i-1}^{\Theta}$$ and $$\mathfrak{M}_{i-1}^{1}$$ are the hyper-parameter and forecast components of the map computed at step $$i-1$$, using the sequential algorithm described in [TM4].

Parameters:
- i (int) – index $$i$$ of the Markov component
- n (int) – number of steps $$n$$
- state_map (TransportMap) – forecast map $$\mathfrak{M}_{i-1}^{1}$$ from step $$i-1$$
- hyper_map (TransportMap) – hyper-parameter map $$\mathfrak{T}_{i-1}^{\Theta}$$ from step $$i-1$$

Returns: (Distribution) – Markov component $$\pi^{i:i+n}$$.
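To make the two cases above concrete, the following conceptual sketch shows which factors enter the $$i$$-th, $$n$$-step component: for $$i=0$$ the initial distribution and the factors at $$t_0,\ldots,t_n$$; for $$i>0$$ the transitions and likelihoods at $$t_{i+1},\ldots,t_{i+n}$$ (the maps from step $$i-1$$ are omitted here). The names are illustrative, not the library's internals.

```python
# Stand-ins for pi_list and ll_list; ll entries at odd times are None (missing data).
pi_list = [f"pi(Z_t{k}|Z_t{k-1})" if k > 0 else "pi(Z_t0)" for k in range(6)]
ll_list = [f"L(y_t{k}|Z_t{k})" if k % 2 == 0 else None for k in range(6)]

def markov_component_factors(i, n):
    """Return the (transitions, likelihoods) entering the i-th, n-step component."""
    if i == 0:
        # pi^{0:n} also contains the initial distribution pi(Z_t0)
        return pi_list[0:n + 1], ll_list[0:n + 1]
    # pi^{i:i+n} contains transitions and likelihoods over t_{i+1}, ..., t_{i+n}
    return pi_list[i + 1:i + n + 1], ll_list[i + 1:i + n + 1]
```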
trim(nsteps)[source]

Trim the Markov chain to its first nsteps steps

Parameters:
- nsteps (int) – number of steps in the Markov chain of the returned distribution

Returns: (SequentialHiddenMarkovChainDistribution) – trimmed distribution
class TransportMaps.Distributions.Decomposable.MarkovComponentDistribution(idx0, pi_list, ll_list, state_dim, hyper_dim, pi_hyper=None, state_map=None, hyper_map=None)[source]

$$i$$-th Markov component of a SequentialHiddenMarkovChainDistribution

If $$i=0$$ the Markov component is given by

$\pi^{0:n}(\Theta, {\bf Z}_{t_0}, \ldots, {\bf Z}_{t_n}) := \left( \prod_{t \in \{t_0,\ldots,t_n\} \cap B} \mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t) \right) \left( \prod_{i=1}^n \pi({\bf Z}_{t_i}\vert \Theta, {\bf Z}_{t_{i-1}}) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta) \;.$

If $$i>0$$ then the Markov component is

$\pi^{i:i+n}\left(\Theta, {\bf Z}_{t_i}, \ldots, {\bf Z}_{t_{i+n}}\right) := \eta(\Theta, {\bf Z}_{t_i}) \left( \prod_{t \in \left\{t_{i+1},\ldots,t_{i+n}\right\} \cap B} \mathcal{L}\left({\bf y}_t \vert \mathfrak{T}_{i-1}^{\Theta}(\Theta), {\bf Z}_t\right) \right) \left( \prod_{k=i+1}^{i+n-1} \pi\left({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}}, \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \right) \pi\left({\bf Z}_{t_{i+1}} \vert \mathfrak{M}_{i-1}^{1}(\Theta, {\bf Z}_{t_i}), \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \;,$

where $$\mathfrak{T}_{i-1}^{\Theta}$$ and $$\mathfrak{M}_{i-1}^{1}$$ are the hyper-parameter and forecast components of the map computed at step $$i-1$$, using the sequential algorithm described in [TM4].

Parameters:
- idx0 (int) – index $$i$$ of the Markov component
- pi_list (list of Distribution) – list of $$n$$ transition densities
- ll_list (list of LogLikelihood) – list of $$n$$ log-likelihoods (None for missing data) $$\{\log\mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t)\}_{t\in B}$$
- state_dim (int) – dimension of the state space
- hyper_dim (int) – dimension of the parameter space
- pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$
- state_map (TransportMap) – forecast map $$\mathfrak{M}_{i-1}^{1}$$ from step $$i-1$$
- hyper_map (TransportMap) – hyper-parameter map $$\mathfrak{T}_{i-1}^{\Theta}$$ from step $$i-1$$
grad_x_log_pdf(x, cache=None, **kwargs)[source]

[Abstract] Evaluate $$\nabla_{\bf x} \log \pi({\bf x})$$

Parameters:
- x (ndarray [$$m,d$$]) – evaluation points
- params (dict) – parameters
- idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

Returns: (ndarray [$$m,d$$]) – values of $$\nabla_x\log\pi$$ at the x points.

Raises: NotImplementedError – the method needs to be defined in the sub-classes
hess_x_log_pdf(x, cache=None, **kwargs)[source]

[Abstract] Evaluate $$\nabla^2_{\bf x} \log \pi({\bf x})$$

Parameters:
- x (ndarray [$$m,d$$]) – evaluation points
- params (dict) – parameters
- idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

Returns: (ndarray [$$m,d,d$$]) – values of $$\nabla^2_x\log\pi$$ at the x points.

Raises: NotImplementedError – the method needs to be defined in the sub-classes
log_pdf(x, cache=None, **kwargs)[source]

[Abstract] Evaluate $$\log \pi({\bf x})$$

Parameters:
- x (ndarray [$$m,d$$]) – evaluation points
- params (dict) – parameters
- idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

Returns: (ndarray [$$m$$]) – values of $$\log\pi$$ at the x points.

Raises: NotImplementedError – the method needs to be defined in the sub-classes