# TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions¶

## Module Contents¶

### Classes¶

| Class | Description |
| --- | --- |
| `Lag1TransitionDistribution` | Transition probability for an auto-regressive (1) process (possibly with hyper-parameters) |
| `MarkovChainDistribution` | Distribution of a Markov process (optionally with hyper-parameters) |
| `TimeHomogeneousMarkovChainDistribution` | A Markov chain distribution where transitions do not depend on time. |
| `Lag1TransitionTimeHomogeneousMarkovChainDistribution` | A Markov chain distribution defined by lag-1 transitions that do not depend on time. |
| `HiddenMarkovChainDistribution` | Distribution of a hidden Markov chain model (optionally with hyper-parameters) |
| `HiddenTimeHomogeneousMarkovChainDistribution` | Distribution of a hidden time-homogeneous Markov chain model |
| `HiddenLag1TransitionTimeHomogeneousMarkovChainDistribution` | Distribution of a hidden time-homogeneous Markov chain model with lag-1 transitions |
| `MarkovComponentDistribution` | $$i$$-th Markov component of a `HiddenMarkovChainDistribution` |
| `AR1TransitionDistribution` | Transition probability for an auto-regressive (1) process (possibly with hyper-parameters) |
| `SequentialHiddenMarkovChainDistribution` | Hidden Markov chain distribution defined by lists of transition densities and log-likelihoods |
class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.Lag1TransitionDistribution(pi, T)[source]

Bases: TransportMaps.Distributions.ConditionalDistribution

Transition probability for an auto-regressive (1) process (possibly with hyper-parameters)

Defines the probability distribution $$\pi({\bf Z}_{k+1}\vert {\bf Z}_{k}, \Theta)=\pi({\bf Z}_{k+1} - T({\bf Z}_{k},\Theta) \vert \Theta)$$ for the lag-1 process

${\bf Z}_{k+1} = T({\bf Z}_k, \Theta) + \varepsilon \;, \quad \varepsilon \sim \nu_\pi$
Parameters:
• pi (Distribution or ConditionalDistribution) – distribution $$\pi:\mathbb{R}^d\times\mathbb{R}^{d_\theta}\rightarrow\mathbb{R}$$

• T (Map) – map $$T:\mathbb{R}^{d+d_\theta}\rightarrow\mathbb{R}^d$$
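The evaluation of this transition density reduces to evaluating the noise distribution at the residual $${\bf Z}_{k+1} - T({\bf Z}_k)$$. The following is a minimal NumPy sketch of that idea, not the TransportMaps API: it assumes a hypothetical linear map $$T(z) = a z$$ and standard Gaussian noise, with no hyper-parameters.

```python
import numpy as np

# Sketch: lag-1 transition density pi(z_{k+1} | z_k) = nu(z_{k+1} - T(z_k))
# with an assumed linear map T(z) = a * z and standard Gaussian noise nu.
def lag1_log_pdf(z_next, z_prev, a=0.9):
    d = z_next.shape[-1]
    residual = z_next - a * z_prev  # z_{k+1} - T(z_k)
    # log-density of a d-dimensional standard Gaussian at the residual
    return -0.5 * np.sum(residual**2, axis=-1) - 0.5 * d * np.log(2 * np.pi)

z_prev = np.zeros((3, 2))  # m=3 evaluation points, d=2 state dimension
z_next = np.ones((3, 2))
print(lag1_log_pdf(z_next, z_prev))  # ndarray [m]
```

With a non-trivial `z_prev` the map is applied first, so only the noise distribution ever needs a `log_pdf`; this is what makes the lag-1 construction convenient.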

property T[source]
property pi[source]
property isPiCond[source]
property state_dim[source]
property hyper_dim[source]
get_ncalls_tree(indent='')[source]
get_nevals_tree(indent='')[source]
get_teval_tree(indent='')[source]
update_ncalls_tree(obj)[source]
update_nevals_tree(obj)[source]
update_teval_tree(obj)[source]
reset_counters()[source]
rvs(m, y, *args, **kwargs)[source]

[Abstract] Generate $$m$$ samples from the distribution.

Parameters:
• m (int) – number of samples to generate

• y (ndarray [$$d_y$$]) – conditioning values $${\bf Y}={\bf y}$$

Returns:

(ndarray [$$m,d$$]) – $$m$$ $$d$$-dimensional samples

log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]

Evaluate $$\log \pi({\bf x}\vert{\bf y})$$

Parameters:
• x (ndarray [$$m,d$$]) – evaluation points

• y (ndarray [$$m,d_y$$]) – conditioning values $${\bf Y}={\bf y}$$

• params (dict) – parameters

• idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

• cache (dict) – cache

Returns:

(ndarray [$$m$$]) – values of $$\log\pi$$ at the x points.

grad_x_log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]

Evaluate $$\nabla_{\bf x,y} \log \pi({\bf x}\vert{\bf y})$$

Parameters:
• x (ndarray [$$m,d$$]) – evaluation points

• y (ndarray [$$m,d_y$$]) – conditioning values $${\bf Y}={\bf y}$$

• params (dict) – parameters

• idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

• cache (dict) – cache

Returns:

(ndarray [$$m,d$$]) – values of $$\nabla_x\log\pi$$ at the x points.

hess_x_log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]

Evaluate $$\nabla^2_{\bf x,y} \log \pi({\bf x}\vert{\bf y})$$

Parameters:
• x (ndarray [$$m,d$$]) – evaluation points

• y (ndarray [$$m,d_y$$]) – conditioning values $${\bf Y}={\bf y}$$

• params (dict) – parameters

• idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

• cache (dict) – cache

Returns:

(ndarray [$$m,d,d$$]) – values of $$\nabla^2_x\log\pi$$ at the x points.

class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.MarkovChainDistribution(pi_list: List[TransportMaps.Distributions.ConditionalDistribution], pi_hyper: TransportMaps.Distributions.Distribution | None = None)[source]

Bases: TransportMaps.Distributions.FactorizedDistribution

Distribution of a Markov process (optionally with hyper-parameters)

For the index set $$A=[t_0,\ldots,t_k]$$ with $$t_0<t_1<\ldots<t_k$$, the user-defined distributions $$\pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta)$$, $$\pi({\bf Z}_{t_0} \vert \Theta)$$, and $$\pi(\Theta)$$ define the distribution

$\pi(\Theta, {\bf Z}_A) = \left( \prod_{i=1}^k \pi(t_i; {\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta) \right) \pi({\bf Z}_{t_0} \vert \Theta) \pi(\Theta)$

associated to the process $${\bf Z}_A$$.

Parameters:
• pi_list (list of ConditionalDistribution) – list of transition distributions $$\{\pi({\bf Z}_{t_0} \vert \Theta), \pi({\bf Z}_{t_1}\vert {\bf Z}_{t_{0}},\Theta), \ldots \}$$

• pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$
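The factorization above means the joint log-density is just the sum of the initial, transition, and hyper-prior log-densities. A minimal sketch (not the TransportMaps API), assuming a scalar chain with standard Gaussian initial condition, a hypothetical AR(1) transition with coefficient `a`, and no hyper-parameters:

```python
import numpy as np

def gauss_log_pdf(r):
    # log-density of a scalar standard Gaussian
    return -0.5 * r**2 - 0.5 * np.log(2 * np.pi)

def chain_log_pdf(z, a=0.9):
    # z: ndarray [k+1], one scalar state per time index in A
    lp = gauss_log_pdf(z[0])                         # pi(z_{t_0})
    lp += np.sum(gauss_log_pdf(z[1:] - a * z[:-1]))  # transitions pi(z_{t_i} | z_{t_{i-1}})
    return lp

z = np.array([0.0, 0.5, 0.2])  # a length-3 trajectory
print(chain_log_pdf(z))
```

Each factor only couples adjacent time indices, which is what the decomposable (sequential) algorithms in this module exploit.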

property nsteps[source]

Returns the number of steps (time indices) $$\sharp A$$.

append(pi)[source]

Append a new transition distribution $$\pi(t_{k+1};{\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}},\Theta)$$

Parameters:

pi (Distribution or ConditionalDistribution) – transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}},\Theta)$$

rvs(m, *args, **kwargs)[source]

Generate $$m$$ samples from the distribution.

Parameters:

m (int) – number of samples to generate

Returns:

(ndarray [$$m,d$$]) – $$m$$ $$d$$-dimensional samples

class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.TimeHomogeneousMarkovChainDistribution(pi_init, pi_trans, pi_list=[], pi_hyper=None)[source]

A Markov chain distribution where transitions do not depend on time.

The distribution is defined by

$\pi(\Theta, {\bf Z}_A) = \left( \prod_{i=1}^k \pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta) \right) \pi({\bf Z}_{t_0} \vert \Theta) \pi(\Theta)$
Parameters:
• pi_init (Distribution) – distribution $$\pi({\bf Z}_{t_0}\vert\Theta)$$

• pi_trans (ConditionalDistribution) – transition distribution $$\pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta)$$

• pi_list (list of ConditionalDistribution) – list of transition distributions $$\{\pi({\bf Z}_{t_0} \vert \Theta), \pi({\bf Z}_{t_1}\vert {\bf Z}_{t_{0}},\Theta), \ldots \}$$

• pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$

property pi_init[source]
property pi_trans[source]
next_transition()[source]
append()[source]

Append a transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}},\Theta)$$

class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.Lag1TransitionTimeHomogeneousMarkovChainDistribution(pi_init, dyn_map, pi_dyn, pi_list=[], pi_hyper=None)[source]

A Markov chain distribution defined by lag-1 transitions that do not depend on time.

The distribution is defined by

$\pi(\Theta, {\bf Z}_A) = \left( \prod_{i=1}^k \pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta) \right) \pi({\bf Z}_{t_0} \vert \Theta) \pi(\Theta)$

where each conditional $$\pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta)$$ describes the lag-1 process

${\bf Z}_{k+1} = T({\bf Z}_k, \Theta) + \varepsilon \;, \quad \varepsilon \sim \pi_{\text{dyn}}$
Parameters:
• pi_init (Distribution) – distribution $$\pi({\bf Z}_{t_0}\vert\Theta)$$

• dyn_map (Map) – map $$T$$

• pi_dyn (Distribution or ConditionalDistribution) – distribution of the noise of the dynamics

• pi_list (list of ConditionalDistribution) – list of transition distributions $$\{\pi({\bf Z}_{t_0} \vert \Theta), \pi({\bf Z}_{t_1}\vert {\bf Z}_{t_{0}},\Theta), \ldots \}$$

• pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$
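Because the map and noise distribution are shared by every step, forward simulation only needs the pair `(T, noise)` rather than $$k$$ separate transitions. A sketch of that simulation loop (not the TransportMaps API), assuming a hypothetical linear map and standard Gaussian noise:

```python
import numpy as np

def simulate_chain(z0, T, noise_rvs, nsteps, rng):
    # z_{k+1} = T(z_k) + eps, with the same T and noise at every step
    z = [np.asarray(z0, dtype=float)]
    for _ in range(nsteps):
        z.append(T(z[-1]) + noise_rvs(rng))
    return np.stack(z)  # ndarray [nsteps+1, d]

rng = np.random.default_rng(0)
traj = simulate_chain(
    z0=np.zeros(2),
    T=lambda z: 0.9 * z,                           # assumed linear dynamics
    noise_rvs=lambda rng: rng.standard_normal(2),  # assumed Gaussian noise
    nsteps=10,
    rng=rng,
)
print(traj.shape)  # (11, 2)
```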

property pi_dyn[source]
property dyn_map[source]
class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.HiddenMarkovChainDistribution(pi_markov: MarkovChainDistribution, ll_list=[])[source]

Distribution of a Hidden Markov chain model (optionally with hyper-parameters)

For the index sets $$A=[t_0,\ldots,t_k]$$ with $$t_0<t_1<\ldots<t_k$$ and $$B \subseteq A$$, the user-defined transition densities (Distribution) $$\{\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots \}$$, the prior $$\pi(\Theta)$$, and the log-likelihoods (LogLikelihood) $$\{\log\mathcal{L}({\bf y}_t \vert{\bf Z}_t,\Theta)\}_{t\in B}$$ define the distribution

$\pi(\Theta, {\bf Z}_A \vert {\bf y}_B) = \left( \prod_{t\in B} \mathcal{L}(t; {\bf y}_t \vert {\bf Z}_t, \Theta) \right) \pi({\bf Z}_{t_0},\ldots,{\bf Z}_{t_k},\Theta)$

associated to the process $${\bf Z}_A$$, where $$\pi({\bf Z}_{t_0},\ldots,{\bf Z}_{t_k},\Theta)$$ is a Markov chain distribution.

Note

Each log-likelihood already embeds its own data $${\bf y}_t$$. The list of log-likelihoods must have the same length as the list of transitions. Missing data are represented by setting the corresponding entry in the list of log-likelihoods to None.

Parameters:
• pi_markov (MarkovChainDistribution) – Markov chain distribution describing $$\pi({\bf Z}_{t_0},\ldots,{\bf Z}_{t_k},\Theta)$$

• ll_list (list of LogLikelihood) – list of log-likelihoods $$\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}$$
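The hidden-chain log-density is therefore the Markov prior plus the available log-likelihood terms, with `None` entries simply contributing nothing. A minimal sketch (not the TransportMaps API), assuming a scalar Gaussian chain and hypothetical Gaussian observation likelihoods that embed their data:

```python
import numpy as np

def hidden_log_pdf(z, ll_list, a=0.9):
    # Markov prior: standard Gaussian initial condition + AR(1) transitions
    lp = -0.5 * z[0]**2 - 0.5 * np.log(2 * np.pi)
    r = z[1:] - a * z[:-1]
    lp += np.sum(-0.5 * r**2 - 0.5 * np.log(2 * np.pi))
    # likelihood terms; None marks a missing observation at that step
    for zt, ll in zip(z, ll_list):
        if ll is not None:
            lp += ll(zt)
    return lp

# hypothetical observation model y_t = z_t + Gaussian noise; data y is
# baked into each log-likelihood, matching the note above
make_ll = lambda y: (lambda z: -0.5 * (y - z)**2 - 0.5 * np.log(2 * np.pi))
z = np.array([0.0, 0.5, 0.2])
ll_list = [make_ll(0.1), None, make_ll(0.3)]  # observation at t_1 missing
print(hidden_log_pdf(z, ll_list))
```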

property state_dim[source]
property hyper_dim[source]
property pi_hyper[source]
property pi_list[source]
property ll_list[source]
property nsteps[source]
property ys[source]
property dim[source]
_append_ll(ll=None)[source]
append(pi, ll=None)[source]

Append a new transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})$$ and the corresponding log-likelihood $$\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})$$ if any.

Parameters:
• pi (ConditionalDistribution) – transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})$$

• ll (LogLikelihood) – log-likelihood $$\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})$$. Missing data are represented by None.

get_MarkovComponent(i, n=1, state_map=None, hyper_map=None)[source]

Extract the ($$n\geq 1$$ steps) $$i$$-th Markov component from the distribution

If $$i=-1$$ the Markov component is given by

$\pi^{0:n}(\Theta, {\bf Z}_{t_0}, \ldots, {\bf Z}_{t_n}) := \left( \prod_{t \in \{t_0,\ldots,t_n\} \cap B} \mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t) \right) \left( \prod_{i=1}^n \pi({\bf Z}_{t_i}\vert \Theta, {\bf Z}_{t_{i-1}}) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta) \;.$

If $$i\geq 0$$ then the Markov component is

$\pi^{i:i+n}\left(\Theta, {\bf Z}_{t_i}, \ldots, {\bf Z}_{t_{i+n}}\right) := \eta(\Theta, {\bf Z}_{t_i}) \left( \prod_{t \in \left\{t_{i+1},\ldots,t_{i+n}\right\} \cap B} \mathcal{L}\left({\bf y}_t \vert \mathfrak{T}_{i-1}^{\Theta}(\Theta), {\bf Z}_t\right) \right) \left( \prod_{k=i+1}^{i+n-1} \pi\left({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}}, \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \right) \pi\left({\bf Z}_{t_{i+1}} \vert \mathfrak{M}_{i-1}^{1}(\Theta, {\bf Z}_{t_i}), \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \;,$

where $$\mathfrak{T}_{i-1}^{\Theta}$$ and $$\mathfrak{M}_{i-1}^{1}$$ are the hyper-parameter and forecast components of the map computed at step $$i-1$$, using the sequential algorithm described in [TM3].

Parameters:
• i (int) – index $$i$$ of the Markov component

• n (int) – number of steps $$n$$

• state_map (TransportMap) – forecast map $$\mathfrak{M}_{i-1}^{1}$$ from step $$i-1$$.

• hyper_map (TransportMap) – hyper-parameter map $$\mathfrak{T}_{i-1}^{\Theta}$$ from step $$i-1$$.

Returns:

(Distribution) – Markov component $$\pi^{i:i+n}$$.

trim(nsteps)[source]

Trim the Markov chain to the first nsteps

Parameters:

nsteps (int) – number of steps in the Markov chain of the returned distribution

Returns:

(HiddenMarkovChainDistribution) – trimmed distribution

class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.HiddenTimeHomogeneousMarkovChainDistribution(pi_init, pi_trans, pi_list=[], pi_hyper=None, ll_list=[])[source]

Distribution of a hidden time-homogeneous Markov chain model

This is a sequential hidden Markov chain where transitions do not depend on time. The distribution is then defined as

$\pi(\Theta, {\bf Z}_A \vert {\bf y}_B) = \left( \prod_{t\in B} \mathcal{L}(t; {\bf y}_t \vert {\bf Z}_t, \Theta) \right) \left( \prod_{i=1}^k \pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta)$
Parameters:
• pi_init (Distribution) – distribution $$\pi({\bf Z}_{t_0}\vert\Theta)$$

• pi_trans (ConditionalDistribution) – transition distribution $$\pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta)$$

• pi_list (list of ConditionalDistribution) – list of transition densities $$[\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots ]$$

• ll_list (list of LogLikelihood) – list of log-likelihoods $$\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}$$

• pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$
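Since the transition is fixed, assimilating a new step only requires supplying the (possibly missing) log-likelihood for the incoming observation. A minimal sketch of that bookkeeping (not the TransportMaps API; the class and observation model are hypothetical):

```python
import numpy as np

class SketchHiddenChain:
    """Toy stand-in for a time-homogeneous hidden chain."""

    def __init__(self, a=0.9):
        self.a = a          # assumed linear dynamics z_{k+1} = a z_k + eps
        self.ll_list = []   # one entry per step; None marks missing data

    def append(self, ll=None):
        # the transition is fixed, so only the log-likelihood is needed
        self.ll_list.append(ll)

chain = SketchHiddenChain()
for y in [0.1, None, 0.3]:  # None simulates a missing observation
    ll = None if y is None else (lambda z, y=y: -0.5 * (y - z)**2)
    chain.append(ll)
print(len(chain.ll_list), chain.ll_list[1] is None)  # 3 True
```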

property pi_init[source]
property pi_trans[source]
append(ll=None)[source]

Append a new transition distribution $$\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})$$ and the corresponding log-likelihood $$\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})$$ if any.

Parameters:

ll (LogLikelihood) – log-likelihood $$\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})$$. Missing data are represented by None.

class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.HiddenLag1TransitionTimeHomogeneousMarkovChainDistribution(pi_init, dyn_map, pi_dyn, pi_list=[], pi_hyper=None, ll_list=[])[source]

Distribution of a hidden time-homogeneous Markov chain model with lag-1 transitions

This is a sequential hidden Markov chain where transitions do not depend on time and describe a lag-1 process. The distribution is then defined as

$\pi(\Theta, {\bf Z}_A \vert {\bf y}_B) = \left( \prod_{t\in B} \mathcal{L}(t; {\bf y}_t \vert {\bf Z}_t, \Theta) \right) \left( \prod_{i=1}^k \pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta)$
Parameters:
• pi_init (Distribution) – distribution $$\pi({\bf Z}_{t_0}\vert\Theta)$$

• dyn_map (Map) – map $$T$$

• pi_dyn (Distribution or ConditionalDistribution) – distribution of the noise of the dynamics

• pi_list (list of ConditionalDistribution) – list of transition densities $$[\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots ]$$

• ll_list (list of LogLikelihood) – list of log-likelihoods $$\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}$$

• pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$

property pi_dyn[source]
property dyn_map[source]
class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.MarkovComponentDistribution(idx0, pi_list, ll_list, state_dim, hyper_dim, pi_hyper=None, state_map=None, hyper_map=None)[source]

Bases: TransportMaps.Distributions.Distribution

$$i$$-th Markov component of a HiddenMarkovChainDistribution

If $$i=-1$$ the Markov component is given by

$\pi^{0:n}(\Theta, {\bf Z}_{t_0}, \ldots, {\bf Z}_{t_n}) := \left( \prod_{t \in \{t_0,\ldots,t_n\} \cap B} \mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t) \right) \left( \prod_{i=1}^n \pi({\bf Z}_{t_i}\vert \Theta, {\bf Z}_{t_{i-1}}) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta) \;.$

If $$i\geq 0$$ then the Markov component is

$\pi^{i:i+n}\left(\Theta, {\bf Z}_{t_i}, \ldots, {\bf Z}_{t_{i+n}}\right) := \eta(\Theta, {\bf Z}_{t_i}) \left( \prod_{t \in \left\{t_{i+1},\ldots,t_{i+n}\right\} \cap B} \mathcal{L}\left({\bf y}_t \vert \mathfrak{T}_{i-1}^{\Theta}(\Theta), {\bf Z}_t\right) \right) \left( \prod_{k=i+1}^{i+n-1} \pi\left({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}}, \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \right) \pi\left({\bf Z}_{t_{i+1}} \vert \mathfrak{M}_{i-1}^{1}(\Theta, {\bf Z}_{t_i}), \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \;,$

where $$\mathfrak{T}_{i-1}^{\Theta}$$ and $$\mathfrak{M}_{i-1}^{1}$$ are the hyper-parameter and forecast components of the map computed at step $$i-1$$, using the sequential algorithm described in [TM3].

Parameters:
• idx0 (int) – index $$i$$ of the Markov component

• pi_list (list of Distribution) – list of $$n$$ transition densities

• ll_list (list of LogLikelihood) – list of $$n$$ log-likelihoods (None for missing data) $$\{\log\mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t)\}_{t\in B}$$

• state_dim (int) – dimension of the state-space

• hyper_dim (int) – dimension of the parameter-space

• pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$

• state_map (TransportMap) – forecast map $$\mathfrak{M}_{i-1}^{1}$$ from step $$i-1$$.

• hyper_map (TransportMap) – hyper-parameter map $$\mathfrak{T}_{i-1}^{\Theta}$$ from step $$i-1$$.

property n_steps[source]
get_ncalls_tree(indent='')[source]
get_nevals_tree(indent='')[source]
get_teval_tree(indent='')[source]
update_ncalls_tree(obj)[source]
update_nevals_tree(obj)[source]
update_teval_tree(obj)[source]
reset_counters()[source]
_transform_input(x, hdim, sdim, hyper_map_cache, state_map_cache, **kwargs)[source]
_transition_log_pdf(lpdf, out)[source]
_ll_log_pdf(llev, out)[source]
log_pdf(x, cache=None, **kwargs)[source]

[Abstract] Evaluate $$\log \pi({\bf x})$$

Parameters:
• x (ndarray [$$m,d$$]) – evaluation points

• params (dict) – parameters

• idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

Returns:

(ndarray [$$m$$]) – values of $$\log\pi$$ at the x points.

Raises:

NotImplementedError – the method needs to be defined in the sub-classes

_transition_grad_x_log_pdf(gxlpdf, out, i, hdim, sdim, gx_hyper, gx_comp, s1, s3)[source]
_ll_grad_x_log_pdf(gxllev, out, hdim, gx_hyper, s2, s3)[source]

grad_x_log_pdf(x, cache=None, **kwargs)[source]

[Abstract] Evaluate $$\nabla_{\bf x} \log \pi({\bf x})$$

Parameters:
• x (ndarray [$$m,d$$]) – evaluation points

• params (dict) – parameters

• idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

Returns:

(ndarray [$$m,d$$]) – values of $$\nabla_x\log\pi$$ at the x points.

Raises:

NotImplementedError – the method needs to be defined in the sub-classes

tuple_grad_x_log_pdf(x, cache=None, **kwargs)[source]

[Abstract] Compute the tuple $$\left(\log \pi({\bf x}), \nabla_{\bf x} \log \pi({\bf x})\right)$$

Parameters:
• x (ndarray [$$m,d$$]) – evaluation points

• params (dict) – parameters

• idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

• cache (dict) – cache

Returns:

(tuple) – containing $$\left(\log \pi({\bf x}), \nabla_{\bf x} \log \pi({\bf x})\right)$$

Raises:

NotImplementedError – the method needs to be defined in the sub-classes

hess_x_log_pdf(x, cache=None, **kwargs)[source]

[Abstract] Evaluate $$\nabla^2_{\bf x} \log \pi({\bf x})$$

Parameters:
• x (ndarray [$$m,d$$]) – evaluation points

• params (dict) – parameters

• idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].

Returns:

(ndarray [$$m,d,d$$]) – values of $$\nabla^2_x\log\pi$$ at the x points.

Raises:

NotImplementedError – the method needs to be defined in the sub-classes

class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.AR1TransitionDistribution(*args, **kwargs)[source]

Transition probability for an auto-regressive (1) process (possibly with hyper-parameters)

Defines the probability distribution $$\pi({\bf Z}_{k+1}\vert {\bf Z}_{k}, \Theta)=\pi({\bf Z}_{k+1} - T({\bf Z}_{k},\Theta) \vert \Theta)$$ for the lag-1 process

${\bf Z}_{k+1} = T({\bf Z}_k, \Theta) + \varepsilon \;, \quad \varepsilon \sim \nu_\pi$
Parameters:
• pi (Distribution or ConditionalDistribution) – distribution $$\pi:\mathbb{R}^d\times\mathbb{R}^{d_\theta}\rightarrow\mathbb{R}$$

• T (Map) – map $$T:\mathbb{R}^{d+d_\theta}\rightarrow\mathbb{R}^d$$

class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.SequentialHiddenMarkovChainDistribution(pi_list=[], ll_list=[], pi_hyper=None)[source]
Parameters:
• pi_list (list of ConditionalDistribution) – list of transition densities $$[\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots ]$$

• ll_list (list of LogLikelihood) – list of log-likelihoods $$\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}$$

• pi_hyper (Distribution) – prior on hyper-parameters $$h(\Theta)$$