TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions
¶
Module Contents¶
Classes¶
- Lag1TransitionDistribution: Transition probability for an auto-regressive (1) process (possibly with hyper-parameters)
- MarkovChainDistribution: Distribution of a Markov process (optionally with hyper-parameters)
- TimeHomogeneousMarkovChainDistribution: A Markov chain distribution where transitions do not depend on time.
- Lag1TransitionTimeHomogeneousMarkovChainDistribution: A Markov chain distribution defined by lag-1 transitions that do not depend on time.
- HiddenMarkovChainDistribution: Distribution of a hidden Markov chain model (optionally with hyper-parameters)
- HiddenTimeHomogeneousMarkovChainDistribution: Distribution of a hidden time-homogeneous Markov chain model
- HiddenLag1TransitionTimeHomogeneousMarkovChainDistribution: Distribution of a hidden time-homogeneous Markov chain model with lag-1 transitions
- MarkovComponentDistribution: \(i\)-th Markov component of a HiddenMarkovChainDistribution
- AR1TransitionDistribution: Transition probability for an auto-regressive (1) process (possibly with hyper-parameters)
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.Lag1TransitionDistribution(pi, T)[source]¶
Bases:
TransportMaps.Distributions.ConditionalDistribution
Transition probability for an auto-regressive (1) process (possibly with hyper-parameters)
Defines the probability distribution \(\pi({\bf Z}_{k+1}\vert {\bf Z}_{k}, \Theta)=\pi({\bf Z}_{k+1} - T({\bf Z}_{k},\Theta) \vert \Theta)\) for the lag-1 process
\[{\bf Z}_{k+1} = T({\bf Z}_k, \Theta) + \varepsilon \;, \quad \varepsilon \sim \nu_\pi\]
- Parameters:
  - pi (Distribution or ConditionalDistribution) – distribution \(\pi:\mathbb{R}^d\times\mathbb{R}^{d_\theta}\rightarrow\mathbb{R}\)
  - T (Map) – map \(T:\mathbb{R}^{d+d_\theta}\rightarrow\mathbb{R}^d\)
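As a concrete illustration of the definition above (a minimal sketch, not the library's API): with a scalar state, a linear map \(T(z) = 0.9\,z\) and standard normal noise \(\nu_\pi\), the transition log-density is the noise log-density evaluated at the residual \(z_{k+1} - T(z_k)\). All names below are hypothetical.

```python
import numpy as np

def lag1_log_pdf(z_next, z_prev, T, noise_log_pdf):
    # log pi(z_{k+1} | z_k) = log nu(z_{k+1} - T(z_k))
    return noise_log_pdf(z_next - T(z_prev))

# Standard normal noise nu_pi and a linear AR(1)-style map T(z) = 0.9 z
std_normal_log_pdf = lambda e: -0.5 * e**2 - 0.5 * np.log(2.0 * np.pi)
T = lambda z: 0.9 * z

lp = lag1_log_pdf(1.0, 1.0, T, std_normal_log_pdf)  # residual is 0.1
```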
- log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]¶
Evaluate \(\log \pi({\bf x}\vert{\bf y})\)
- Parameters:
  - x (ndarray [\(m,d\)]) – evaluation points
  - y (ndarray [\(m,d_y\)]) – conditioning values \({\bf Y}={\bf y}\)
  - params (dict) – parameters
  - idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].
  - cache (dict) – cache
- Returns:
  (ndarray [\(m\)]) – values of \(\log\pi\) at the x points.
- grad_x_log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]¶
Evaluate \(\nabla_{\bf x,y} \log \pi({\bf x}\vert{\bf y})\)
- Parameters:
  - x (ndarray [\(m,d\)]) – evaluation points
  - y (ndarray [\(m,d_y\)]) – conditioning values \({\bf Y}={\bf y}\)
  - params (dict) – parameters
  - idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].
  - cache (dict) – cache
- Returns:
  (ndarray [\(m,d\)]) – values of \(\nabla_x\log\pi\) at the x points.
- hess_x_log_pdf(x, y, params=None, idxs_slice=slice(None, None, None), cache=None)[source]¶
Evaluate \(\nabla^2_{\bf x,y} \log \pi({\bf x}\vert{\bf y})\)
- Parameters:
  - x (ndarray [\(m,d\)]) – evaluation points
  - y (ndarray [\(m,d_y\)]) – conditioning values \({\bf Y}={\bf y}\)
  - params (dict) – parameters
  - idxs_slice (slice) – if precomputed values are present, this parameter indicates at which of the points to evaluate. The number of indices represented by idxs_slice must match x.shape[0].
  - cache (dict) – cache
- Returns:
  (ndarray [\(m,d,d\)]) – values of \(\nabla^2_x\log\pi\) at the x points.
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.MarkovChainDistribution(pi_list: List[TransportMaps.Distributions.ConditionalDistribution], pi_hyper: TransportMaps.Distributions.Distribution | None = None)[source]¶
Bases:
TransportMaps.Distributions.FactorizedDistribution
Distribution of a Markov process (optionally with hyper-parameters)
For the index set \(A=[t_0,\ldots,t_k]\) with \(t_0<t_1<\ldots<t_k\), and the user-defined distributions \(\pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta)\), \(\pi({\bf Z}_{t_0} \vert \Theta)\) and \(\pi(\Theta)\), defines the distribution
\[\pi(\Theta, {\bf Z}_A) = \left( \prod_{i=1}^k \pi(t_i; {\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta) \right) \pi({\bf Z}_{t_0} \vert \Theta) \pi(\Theta)\]
associated with the process \({\bf Z}_A\).
- Parameters:
  - pi_list (list of ConditionalDistribution) – list of transition distributions \(\{\pi({\bf Z}_{t_0} \vert \Theta), \pi({\bf Z}_{t_1}\vert {\bf Z}_{t_{0}},\Theta), \ldots \}\)
  - pi_hyper (Distribution) – prior on hyper-parameters \(h(\Theta)\)
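The factorization above can be sketched numerically: the joint log-density is the initial log-density plus the sum of transition log-densities along the chain. A minimal sketch with hypothetical scalar Gaussian factors (not the library's API):

```python
import numpy as np

def markov_chain_log_pdf(z, log_pi_init, log_pi_trans):
    # log pi(z_{t_0},...,z_{t_k}) = log pi(z_{t_0}) + sum_i log pi(z_{t_i} | z_{t_{i-1}})
    lp = log_pi_init(z[0])
    for z_curr, z_prev in zip(z[1:], z[:-1]):
        lp += log_pi_trans(z_curr, z_prev)
    return lp

norm_lp = lambda e: -0.5 * e**2 - 0.5 * np.log(2.0 * np.pi)
log_pi_init = lambda z0: norm_lp(z0)          # Z_{t_0} ~ N(0, 1)
log_pi_trans = lambda z, zp: norm_lp(z - zp)  # Z_{t_i} | Z_{t_{i-1}} ~ N(Z_{t_{i-1}}, 1)

lp = markov_chain_log_pdf([0.0, 1.0, 1.0], log_pi_init, log_pi_trans)
```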
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.TimeHomogeneousMarkovChainDistribution(pi_init, pi_trans, pi_list=[], pi_hyper=None)[source]¶
Bases:
MarkovChainDistribution
A Markov chain distribution where transitions do not depend on time.
The distribution is defined by
\[\pi(\Theta, {\bf Z}_A) = \left( \prod_{i=1}^k \pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta) \right) \pi({\bf Z}_{t_0} \vert \Theta) \pi(\Theta)\]
- Parameters:
  - pi_init (Distribution) – distribution \(\pi({\bf Z}_{t_0}\vert\Theta)\)
  - pi_trans (ConditionalDistribution) – transition distribution \(\pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta)\)
  - pi_list (list of ConditionalDistribution) – list of transition distributions \(\{\pi({\bf Z}_{t_0} \vert \Theta), \pi({\bf Z}_{t_1}\vert {\bf Z}_{t_{0}},\Theta), \ldots \}\)
  - pi_hyper (Distribution) – prior on hyper-parameters \(h(\Theta)\)
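Because the transition does not depend on time, a realization of such a chain can be drawn by repeatedly applying one transition kernel. A hypothetical sketch with a Gaussian random-walk transition (not the library's API):

```python
import numpy as np

def sample_chain(rng, k, sample_init, sample_trans):
    # z_{t_0} ~ pi_init, then z_{t_i} ~ pi_trans(. | z_{t_{i-1}}) for i = 1..k
    z = [sample_init(rng)]
    for _ in range(k):
        z.append(sample_trans(rng, z[-1]))
    return z

rng = np.random.default_rng(0)
sample_init = lambda rng: rng.normal(0.0, 1.0)
sample_trans = lambda rng, zp: zp + rng.normal(0.0, 1.0)  # random-walk transition

chain = sample_chain(rng, 10, sample_init, sample_trans)  # k = 10 steps after z_{t_0}
```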
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.Lag1TransitionTimeHomogeneousMarkovChainDistribution(pi_init, dyn_map, pi_dyn, pi_list=[], pi_hyper=None)[source]¶
Bases:
TimeHomogeneousMarkovChainDistribution
A Markov chain distribution defined by lag-1 transitions that do not depend on time.
The distribution is defined by
\[\pi(\Theta, {\bf Z}_A) = \left( \prod_{i=1}^k \pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta) \right) \pi({\bf Z}_{t_0} \vert \Theta) \pi(\Theta)\]
where each conditional \(\pi({\bf Z}_{t_i} \vert {\bf Z}_{t_{i-1}}, \Theta)\) describes the lag-1 process
\[{\bf Z}_{k+1} = T({\bf Z}_k, \Theta) + \varepsilon \;, \quad \varepsilon \sim \pi_{\text{dyn}}\]
- Parameters:
  - pi_init (Distribution) – distribution \(\pi({\bf Z}_{t_0}\vert\Theta)\)
  - dyn_map (Map) – map \(T\)
  - pi_dyn (Distribution or ConditionalDistribution) – distribution of the noise of the dynamics
  - pi_list (list of ConditionalDistribution) – list of transition distributions \(\{\pi({\bf Z}_{t_0} \vert \Theta), \pi({\bf Z}_{t_1}\vert {\bf Z}_{t_{0}},\Theta), \ldots \}\)
  - pi_hyper (Distribution) – prior on hyper-parameters \(h(\Theta)\)
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.HiddenMarkovChainDistribution(pi_markov: MarkovChainDistribution, ll_list: List[TransportMaps.Likelihoods.LikelihoodBase.LogLikelihood] = [])[source]¶
Bases:
TransportMaps.Distributions.Inference.InferenceBase.BayesPosteriorDistribution
Distribution of a Hidden Markov chain model (optionally with hyper-parameters)
For the index sets \(A=[t_0,\ldots,t_k]\) with \(t_0<t_1<\ldots<t_k\) and \(B \subseteq A\), the user-defined transition densities (Distribution) \(\{\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots \}\), the prior \(\pi(\Theta)\) and the log-likelihoods (LogLikelihood) \(\{\log\mathcal{L}({\bf y}_t \vert{\bf Z}_t,\Theta)\}_{t\in B}\) define the distribution
\[\pi(\Theta, {\bf Z}_A \vert {\bf y}_B) = \left( \prod_{t\in B} \mathcal{L}(t; {\bf y}_t \vert {\bf Z}_t, \Theta) \right) \pi({\bf Z}_{t_0},\ldots,{\bf Z}_{t_k},\Theta)\]
associated with the process \({\bf Z}_A\), where \(\pi({\bf Z}_{t_0},\ldots,{\bf Z}_{t_k},\Theta)\) is a Markov chain distribution.
Note
Each of the log-likelihoods already embeds its own data \({\bf y}_t\). The list of log-likelihoods must have the same length as the list of transitions. Missing data are indicated by setting the corresponding entry in the list of log-likelihoods to None.
- Parameters:
  - pi_markov (MarkovChainDistribution) – Markov chain distribution describing \(\pi({\bf Z}_{t_0},\ldots,{\bf Z}_{t_k},\Theta)\)
  - ll_list (list of LogLikelihood) – list of log-likelihoods \(\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}\)
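The unnormalized posterior log-density above is the Markov-chain prior log-density plus the log-likelihood terms at observed times, with None entries (missing data) contributing nothing. A hypothetical scalar sketch (not the library's API):

```python
import numpy as np

def hmm_log_posterior(z, chain_log_pdf, ll_list):
    # log pi(Z_A | y_B) = sum_{t in B} log L(y_t | z_t) + log pi(z_{t_0},...,z_{t_k}) + const
    lp = chain_log_pdf(z)
    for ll, z_t in zip(ll_list, z):
        if ll is not None:  # None marks a time with no observation
            lp += ll(z_t)
    return lp

norm_lp = lambda e: -0.5 * e**2 - 0.5 * np.log(2.0 * np.pi)
# Gaussian random-walk prior chain: Z_{t_0} ~ N(0,1), Z_{t_i} | Z_{t_{i-1}} ~ N(Z_{t_{i-1}}, 1)
chain_log_pdf = lambda z: norm_lp(z[0]) + sum(norm_lp(a - b) for a, b in zip(z[1:], z[:-1]))
# Observation y = 1.0 at t_1 only; t_0 and t_2 are unobserved
ll_list = [None, lambda z_t: norm_lp(1.0 - z_t), None]

lp = hmm_log_posterior([0.0, 1.0, 1.0], chain_log_pdf, ll_list)
```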
- append(pi, ll=None)[source]¶
Append a new transition distribution \(\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})\) and the corresponding log-likelihood \(\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})\) if any.
- Parameters:
  - pi (ConditionalDistribution) – transition distribution \(\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})\)
  - ll (LogLikelihood) – log-likelihood \(\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})\). Missing data are represented by None.
- get_MarkovComponent(i, n=1, state_map=None, hyper_map=None)[source]¶
Extract the (\(n\geq 1\) steps) \(i\)-th Markov component from the distribution
If \(i=-1\) the Markov component is given by
\[\pi^{0:n}(\Theta, {\bf Z}_{t_0}, \ldots, {\bf Z}_{t_n}) := \left( \prod_{t \in \{t_0,\ldots,t_n\} \cap B} \mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t) \right) \left( \prod_{i=1}^n \pi({\bf Z}_{t_i}\vert \Theta, {\bf Z}_{t_{i-1}}) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta) \;.\]
If \(i\geq 0\) then the Markov component is
\[\pi^{i:i+n}\left(\Theta, {\bf Z}_{t_i}, \ldots, {\bf Z}_{t_{i+n}}\right) := \eta(\Theta, {\bf Z}_{t_i}) \left( \prod_{t \in \left\{t_{i+1},\ldots,t_{i+n}\right\} \cap B} \mathcal{L}\left({\bf y}_t \vert \mathfrak{T}_{i-1}^{\Theta}(\Theta), {\bf Z}_t\right) \right) \left( \prod_{k=i+1}^{i+n-1} \pi\left({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}}, \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \right) \pi\left({\bf Z}_{t_{i+1}} \vert \mathfrak{M}_{i-1}^{1}(\Theta, {\bf Z}_{t_i}), \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \;,\]
where \(\mathfrak{T}_{i-1}^{\Theta}\) and \(\mathfrak{M}_{i-1}^{1}\) are the hyper-parameter and forecast components of the map computed at step \(i-1\), using the sequential algorithm described in [TM3].
- Parameters:
- Returns:
  (Distribution) – Markov component \(\pi^{i:i+n}\).
- trim(nsteps)[source]¶
Trim the Markov chain to the first nsteps steps.
- Parameters:
  nsteps (int) – number of steps in the Markov chain of the returned distribution
- Returns:
  (HiddenMarkovChainDistribution) – trimmed distribution
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.HiddenTimeHomogeneousMarkovChainDistribution(pi_init, pi_trans, pi_list=[], pi_hyper=None, ll_list=[])[source]¶
Bases:
HiddenMarkovChainDistribution
Distribution of a hidden time-homogeneous Markov chain model
This is a sequential hidden Markov chain where transitions do not depend on time. The distribution is then defined as
\[\pi(\Theta, {\bf Z}_A \vert {\bf y}_B) = \left( \prod_{t\in B} \mathcal{L}(t; {\bf y}_t \vert {\bf Z}_t, \Theta) \right) \left( \prod_{i=1}^k \pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta)\]
- Parameters:
  - pi_init (Distribution) – distribution \(\pi({\bf Z}_{t_0}\vert\Theta)\)
  - pi_trans (ConditionalDistribution) – transition distribution \(\pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta)\)
  - pi_list (list of ConditionalDistribution) – list of transition densities \([\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots ]\)
  - ll_list (list of LogLikelihood) – list of log-likelihoods \(\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}\)
  - pi_hyper (Distribution) – prior on hyper-parameters \(h(\Theta)\)
- append(ll=None)[source]¶
Append a new step to the chain, re-using the time-homogeneous transition distribution \(\pi({\bf Z}_{t_{k+1}}\vert\Theta, {\bf Z}_{t_{k}})\), together with the corresponding log-likelihood \(\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})\) if any.
- Parameters:
  ll (LogLikelihood) – log-likelihood \(\log\mathcal{L}({\bf y}_{t_k} \vert \Theta, {\bf Z}_{t_k})\). Missing data are represented by None.
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.HiddenLag1TransitionTimeHomogeneousMarkovChainDistribution(pi_init, dyn_map, pi_dyn, pi_list=[], pi_hyper=None, ll_list=[])[source]¶
Bases:
HiddenTimeHomogeneousMarkovChainDistribution
Distribution of a hidden time-homogeneous Markov chain model with lag-1 transitions
This is a sequential hidden Markov chain where transitions do not depend on time and describe a lag-1 process. The distribution is then defined as
\[\pi(\Theta, {\bf Z}_A \vert {\bf y}_B) = \left( \prod_{t\in B} \mathcal{L}(t; {\bf y}_t \vert {\bf Z}_t, \Theta) \right) \left( \prod_{i=1}^k \pi({\bf Z}_{t_i}\vert{\bf Z}_{t_{i-1}},\Theta) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta)\]
- Parameters:
  - pi_init (Distribution) – distribution \(\pi({\bf Z}_{t_0}\vert\Theta)\)
  - dyn_map (Map) – map \(T\)
  - pi_dyn (Distribution or ConditionalDistribution) – distribution of the noise of the dynamics
  - pi_list (list of ConditionalDistribution) – list of transition densities \([\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots ]\)
  - ll_list (list of LogLikelihood) – list of log-likelihoods \(\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}\)
  - pi_hyper (Distribution) – prior on hyper-parameters \(h(\Theta)\)
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.MarkovComponentDistribution(idx0, pi_list, ll_list, state_dim, hyper_dim, pi_hyper=None, state_map=None, hyper_map=None)[source]¶
Bases:
TransportMaps.Distributions.Distribution
\(i\)-th Markov component of a HiddenMarkovChainDistribution
If \(i=-1\) the Markov component is given by
\[\pi^{0:n}(\Theta, {\bf Z}_{t_0}, \ldots, {\bf Z}_{t_n}) := \left( \prod_{t \in \{t_0,\ldots,t_n\} \cap B} \mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t) \right) \left( \prod_{i=1}^n \pi({\bf Z}_{t_i}\vert \Theta, {\bf Z}_{t_{i-1}}) \right) \pi({\bf Z}_{t_0}\vert\Theta) \pi(\Theta) \;.\]
If \(i\geq 0\) then the Markov component is
\[\pi^{i:i+n}\left(\Theta, {\bf Z}_{t_i}, \ldots, {\bf Z}_{t_{i+n}}\right) := \eta(\Theta, {\bf Z}_{t_i}) \left( \prod_{t \in \left\{t_{i+1},\ldots,t_{i+n}\right\} \cap B} \mathcal{L}\left({\bf y}_t \vert \mathfrak{T}_{i-1}^{\Theta}(\Theta), {\bf Z}_t\right) \right) \left( \prod_{k=i+1}^{i+n-1} \pi\left({\bf Z}_{t_{k+1}}\vert {\bf Z}_{t_{k}}, \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \right) \pi\left({\bf Z}_{t_{i+1}} \vert \mathfrak{M}_{i-1}^{1}(\Theta, {\bf Z}_{t_i}), \mathfrak{T}_{i-1}^{\Theta}(\Theta) \right) \;,\]
where \(\mathfrak{T}_{i-1}^{\Theta}\) and \(\mathfrak{M}_{i-1}^{1}\) are the hyper-parameter and forecast components of the map computed at step \(i-1\), using the sequential algorithm described in [TM3].
- Parameters:
  - idx0 (int) – index \(i\) of the Markov component
  - pi_list (list of Distribution) – list of \(n\) transition densities
  - ll_list (list of LogLikelihood) – list of \(n\) log-likelihoods (None for missing data) \(\{\log\mathcal{L}({\bf y}_t \vert \Theta, {\bf Z}_t)\}_{t\in B}\)
  - state_dim (int) – dimension of the state-space
  - hyper_dim (int) – dimension of the parameter-space
  - pi_hyper (Distribution) – prior on hyper-parameters \(h(\Theta)\)
  - state_map (TransportMap) – forecast map \(\mathfrak{M}_{i-1}^{1}\) from step \(i-1\)
  - hyper_map (TransportMap) – hyper-parameter map \(\mathfrak{T}_{i-1}^{\Theta}\) from step \(i-1\)
- log_pdf(x, cache=None, **kwargs)[source]¶
[Abstract] Evaluate \(\log \pi({\bf x})\)
- Parameters:
  - x (ndarray [\(m,d\)]) – evaluation points
  - cache (dict) – cache
- Returns:
  (ndarray [\(m\)]) – values of \(\log\pi\) at the x points.
- Raises:
NotImplementedError – the method needs to be defined in the sub-classes
- grad_x_log_pdf(x, cache=None, **kwargs)[source]¶
[Abstract] Evaluate \(\nabla_{\bf x} \log \pi({\bf x})\)
- Parameters:
  - x (ndarray [\(m,d\)]) – evaluation points
  - cache (dict) – cache
- Returns:
  (ndarray [\(m,d\)]) – values of \(\nabla_x\log\pi\) at the x points.
- Raises:
NotImplementedError – the method needs to be defined in the sub-classes
- tuple_grad_x_log_pdf(x, cache=None, **kwargs)[source]¶
[Abstract] Compute the tuple \(\left(\log \pi({\bf x}), \nabla_{\bf x} \log \pi({\bf x})\right)\)
- Parameters:
  - x (ndarray [\(m,d\)]) – evaluation points
  - cache (dict) – cache
- Returns:
  (tuple) – containing \(\left(\log \pi({\bf x}), \nabla_{\bf x} \log \pi({\bf x})\right)\)
- Raises:
NotImplementedError – the method needs to be defined in the sub-classes
- hess_x_log_pdf(x, cache=None, **kwargs)[source]¶
[Abstract] Evaluate \(\nabla^2_{\bf x} \log \pi({\bf x})\)
- Parameters:
  - x (ndarray [\(m,d\)]) – evaluation points
  - cache (dict) – cache
- Returns:
  (ndarray [\(m,d,d\)]) – values of \(\nabla^2_x\log\pi\) at the x points.
- Raises:
NotImplementedError – the method needs to be defined in the sub-classes
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.AR1TransitionDistribution(*args, **kwargs)[source]¶
Bases:
Lag1TransitionDistribution
Transition probability for an auto-regressive (1) process (possibly with hyper-parameters)
Defines the probability distribution \(\pi({\bf Z}_{k+1}\vert {\bf Z}_{k}, \Theta)=\pi({\bf Z}_{k+1} - T({\bf Z}_{k},\Theta) \vert \Theta)\) for the lag-1 process
\[{\bf Z}_{k+1} = T({\bf Z}_k, \Theta) + \varepsilon \;, \quad \varepsilon \sim \nu_\pi\]
- Parameters:
  - pi (Distribution or ConditionalDistribution) – distribution \(\pi:\mathbb{R}^d\times\mathbb{R}^{d_\theta}\rightarrow\mathbb{R}\)
  - T (Map) – map \(T:\mathbb{R}^{d+d_\theta}\rightarrow\mathbb{R}^d\)
- class TransportMaps.Distributions.Decomposable.SequentialInferenceDistributions.SequentialHiddenMarkovChainDistribution(pi_list=[], ll_list=[], pi_hyper=None)[source]¶
Bases:
HiddenMarkovChainDistribution
- Parameters:
  - pi_list (list of ConditionalDistribution) – list of transition densities \([\pi({\bf Z}_{t_0}\vert\Theta), \pi({\bf Z}_{t_1}\vert{\bf Z}_{t_{0}},\Theta), \ldots ]\)
  - ll_list (list of LogLikelihood) – list of log-likelihoods \(\{\log\mathcal{L}({\bf y}_t \vert {\bf Z}_t,\Theta)\}_{t\in B}\)
  - pi_hyper (Distribution) – prior on hyper-parameters \(h(\Theta)\)