TransportMaps.Samplers.MarkovChainSamplers

Module Contents

Classes

MetropolisHastingsIndependentProposalsSampler

Metropolis-Hastings sampler of distribution d, using the state-independent proposal distribution d_prop

MetropolisHastingsSampler

Metropolis-Hastings sampler of distribution d, with proposal d_prop

MetropolisHastingsWithinGibbsSampler

Metropolis-Hastings within Gibbs sampler of distribution d, with proposals d_prop_list and Gibbs sampling blocks block_list

HamiltonianMonteCarloSampler

Hamiltonian Monte Carlo sampler of distribution d

class TransportMaps.Samplers.MarkovChainSamplers.MetropolisHastingsIndependentProposalsSampler(d, d_prop)[source]

Bases: TransportMaps.Samplers.SamplerBase.Sampler

Metropolis-Hastings sampler of distribution d, using the state-independent proposal distribution d_prop

Parameters:
  • d (Distributions.Distribution) – distribution to sample from

  • d_prop (Distributions.Distribution) – proposal distribution

rvs(m, x0=None, mpi_pool_tuple=(None, None), disable_tqdm: bool = True)[source]

Generate a Markov Chain of \(m\) equally weighted samples from the distribution d

Parameters:
  • m (int) – number of samples to generate

  • x0 (ndarray [\(1,d\)]) – initial chain value

  • mpi_pool_tuple (tuple [2] of mpi_map.MPI_Pool) – pool of processes to be used for the evaluation of d and d_prop

  • disable_tqdm (bool) – whether to disable tqdm

Returns:

(tuple (ndarray [\(m,d\)], ndarray [\(m\)])) – chain of points and the corresponding weights
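The accept/reject rule behind this sampler can be illustrated with a self-contained NumPy sketch (this is not the TransportMaps API; `mh_independent_rvs`, `prop_rvs`, and `prop_logpdf` are hypothetical names, and the proposal is drawn independently of the current state, so the acceptance ratio is \(\min(1, \pi(y)q(x)/(\pi(x)q(y)))\)):

```python
import numpy as np

def mh_independent_rvs(log_target, prop_rvs, prop_logpdf, m, x0, rng):
    """Metropolis-Hastings with a proposal drawn independently of the state.

    Normalizing constants of the target and proposal cancel in the
    acceptance ratio, so unnormalized log-densities are enough.
    """
    chain = np.empty(m)
    x = float(x0)
    lw_x = log_target(x) - prop_logpdf(x)   # importance log-weight of the state
    for i in range(m):
        y = prop_rvs()
        lw_y = log_target(y) - prop_logpdf(y)
        # Accept with probability min(1, pi(y) q(x) / (pi(x) q(y)))
        if np.log(rng.uniform()) < lw_y - lw_x:
            x, lw_x = y, lw_y
        chain[i] = x
    return chain, np.full(m, 1.0 / m)       # equally weighted samples

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * ((x - 1.0) / 0.5) ** 2   # N(1, 0.5^2), unnormalized
prop_scale = 2.0                                       # heavier-tailed than the target
prop_rvs = lambda: prop_scale * rng.standard_normal()
prop_logpdf = lambda x: -0.5 * (x / prop_scale) ** 2   # N(0, 2^2), unnormalized
samples, weights = mh_independent_rvs(log_target, prop_rvs, prop_logpdf, 5000, 0.0, rng)
```

An independent proposal mixes well only if it covers the target's support; a heavier-tailed proposal, as above, is a common safe choice.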

class TransportMaps.Samplers.MarkovChainSamplers.MetropolisHastingsSampler(d, d_prop)[source]

Bases: TransportMaps.Samplers.SamplerBase.Sampler

Metropolis-Hastings sampler of distribution d, with proposal d_prop

Parameters:
  • d (Distributions.Distribution) – distribution \(\pi({\bf x})\) to sample from

  • d_prop (Distributions.ConditionalDistribution) – conditional distribution \(\pi({\bf y}\vert{\bf x})\) to use as a proposal

rvs(m, x0=None, mpi_pool_tuple=(None, None), disable_tqdm: bool = True)[source]

Generate a Markov Chain of \(m\) equally weighted samples from the distribution d

Parameters:
  • m (int) – number of samples to generate

  • x0 (ndarray [\(1,d\)]) – initial chain value

  • mpi_pool_tuple (tuple [2] of mpi_map.MPI_Pool) – pool of processes to be used for the evaluation of d and d_prop

  • disable_tqdm (bool) – whether to disable tqdm

Returns:

(tuple (ndarray [\(m,d\)], ndarray [\(m\)])) – chain of points and the corresponding weights
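For a state-dependent proposal \(\pi({\bf y}\vert{\bf x})\), the most common special case is a symmetric random walk, for which the Hastings correction cancels. A minimal NumPy sketch of that case (hypothetical names, not the TransportMaps API):

```python
import numpy as np

def mh_rvs(log_target, m, x0, step, rng):
    # Random-walk proposal y ~ N(x, step^2) is symmetric, so the Hastings
    # correction q(x|y)/q(y|x) cancels in the acceptance ratio.
    chain = np.empty(m)
    x = float(x0)
    lp_x = log_target(x)
    for i in range(m):
        y = x + step * rng.standard_normal()
        lp_y = log_target(y)
        # Accept with probability min(1, pi(y) / pi(x))
        if np.log(rng.uniform()) < lp_y - lp_x:
            x, lp_x = y, lp_y
        chain[i] = x
    return chain, np.full(m, 1.0 / m)   # equally weighted samples

rng = np.random.default_rng(1)
log_target = lambda x: -0.5 * (x - 2.0) ** 2   # N(2, 1), up to a constant
samples, weights = mh_rvs(log_target, 10000, 0.0, 1.0, rng)
```

For an asymmetric proposal the log acceptance ratio picks up the extra term \(\log q(x\vert y) - \log q(y\vert x)\).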

class TransportMaps.Samplers.MarkovChainSamplers.MetropolisHastingsWithinGibbsSampler(d, d_prop_list, block_list=None, block_prob_list=None)[source]

Bases: TransportMaps.Samplers.SamplerBase.Sampler

Metropolis-Hastings within Gibbs sampler of distribution d, with proposals d_prop_list and Gibbs sampling blocks block_list

Parameters:
  • d (Distributions.Distribution) – distribution \(\pi({\bf x})\) to sample from

  • d_prop_list (list of Distributions.ConditionalDistribution) – conditional distributions \(\pi({\bf y}\vert{\bf x})\) to use as proposals, one per block

  • block_list (list of list) – list of blocks of variable indices

  • block_prob_list (list of float) – probabilities in (0,1] of sampling each block

rvs(m, x0=None, mpi_pool_tuple=(None, None), disable_tqdm: bool = True)[source]

Generate a Markov Chain of \(m\) equally weighted samples from the distribution d

Parameters:
  • m (int) – number of samples to generate

  • x0 (ndarray [\(1,d\)]) – initial chain value

  • mpi_pool_tuple (tuple [2] of mpi_map.MPI_Pool) – pool of processes to be used for the evaluation of d and the proposals

  • disable_tqdm (bool) – whether to disable tqdm

Returns:

(tuple (ndarray [\(m,d\)], ndarray [\(m\)])) – chain of points and the corresponding weights
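The within-Gibbs idea is to update one block of coordinates at a time with an MH step, keeping the other coordinates fixed. A NumPy sketch with a systematic scan over blocks (hypothetical names, not the TransportMaps API; the class can also visit blocks at random according to block_prob_list):

```python
import numpy as np

def mh_within_gibbs_rvs(log_target, m, x0, block_list, step, rng):
    # Each sweep proposes a symmetric random-walk update of one block of
    # coordinates at a time and accepts/rejects it with the MH rule.
    x = np.array(x0, dtype=float)
    chain = np.empty((m, x.size))
    lp_x = log_target(x)
    for i in range(m):
        for block in block_list:
            y = x.copy()
            y[block] += step * rng.standard_normal(len(block))
            lp_y = log_target(y)
            if np.log(rng.uniform()) < lp_y - lp_x:
                x, lp_x = y, lp_y
        chain[i] = x
    return chain, np.full(m, 1.0 / m)   # equally weighted samples

rng = np.random.default_rng(2)
# Correlated 2-d Gaussian target, unnormalized log-density
cov_inv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))
log_target = lambda x: -0.5 * x @ cov_inv @ x
block_list = [[0], [1]]   # one block per coordinate
samples, weights = mh_within_gibbs_rvs(log_target, 5000, np.zeros(2), block_list, 1.0, rng)
```

Blocking is most useful when groups of variables are strongly correlated: updating such a group jointly avoids the slow mixing of single-coordinate moves.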

class TransportMaps.Samplers.MarkovChainSamplers.HamiltonianMonteCarloSampler(d)[source]

Bases: TransportMaps.Samplers.SamplerBase.Sampler

Hamiltonian Monte Carlo sampler of distribution d

This sampler requires the package pyhmc.

Parameters:

d (Distributions.Distribution) – distribution to sample from

rvs(m, x0=None, display=False, n_steps=1, persistence=False, decay=0.9, epsilon=0.2, window=1, return_logp=False, return_diagnostics=False, random_state=None)[source]

Generate a Markov Chain of \(m\) equally weighted samples from the distribution d

See also

The pyhmc documentation for the meaning of the remaining arguments.
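The mechanics that pyhmc implements can be sketched in plain NumPy (hypothetical names, not the pyhmc or TransportMaps API): draw a Gaussian momentum, integrate Hamiltonian dynamics with the leapfrog scheme using step size epsilon for n_steps steps, and accept or reject based on the change in total energy.

```python
import numpy as np

def hmc_rvs(log_target, grad_log_target, m, x0, epsilon, n_steps, rng):
    x = np.array(x0, dtype=float)
    chain = np.empty((m, x.size))
    for i in range(m):
        p = rng.standard_normal(x.size)          # fresh Gaussian momentum
        y, q = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics
        q += 0.5 * epsilon * grad_log_target(y)  # initial half step, momentum
        for step in range(n_steps):
            y += epsilon * q                     # full step, position
            if step < n_steps - 1:
                q += epsilon * grad_log_target(y)
        q += 0.5 * epsilon * grad_log_target(y)  # final half step, momentum
        # Metropolis correction on H(x, p) = -log pi(x) + |p|^2 / 2
        h_old = -log_target(x) + 0.5 * p @ p
        h_new = -log_target(y) + 0.5 * q @ q
        if np.log(rng.uniform()) < h_old - h_new:
            x = y
        chain[i] = x
    return chain, np.full(m, 1.0 / m)            # equally weighted samples

rng = np.random.default_rng(3)
log_target = lambda x: -0.5 * x @ x              # standard 2-d Gaussian
grad_log_target = lambda x: -x
samples, weights = hmc_rvs(log_target, grad_log_target, 3000, np.zeros(2), 0.2, 10, rng)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, long trajectories can propose distant points that are still accepted with high probability, which is what makes HMC effective in high dimensions.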