TransportMaps Q&A - Recent questions and answers
https://transportmaps.mit.edu/qa/index.php?qa=qa
Powered by Question2Answer

Answered: Code to extract sparse transport support from Markov Graph
https://transportmaps.mit.edu/qa/index.php?qa=503&qa_1=code-to-extract-sparse-transport-support-from-markov-graph&show=504#a504
<p>Hi Patrick, </p><p>Thank you very much for your question! Given a Markov random field (MRF), the sparsity of the inverse transport map (i.e., the active variable sets in the code you referenced) can be extracted by applying a variable elimination algorithm to the graph. We have implemented this as part of our <a rel="nofollow" href="https://transportmaps.mit.edu/docs/example-inverse-sparsity-identification.html">SING algorithm</a> for learning the structure of graphical models from data, which alternates between using a sparse map to learn the adjacency structure of the graph and extracting the map sparsity from the adjacency matrix. </p><p>The code block to extract the active variables as a list, given a matrix A encoding the edges of the MRF, is:</p><pre><code>import itertools
import numpy as np

# Variable elimination moving from the highest node (dim-1) down to node 2 (at most)
dim = A.shape[0]
ALower = np.tril(A)
for i in range(dim-1, 1, -1):
    non_zero_ind = np.where(ALower[i, :i] != 0)[0]
    if len(non_zero_ind) > 1:
        # Add fill-in edges between all co-parents of node i
        co_parents = list(itertools.combinations(non_zero_ind, 2))
        for j in range(len(co_parents)):
            row_index = max(co_parents[j])
            col_index = min(co_parents[j])
            ALower[row_index, col_index] = 1.0

# Collect the list of active variables for each component
active_vars = []
for i in range(dim):
    actives = np.where(ALower[i, :] != 0)
    active_list = list(set(actives[0]) | set([i]))
    active_list.sort(key=int)
    active_vars.append(active_list)</code></pre><p>Please let us know if you have any other questions!</p><p>Best,</p><p>Ricardo</p>usagehttps://transportmaps.mit.edu/qa/index.php?qa=503&qa_1=code-to-extract-sparse-transport-support-from-markov-graph&show=504#a504Sun, 12 Feb 2023 00:33:50 +0000
Answered: applying optimal tranport map to filtering theory
https://transportmaps.mit.edu/qa/index.php?qa=251&qa_1=applying-optimal-tranport-map-to-filtering-theory&show=256#a256
<p>Hello Peng,</p><p>There are mainly two lines of work regarding filtering/smoothing/parameter estimation (or, more generally, data assimilation):</p><ul><li>The first one is along the lines of <strong>variational</strong> filters/smoothers and is presented in <a rel="nofollow" href="http://jmlr.org/papers/v19/17-747.html">"Inference via low-dimensional couplings"</a>. It is already implemented in TransportMaps 2.0, in the version described in the paper.</li><li>The second one is along the lines of <strong>ensemble</strong> filters/smoothers and is presented in <a rel="nofollow" href="https://arxiv.org/abs/1907.00389">"Coupling techniques for nonlinear ensemble filtering"</a>. The lead in this case is Ricardo Baptista, and you should contact him for more details. These versions of the filters are not included in the code yet (I know Ricardo has some software for them); they might be included in future releases of TransportMaps.</li></ul><div>On both fronts there are many open questions and new challenges, which we could discuss offline.</div><div> </div><div>Best,</div><div> Daniele</div>theoryhttps://transportmaps.mit.edu/qa/index.php?qa=251&qa_1=applying-optimal-tranport-map-to-filtering-theory&show=256#a256Fri, 16 Aug 2019 15:32:26 +0000
Answered: constructing transport map from given samples and use it to get/samples from standard normal
https://transportmaps.mit.edu/qa/index.php?qa=205&qa_1=constructing-transport-given-samples-samples-standard-normal&show=232#a232
<p>Hi Saad, your question got cut off for some reason, but I imagine you are referring to the rescaling done by the map \(L\), which (practically) takes the original sample and scales it to obtain a sample with mean 0 and variance 1. This is done because the map \(S\) is going to use polynomials orthogonal with respect to \(N(0,1)\), which implies that they are mostly "accurate" on the bulk of \(N(0,1)\).</p><p>This question is similar to the issue described <a rel="nofollow" href="https://transportmaps.mit.edu/qa/index.php?qa=5&qa_1=cant-build-standard-normal-distribution-gumbel-distribution">here</a>.</p><p>I hope this helps,</p><p> Daniele</p>theoryhttps://transportmaps.mit.edu/qa/index.php?qa=205&qa_1=constructing-transport-given-samples-samples-standard-normal&show=232#a232Tue, 23 Jul 2019 15:43:39 +0000
Answered: I get an MPI error for Inverse Transport.
https://transportmaps.mit.edu/qa/index.php?qa=120&qa_1=i-get-an-mpi-error-for-inverse-transport&show=122#a122
<p>Hi Hassan, I think that the documentation is lacking some explanation here.</p><p>When solving the problem \(\min_T \mathcal{D}_{\text{KL}}\left( T_\sharp \pi \Vert \rho \right) \), where \( \pi \) is your target from which you have samples and \( \rho \) is the Standard Normal, the code will:</p><ol><li>flip the problem to solve \( \arg\min_T \mathcal{D}_{\text{KL}}\left( \pi \Vert T^\sharp \rho \right) = \arg\min_T \mathbb{E}_{\pi} \left[ -\log T^\sharp \rho \right] \approx \arg\min_T \sum_{i=1}^n -\log T^\sharp \rho({\bf x}_i) \). This is done in <a rel="nofollow" href="https://transportmaps.mit.edu/docs/_modules/TransportMaps/Distributions/TransportMapDistributions.html#PushForwardTransportMapDistribution.minimize_kl_divergence">PushForwardTransportMapDistribution.minimize_kl_divergence</a>.</li><li>Then it will call <a rel="nofollow" href="https://transportmaps.mit.edu/docs/api-TransportMaps-Maps.html#TransportMaps.Maps.MonotonicTriangularTransportMap.minimize_kl_divergence">MonotonicTriangularTransportMap.minimize_kl_divergence</a>.</li><li>If the distribution \(\rho\) is a product distribution (as the Standard Normal is), the map will be learned componentwise (see Section 4.2 of <a rel="nofollow" href="https://rd.springer.com/content/pdf/10.1007%2F978-3-319-11259-6_23-1.pdf">Marzouk et al.</a>).</li></ol><div>Since usually the first components are not computationally expensive (few parameters), one can provide a list of \(d\) elements (\(d\) being the number of components of the map, i.e. the dimension of the problem), where each element is either <span style="font-family:Courier New,Courier,monospace">None</span><span style="font-family:Arial,Helvetica,sans-serif"> or an </span><span style="font-family:Courier New,Courier,monospace">mpi_pool</span><span style="font-family:Arial,Helvetica,sans-serif">.</span> The point is that there is a tradeoff between using a single process and using multiple processes: multiple processes give the benefit of faster function/gradient evaluations, at the expense of a fixed communication cost. This communication cost is sometimes higher than the actual function/gradient evaluation cost (e.g. for the first components), and therefore it does not make sense to use parallelism for them.</div><div> </div><div>I hope this clarifies it a little bit.</div><div>Daniele</div>usagehttps://transportmaps.mit.edu/qa/index.php?qa=120&qa_1=i-get-an-mpi-error-for-inverse-transport&show=122#a122Thu, 21 Mar 2019 13:53:50 +0000
Answered: Inverse map computational speed
https://transportmaps.mit.edu/qa/index.php?qa=105&qa_1=inverse-map-computational-speed&show=106#a106
<p>Hello Hassan,</p><p>yes, computing the inverse of a map is in general more expensive than evaluating the map. The evaluation of a generic non-linear map \(S\) is given by:</p><p>$$ S({\bf x})=\left[\begin{array}{l}<br>S_1(x_1)\\<br>S_2(x_1, x_2)\\<br>\;\vdots\\<br>S_d(x_1,\ldots,x_d)<br>\end{array}\right]$$</p><p>The inverse instead requires \(d\) rootfindings to be computed, one for each component \(S_i\):</p><p>$$ S^{-1}({\bf y})=\left[\begin{array}{l}<br>S_1^{-1}(y_1)\\<br>S_2^{-1}(S_1^{-1}(y_1), \cdot)(y_2)\\<br>\;\vdots\\<br>S_d^{-1}(S_1^{-1}(y_1),\ldots,\cdot)(y_d)<br>\end{array}\right]$$</p><p>where each rootfinding is done along the last dimension (you can see the monotonicity constraint coming in handy when doing these rootfindings). The class InverseTransportMap is just a wrapper around \(S\), so it wouldn't help.</p><p>In order to speed up the computation of the inverse there are two available routes:</p><ol><li>Use MPI: the computation can be performed for each point separately. You can take a look at the <a rel="nofollow" href="https://transportmaps.mit.edu/docs/mpi-usage.html">MPI section</a> of the tutorial.</li><li>The alternative (if you really have to use this inverse again and again) is to solve the regression problem<br>$$ T = \arg\min_{T} \Vert T({\bf x}) - S^{-1}({\bf x}) \Vert_{\mathcal{N}(0,{\bf I})} $$<br>You can look at the <a rel="nofollow" href="https://transportmaps.mit.edu/docs/api-TransportMaps-Maps.html#TransportMaps.Maps.TransportMap.regression">API documentation of the regression function</a>.<br>Once the map \(T\) is found, you can just evaluate it on all your points, and that should be way faster.</li></ol><p>Let me know if you have any additional doubts and I can write down a couple of working examples.</p>usagehttps://transportmaps.mit.edu/qa/index.php?qa=105&qa_1=inverse-map-computational-speed&show=106#a106Thu, 07 Mar 2019 02:08:05 +0000
Answered: I got an error "'NoneType' object has no attribute 'reset_counters'"
https://transportmaps.mit.edu/qa/index.php?qa=99&qa_1=got-error-nonetype-object-has-no-attribute-reset_counters&show=100#a100
<p>Hi, sorry for the long delay. Let me try to reproduce the problem. Are you working on an example provided with the software or on your own?</p><p>I just ran, step by step, the code provided <a rel="nofollow" href="https://transportmaps.mit.edu/docs/example-sequential-stocvol-6d.html">here</a> with TransportMaps version 2.0b2, and it works also with missing observations.</p><p>Please let me know which version you are using, and provide a minimum working example.</p><p>Thanks.</p>usagehttps://transportmaps.mit.edu/qa/index.php?qa=99&qa_1=got-error-nonetype-object-has-no-attribute-reset_counters&show=100#a100Sat, 09 Feb 2019 13:24:27 +0000
Answered: import error and installation of scikit-sparse
https://transportmaps.mit.edu/qa/index.php?qa=85&qa_1=import-error-and-installation-of-scikit-sparse&show=87#a87
<p>Hi Hassan, </p><p>Thank you very much for your question! </p><p>To follow up on the previous suggestion, I think you may be missing the sparse Cholesky factorization library, CHOLMOD. The library is used by scikit-sparse, and I believe it does not come with the conda-forge package. The easiest approach may be to download the <a rel="nofollow" href="http://faculty.cse.tamu.edu/davis/suitesparse.html">suitesparse package</a>, which contains several sparse matrix methods, including CHOLMOD, and then to reinstall scikit-sparse. I've tested it on macOS Mojave with Anaconda Python 3.6 and it seems to be working.</p><p>Please let us know if this fixes the import error on your end, and if not we are glad to investigate further!</p>installationhttps://transportmaps.mit.edu/qa/index.php?qa=85&qa_1=import-error-and-installation-of-scikit-sparse&show=87#a87Sat, 10 Nov 2018 23:31:01 +0000
Answered: Is the algorithm in 'Transport map accelerated Markov chain Monte Carlo' included?
https://transportmaps.mit.edu/qa/index.php?qa=54&qa_1=algorithm-transport-accelerated-markov-chain-carlo-included&show=58#a58
<p>The transport-preconditioned MCMC can be coded using TransportMaps. There is no tutorial page for that yet, but the idea is to construct (inverse) maps from MCMC samples and then precondition new chains. The package TransportMaps provides very few Markov chain samplers (standard ones like <a rel="nofollow" href="https://transportmaps.mit.edu/docs/api-TransportMaps-Samplers.html?highlight=metropolis">Metropolis-Hastings and some Gibbs variations</a>). More samplers are available through <a rel="nofollow" href="http://muq.mit.edu/">MUQ</a>, which also implements the construction of transport maps from samples (though only polynomial ones, which don't enforce monotonicity everywhere). I will push to include a tutorial on this technique once I'm back.</p>usagehttps://transportmaps.mit.edu/qa/index.php?qa=54&qa_1=algorithm-transport-accelerated-markov-chain-carlo-included&show=58#a58Wed, 08 Aug 2018 09:54:32 +0000
Answered: I got an error "NameError: name 'mpl' is not defined" when I ran the "Diagnostics and unbiased sampling" in tutorial
https://transportmaps.mit.edu/qa/index.php?qa=48&qa_1=nameerror-defined-diagnostics-unbiased-sampling-tutorial&show=51#a51
<p>The error should be fixed in version 2.0b2. Just do</p><pre><code class="language-bash">pip install --upgrade TransportMaps</code></pre><p>to upgrade to the new version.</p>usagehttps://transportmaps.mit.edu/qa/index.php?qa=48&qa_1=nameerror-defined-diagnostics-unbiased-sampling-tutorial&show=51#a51Thu, 02 Aug 2018 22:22:30 +0000
Answered: I got an error "log_pdf() got an unexpected keyword argument 'cache'" when I ran the tutorial
https://transportmaps.mit.edu/qa/index.php?qa=44&qa_1=error-log_pdf-unexpected-keyword-argument-cache-tutorial&show=46#a46
<p>It appears to be a leftover bug in the documentation. Thanks for pointing it out.</p><p>The functions log_pdf, grad_x_log_pdf and hess_x_log_pdf can now take additional arguments, and the Gumbel example provided in the tutorial still uses the old interface. The correct code for the definition of the GumbelDistribution is as follows.</p><pre><code class="language-python">class GumbelDistribution(DIST.Distribution):
    def __init__(self, mu, beta):
        super(GumbelDistribution, self).__init__(1)
        self.mu = mu
        self.beta = beta
        self.dist = stats.gumbel_r(loc=mu, scale=beta)
    def pdf(self, x, params=None, *args, **kwargs):
        return self.dist.pdf(x).flatten()
    def log_pdf(self, x, params=None, *args, **kwargs):
        return self.dist.logpdf(x).flatten()
    def grad_x_log_pdf(self, x, params=None, *args, **kwargs):
        m = self.mu
        b = self.beta
        z = (x-m)/b
        return (np.exp(-z) - 1.)/b
    def hess_x_log_pdf(self, x, params=None, *args, **kwargs):
        m = self.mu
        b = self.beta
        z = (x-m)/b
        return (-np.exp(-z)/b**2.)[:,:,np.newaxis]</code></pre><p>I'm in the process of updating the tutorial anywhere this bug appears.</p><p>Thanks again,</p><p> Daniele</p>usagehttps://transportmaps.mit.edu/qa/index.php?qa=44&qa_1=error-log_pdf-unexpected-keyword-argument-cache-tutorial&show=46#a46Wed, 01 Aug 2018 13:35:22 +0000
Answered: Composition of maps and order adaptativity
https://transportmaps.mit.edu/qa/index.php?qa=36&qa_1=composition-of-maps-and-order-adaptativity&show=37#a37
<p>Hi Paul,</p><p>I don't have personal experience with the approach described in [TM1] "Bayesian inference with optimal maps", but here are my insights with regard to map composition and tempering. In the following I assume that you have whitened the posterior, so that the prior distribution is a Standard Normal and tempering is applied to the whitened likelihood (either by noise covariance progression, data injection or forward accuracy).</p><ul><li>I would say that the progression of the noise covariance should be slow enough that at each step the map is able to capture most of the relation between the reference (Standard Normal) and the tempered posterior. You should measure this using the variance diagnostic.</li><li>Regarding computational cost: if the maps are simple enough (though at least order two or three for accuracy), then the approach can result in something more efficient than directly computing a single high-order transport map. You can think of the final map as a multi-layer neural network, with each layer corresponding to a map in the composition. This can help explain the expressivity of the representation. A further upside of using compositions of monotone triangular maps is that the transformation is invertible (a property sometimes needed).</li><li>The composition of maps should take the prior/reference \(\nu_\rho\) to the posterior \(\nu_\pi\) progressively, but it will in general not be the "optimal composition", due to the inherently sequential construction. One could correct the maps in a final sweep to improve them, e.g. solving the \(n\) problems:<br>\[ T_i = \arg\min_{T\in \mathcal{T}_>} \mathcal{D}_{\text{KL}}\left( (T_1\circ \cdots \circ T_{i-1})_\sharp \nu_\rho \middle\Vert T^\sharp (T_{i+1}\circ \cdots \circ T_n)^\sharp \nu_\pi \right) \]<br>warm-started at the \(T_i\) already learned through the tempering procedure.</li></ul><div>Can you clarify what you mean by "first map to get low-rank coupling"? Do you mean finding the rotation such that the subsequent maps become the identity almost everywhere (as in likelihood-informed subspaces or active subspaces)?</div><div> </div><div>Also, what do you mean by "unscaled"? Usually the scaling problem arises when one tries to find <a target="_blank" rel="nofollow" href="https://transportmaps.mit.edu/docs/examples-transport-from-samples.html">maps from samples</a>, not when constructing <a target="_blank" rel="nofollow" href="https://transportmaps.mit.edu/docs/examples-direct-transport.html">maps from densities</a>. So using a linear map at the beginning is not strictly necessary, but it shouldn't hurt either.</div><div> </div><div>I hope to have addressed some of your questions.</div><div>I will try to gather more insights from my colleagues in the coming days.</div><div> </div><div>Best,</div><div> Daniele</div>theoryhttps://transportmaps.mit.edu/qa/index.php?qa=36&qa_1=composition-of-maps-and-order-adaptativity&show=37#a37Fri, 22 Jun 2018 10:02:17 +0000
Answered: I got an error "NameError: free variable 'gfk' referenced before assignment in enclosing scope"
https://transportmaps.mit.edu/qa/index.php?qa=29&qa_1=nameerror-variable-referenced-before-assignment-enclosing&show=30#a30
<p>Hi Jack, you are trying to solve the following problem, right?</p><p>\[ \nu_{Z_0}=\mathcal{N}(0, Q_0) \;, \quad \nu_w = \mathcal{N}(0,Q) \;,\quad \nu_v=\mathcal{N}(0,R) \]</p><p>\[ Z_{t+1} = F Z_{t} + \varepsilon \;, \quad \varepsilon \sim \nu_{w} \]</p><p>\[ Y_t = \sqrt{Z_{2,t}^2 + Z_{0,t}^2} + \delta \;, \quad \delta\sim\nu_v \]</p><p>I just tried to run your code and I ran into a different error, namely</p><pre><code class="language-bash">Traceback (most recent call last):
  File "script.py", line 68, in <module>
    lap = TM.laplace_approximation(pi)
  File "/home/dabi/VC-Projects/Software/Mine/Current/transportmaps-private/TransportMaps/Routines.py", line 1664, in laplace_approximation
    x0 = pi.prior.rvs(1).flatten()
  File "/home/dabi/VC-Projects/Software/Mine/Current/transportmaps-private/TransportMaps/Distributions/DistributionBase.py", line 68, in rvs
    raise NotImplementedError("The method is not implemented for this distribution")
NotImplementedError: The method is not implemented for this distribution</code></pre><p>This means that the random sampler for the prior distribution \( \nu_{Z_0} \) is not implemented (now that I know about it, I will implement it soon). Since it is not implemented, you need to provide a starting point for the optimization used to find the MAP point for the Laplace approximation. For example, if you wanted to use the generating states as a starting point, you would do</p><pre><code class="language-python">lap = TM.laplace_approximation(pi, x0=Z)</code></pre><p>This way the code runs on my machine. Let me know whether you are still having the same problem and I can dig more into it.</p><p>Best,</p><p> Daniele</p><p>PS: Also, if you want to check whether the optimization worked fine, you can lower the logging level to the INFO level (20) or to the DEBUG level (10):</p><pre><code class="language-python">TM.logger.setLevel(20)</code></pre>usagehttps://transportmaps.mit.edu/qa/index.php?qa=29&qa_1=nameerror-variable-referenced-before-assignment-enclosing&show=30#a30Fri, 06 Apr 2018 20:16:32 +0000
Answered: Can't use a pulled distribution since the v1.1b0
https://transportmaps.mit.edu/qa/index.php?qa=25&qa_1=cant-use-a-pulled-distribution-since-the-v1-1b0&show=26#a26
Hi Paul,<br />
<br />
yes, I'm aware of the bug in version 1.1b0. Try to update to 1.1b2<br />
<br />
https://pypi.python.org/pypi/TransportMaps/1.1b2<br />
<br />
I ran the code on this version and it was working OK.<br />
<br />
Best,<br />
<br />
Daniele
usagehttps://transportmaps.mit.edu/qa/index.php?qa=25&qa_1=cant-use-a-pulled-distribution-since-the-v1-1b0&show=26#a26Thu, 15 Mar 2018 15:39:11 +0000
Answered: I got an error "undefined symbol: _gfortran_stop_numeric_f08" when I import orthpol_light in Anaconda.
https://transportmaps.mit.edu/qa/index.php?qa=18&qa_1=undefined-_gfortran_stop_numeric_f08-orthpol_light-anaconda&show=20#a20
<p>I was able to reproduce the error. Anaconda uses python3.6 by default. For some reason, compiling <span style="font-family:Courier New,Courier,monospace">orthpol_light</span> against the libgfortran provided by anaconda raises that undefined symbol error.</p><p>A workaround is to use python3.5 (which is the version supported by Ubuntu 16.04). To do so you can create a new environment in conda (which is anyway advised in order to keep things tidy):</p><pre><code class="language-bash">$ conda create -n tm python=3.5 numpy scipy</code></pre><p>then activate the newly created environment</p><pre><code class="language-bash">$ source activate tm</code></pre><p>and install orthpol_light through pip</p><pre><code class="language-bash">$ pip install orthpol_light</code></pre><p>I will dig into whether the problem with python3.6 is related only to anaconda or is more general.</p>installationhttps://transportmaps.mit.edu/qa/index.php?qa=18&qa_1=undefined-_gfortran_stop_numeric_f08-orthpol_light-anaconda&show=20#a20Tue, 05 Dec 2017 02:46:53 +0000
Answered: Why I can't build TM from non standard normal distribution to Gumbel distribution ?
https://transportmaps.mit.edu/qa/index.php?qa=5&qa_1=cant-build-standard-normal-distribution-gumbel-distribution&show=7#a7
<p>Hi Paul, Youssef's comment is spot on. Here is the code implementing his suggestion:</p><pre><code class="language-python">import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import TransportMaps as TM
import TransportMaps.Functionals as FUNC
import TransportMaps.Maps as MAPS
import TransportMaps.Distributions as DIST

class GumbelDistribution(DIST.Distribution):
    def __init__(self, mu, beta):
        super(GumbelDistribution, self).__init__(1)
        self.mu = mu
        self.beta = beta
        self.dist = stats.gumbel_r(loc=mu, scale=beta)
    def pdf(self, x, params=None):
        return self.dist.pdf(x).flatten()
    def log_pdf(self, x, params=None):
        return self.dist.logpdf(x).flatten()
    def grad_x_log_pdf(self, x, params=None):
        m = self.mu
        b = self.beta
        z = (x-m)/b
        return (np.exp(-z) - 1.)/b
    def hess_x_log_pdf(self, x, params=None):
        m = self.mu
        b = self.beta
        z = (x-m)/b
        return (-np.exp(-z)/b**2.)[:,:,np.newaxis]
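
# An optional sanity check one could add at this point (not part of the
# original answer; the step 1e-6 and tolerance 1e-5 are illustrative choices):
# verify grad_x_log_pdf against a central finite difference of log_pdf.
_check = GumbelDistribution(-63., 3.)
_xt = np.array([[-60.]])
_h = 1e-6
_fd = (_check.log_pdf(_xt + _h) - _check.log_pdf(_xt - _h)) / (2.*_h)
assert np.allclose(_check.grad_x_log_pdf(_xt).flatten(), _fd, atol=1e-5)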

# Target: a 1d Gumbel distribution
mu = -63.
beta = 3.
pi = GumbelDistribution(mu, beta)
M = np.array([[-50],[-70],[-60]])
xnew = np.linspace(-70, -50, 201)[:,np.newaxis]

mu = np.array([-60])
sigma = np.array([[8]])
rho = DIST.StandardNormalDistribution(1)
# Linear map
L = MAPS.LinearTransportMap(mu, sigma)
# New target (pullback of pi through L)
pull_L_pi = DIST.PullBackTransportMapDistribution(L, pi)

x = np.linspace(-70., -50., 100).reshape((100,1))
order = 7
T = TM.Default_IsotropicIntegratedSquaredTriangularTransportMap(
    1, order, 'full')
push_rho = DIST.PushForwardTransportMapDistribution(T, rho)

qtype = 3       # Gauss quadrature
qparams = [20]  # Quadrature order
reg = None      # No regularization
tol = 1e-6      # Optimization tolerance
ders = 2        # Use gradient and Hessian

# Solve D_KL(T_\sharp \rho || L^\sharp \pi)
log = push_rho.minimize_kl_divergence(
    pull_L_pi, qtype=qtype, qparams=qparams, regularization=reg,
    tol=tol, ders=ders)

# Obtain \pi \approx L_\sharp T_\sharp \rho
push_LT_rho = DIST.PushForwardTransportMapDistribution(L, push_rho)

plt.figure()
plt.plot(x, push_LT_rho.pdf(x), 'b', label=r'pushrho');
plt.plot(x, pi.pdf(x), 'g', label=r'π');
plt.legend();
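
# An optional quantitative check one could add here (reusing only calls
# already made above): maximum pointwise error of the approximation on the grid.
max_err = np.max(np.abs(push_LT_rho.pdf(x) - pi.pdf(x)))
print("Max pointwise pdf error on the grid: %e" % max_err)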
</code></pre><p>To learn more about composition of maps, see <a target="_blank" rel="nofollow" href="http://transportmaps.mit.edu/docs/example-beta-1d.html">this</a> page of the tutorial.</p><p>Best,</p><p> Daniele</p>theoryhttps://transportmaps.mit.edu/qa/index.php?qa=5&qa_1=cant-build-standard-normal-distribution-gumbel-distribution&show=7#a7Fri, 03 Nov 2017 15:26:16 +0000
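The componentwise inversion described in the "Inverse map computational speed" answer above can be illustrated with a small self-contained sketch that does not use the TransportMaps API (the map `S` below and its bracketing interval are illustrative assumptions, not part of the library): each component of a monotone lower-triangular map is inverted with one scalar rootfinding, feeding the already-recovered coordinates into the next component.

```python
import numpy as np
from scipy.optimize import brentq

def S(x):
    """A hand-written monotone lower-triangular map on R^2.

    Each component is strictly increasing in its last argument,
    so every scalar inversion below has a unique root.
    """
    x1, x2 = x
    return np.array([x1 + x1**3,
                     x1**2 + x2 + x2**3])

def S_inv(y, bracket=50.0):
    """Invert S componentwise: d scalar rootfindings, one per component."""
    y1, y2 = y
    # First component: solve S_1(x1) = y1 for x1.
    x1 = brentq(lambda t: t + t**3 - y1, -bracket, bracket)
    # Second component: plug the recovered x1 in, solve S_2(x1, x2) = y2 for x2.
    x2 = brentq(lambda t: x1**2 + t + t**3 - y2, -bracket, bracket)
    return np.array([x1, x2])

x = np.array([0.5, -1.0])
print(np.allclose(S_inv(S(x)), x))  # True: the roundtrip recovers x
```

As the answer notes, monotonicity in the last variable is what makes each one-dimensional rootfinding well posed; for many evaluation points, the two speed-up routes mentioned above (MPI across points, or a regression surrogate for the inverse) apply unchanged.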