
Inverse map computational speed


I am using TransportMaps (TM) for density estimation: I have 1M data points with 2 features and use TM to compute the map S that takes my distribution to the standard normal distribution. Computing S takes a reasonable amount of time, as shown in the log below. Now I also need to compute the inverse map S^{-1} for large data sets (~10M), but when I use the method S.inverse() to map those values back to my original space it takes far longer, in fact about 100 times longer. I have also tried creating another object T = TransportMaps.Maps.TransportMapBase.InverseTransportMap(S) and using T instead, but it takes similarly long. Below I share a log for computing S from 1M points and then computing the inverse map for about 1M points.

My question is whether I am missing something, e.g. a way in TM to compute inverse maps much faster. If not, I am thinking of using something like k-nearest-neighbor regression for the inverse map; any tips or thoughts on that?

Best,

Hassan

running log (OS: Fedora release 28, 32 cores and 128G memory; MPI was not used):

Polynomial order: 4
Number of coefficients: 20
==========================
2019-03-04 20:41:38 INFO: TM.MonotonicIntegratedSquaredApproximation: Optimization terminated successfully
2019-03-04 20:41:38 INFO: TM.MonotonicIntegratedSquaredApproximation:   Function value:          1.412383
2019-03-04 20:41:38 INFO: TM.MonotonicIntegratedSquaredApproximation:   Norm of the Jacobian:    0.000000
2019-03-04 20:41:38 INFO: TM.MonotonicIntegratedSquaredApproximation:   Number of iterations:         5
2019-03-04 20:41:38 INFO: TM.MonotonicIntegratedSquaredApproximation:   N. function evaluations:      6
2019-03-04 20:41:38 INFO: TM.MonotonicIntegratedSquaredApproximation:   N. Jacobian evaluations:     10
2019-03-04 20:41:38 INFO: TM.MonotonicIntegratedSquaredApproximation:   N. Hessian evaluations:       5
2019-03-04 20:45:04 INFO: TM.MonotonicIntegratedSquaredApproximation: Optimization terminated successfully
2019-03-04 20:45:04 INFO: TM.MonotonicIntegratedSquaredApproximation:   Function value:          1.009555
2019-03-04 20:45:04 INFO: TM.MonotonicIntegratedSquaredApproximation:   Norm of the Jacobian:    0.000000
2019-03-04 20:45:04 INFO: TM.MonotonicIntegratedSquaredApproximation:   Number of iterations:        14
2019-03-04 20:45:04 INFO: TM.MonotonicIntegratedSquaredApproximation:   N. function evaluations:     16
2019-03-04 20:45:04 INFO: TM.MonotonicIntegratedSquaredApproximation:   N. Jacobian evaluations:     29
2019-03-04 20:45:04 INFO: TM.MonotonicIntegratedSquaredApproximation:   N. Hessian evaluations:      14

......
......

Computing inverse transport map for 1000001 points took 27807.015720441937 seconds
asked Mar 5, 2019 in usage by HassanAr (13 points)

1 Answer

Best answer

Hello Hassan,

Yes, computing the inverse of a map is in general more expensive than evaluating the map. The evaluation of a generic non-linear (triangular) map \(S\) is given by:

$$ S({\bf x})=\left[\begin{array}{l}
S_1(x_1)\\
S_2(x_1, x_2)\\
\;\vdots\\
S_d(x_1,\ldots,x_d)
\end{array}\right]$$

The inverse instead requires \(d\) root-finding problems to be solved, one for each component \(S_i\):

$$ S^{-1}({\bf y})=\left[\begin{array}{l}
S_1^{-1}(y_1)\\
S_2^{-1}(S_1^{-1}(y_1), \cdot)(y_2)\\
\;\vdots\\
S_d^{-1}(S_1^{-1}(y_1),\ldots,\cdot)(y_d)
\end{array}\right]$$

where each root-finding is done along the last dimension (you can see the monotonicity constraint coming in handy for these root-findings). The class InverseTransportMap is just a wrapper around \(S\), so it wouldn't help.
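To make this concrete, here is a minimal self-contained sketch (plain NumPy/SciPy on a toy monotone map, not the TransportMaps API) of inverting a 2-D triangular map by two sequential 1-D root-findings; the bracket [-10, 10] is an arbitrary choice for the example:

import numpy as np
from scipy.optimize import brentq

# Toy monotone triangular map on R^2: each component is increasing in its last argument
def S_eval(x):
    x1, x2 = x
    return np.array([x1 + x1**3,
                     np.sin(x1) + x2 + 0.1 * x2**3])

def S_inverse(y, xtol=1e-12):
    # Invert component by component: each step is a 1-D root-finding along
    # the last variable, with the previously recovered coordinates fixed
    x1 = brentq(lambda t: S_eval([t, 0.0])[0] - y[0], -10.0, 10.0, xtol=xtol)
    x2 = brentq(lambda t: S_eval([x1, t])[1] - y[1], -10.0, 10.0, xtol=xtol)
    return np.array([x1, x2])

y = S_eval([0.3, -0.7])
print(S_inverse(y))  # recovers approximately [0.3, -0.7]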

In order to speed up the computation of the inverse there are two available routes:

  1. Use MPI: the inversion of each point is independent of the others, so the computation parallelizes well across points. You can take a look at the MPI section of the tutorial.
  2. The alternative (if you really have to use this inverse again and again) is to solve the regression problem
    $$ T = \arg\min_{T} \Vert T({\bf x}) - S^{-1}({\bf x}) \Vert_{\mathcal{N}(0,{\bf I})} $$
    You can look for the regression function in the API documentation.
    Once the map \(T\) is found, you can just evaluate it on all your points, which should be way faster (a sketch follows below).
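For route 2, a rough sketch of what the call could look like, mirroring the syntax used later in this thread; the map order and quadrature parameters are placeholder choices, pi stands for the reference distribution, y_points is a placeholder array, and the exact semantics of the arguments should be checked against the API documentation:

F = TM.Default_IsotropicIntegratedSquaredTriangularTransportMap(2, 5, 'total')  # candidate 2-D map, total-degree order 5
log = F.regression(S, d=pi, qtype=3, qparams=[10, 10], maxit=1000)  # solve the regression problem above
x_approx = F(y_points)  # once fitted, evaluating F is far cheaper than calling S.inverse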

Let me know if you have any additional doubts and I can write down a couple of working examples.

answered Mar 6, 2019 by dabi (307 points)
selected Mar 12, 2019 by HassanAr

Another way to accelerate things is to relax the tolerances of the root-finding functions (if you can live with giving up some digits of precision). There is no easy way to do this right now (I didn't think of exposing those parameters, but it could be a good idea...).

What you would have to do is:

  1. Write your own inverse function following what is done in the TriangularTransportMap implementation, namely inverting one component at a time.
  2. For each a.inverse call you see, pass the extra arguments xtol and rtol (see the arguments taken by the inverse function of MonotonicFunctionApproximation). A sketch follows below.
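A rough sketch of such a custom inverse, assuming (hypothetically) that the triangular map exposes its component approximations as S.approx_list and that each component's inverse accepts the conditioning coordinates together with xtol and rtol; the exact attribute names and call signatures should be checked against the TriangularTransportMap source:

import numpy as np

def loose_inverse(S, Y, xtol=1e-4, rtol=1e-4):
    # Y: (n, d) array of points to pull back; returns X such that S(X) ~= Y
    n, d = Y.shape
    X = np.zeros((n, d))
    for k, a in enumerate(S.approx_list):  # invert one component at a time
        # 1-D root-finding along the k-th coordinate, conditioning on the
        # already-recovered coordinates X[:, :k], with relaxed tolerances
        # (hypothetical signature; see MonotonicFunctionApproximation.inverse)
        X[:, k] = a.inverse(X[:, :k], Y[:, k], xtol=xtol, rtol=rtol)
    return X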

Hi Dabi,

Thanks for the explanation and suggestions!

I tried relaxing the tolerances that are passed into inverse() in MonotonicFunctionApproximations.py, but that does not give me the speed-up that I want: my testing with 10000 samples shows that loosening xtol and rtol by a factor of 10^6 reduces the computation time by only 25%.

The more attractive option is the regression. I have tried scikit-learn regressors, which are OK, but I am not sure about the usage syntax of regression() in TransportMaps. It is not an attribute of the transport map S (that I have optimized above), because S is a composite of FrozenLinearDiagonalMap and Default_IsotropicIntegratedSquaredTriangularTransportMap. Note that I am computing S for density estimation, that is, I am optimizing S so that it maps my data points to the standard normal distribution (distribution 'rho' in the code below).

So I tried creating a new map, say F, and then running its regression() method. If my understanding is correct, running F.regression(S, ...) would optimize F so that \(F \approx S^{-1}\). Something like this:

F = TM.Default_IsotropicIntegratedSquaredTriangularTransportMap(2, 7, 'total')  # 2-D map, total-degree order 7
F.regression(S, d=pi, qtype=0, qparams=10000)  # qtype=0: Monte Carlo quadrature with 10000 samples

But it fails regardless of the order I choose for F:

2019-03-08 17:49:27 WARNING: TM.MonotonicIntegratedSquaredApproximation: Regression failure: Warning: Maximum number of iterations has been exceeded.
Out[66]:
[     fun: 1.1961881629991252e-05
      jac: array([ 8.75172327e-09, -4.75322922e-07, -3.00859841e-07,  3.92958646e-07,
         2.14541558e-07, -4.81239400e-07, -1.87358454e-07,  5.38003228e-07])
  message: 'Optimization terminated successfully.'
     nfev: 5
     nhev: 8
      nit: 4
     njev: 8
   status: 0
  success: True
        x: array([-3.48537190e-02,  1.01375345e+00,  8.68793114e-03, -7.18648138e-04,
        -6.14610519e-03,  8.80097391e-04,  5.32511666e-03, -9.83908107e-04]),
      fun: 0.0006638339539564431
      jac: array([ 1.90022453e-06,  2.73937293e-07,  2.16901301e-07,  1.78346944e-06,
        -1.43655917e-07, -2.48060167e-07, -1.73040333e-07, -9.14702992e-07,
        -4.13594286e-07, -1.13394772e-06,  5.93691193e-07,  8.00605762e-07,
         5.89656228e-07, -4.12815757e-09, -8.21406444e-07,  2.61063530e-09,
         5.06448574e-07,  3.58184664e-07,  1.26754452e-06,  5.73870297e-09,
         3.37571407e-09, -5.19205225e-07,  5.84908188e-07, -3.19645960e-09,
        -1.31521671e-06,  7.57141189e-07, -3.19693671e-09, -4.21075932e-07,
        -5.12217092e-07, -2.97314593e-09,  9.26045256e-07, -8.31352439e-07,
        -7.16223428e-07,  9.39239258e-07, -6.40621275e-09, -4.69282684e-09])
  message: 'Warning: Maximum number of iterations has been exceeded.'
     nfev: 102
     nhev: 299
      nit: 100
     njev: 201
   status: 1
  success: False
        x: array([-4.93248132e+00, -3.40419280e+00, -2.80404418e+00, -5.89418871e+00,
        -1.60478450e-01,  3.14334393e+00,  2.28341036e+00,  3.29807034e+00,
         5.84773238e+00,  1.60351738e+01, -2.30804474e-01, -1.13217743e+01,
        -2.17947307e-01, -2.19994803e+00,  3.42905217e-01, -1.56575739e-02,
        -7.16097576e+00, -5.06389764e+00, -1.79253547e+01,  3.11273681e+00,
         1.79634020e+00,  2.18439951e-01, -8.27057470e+00,  1.91733436e-02,
         1.46399323e+00, -3.39846521e-01,  1.91746656e-02,  1.69263224e-01,
         1.96172589e-01, -1.55490609e+00, -1.30945011e+01,  2.96098368e-01,
         1.01279233e+01, -3.62025217e-01, -3.47997091e+00, -2.54166652e+00])]

Now I am not sure whether I am using this incorrectly, or whether the math simply says that approximating the inverse of polynomials with F-type maps is a no-go.

Hi Hassan, I would increase the maximum number of iterations for the regression, or relax the tolerances. It looks like it stops at 100 iterations while the gradient is pretty small (as is the function value), so I guess the optimizer is almost done.

Also, in 2 dimensions I would go for Gauss quadrature, i.e. calling something like this:

F.regression(S, d=pi, qtype=3, qparams=[10,10], maxit=1000)  # qtype=3: Gauss quadrature of order 10 in each dimension

Let me know if this doesn't solve the problem and I will look into it further.

Hey Dabi,

I have check-marked your answer since it gave me a good understanding of how TM computes the inverse map and of the various options to speed it up. Thanks!

About the TM regression method: I tried your suggestion. Indeed, the optimization problem converges with a higher maxit, and using Gauss quadrature really speeds it up. However, the final regressed map appears to be overfit and maps the data to wildly off values. Here is an example:

F = TM.Default_IsotropicIntegratedSquaredTriangularTransportMap(2, 8, 'total')
log_opt = F.regression(S, d=rho, qtype=3, qparams=[10,10], maxit=1000, tol=.0001)

print(log_opt)

# comparison to the direct inverse on 10 Monte Carlo test points
ytest, w_ = rho.quadrature(0, 10)
print('direct inverse:')
print(str(S.inverse(ytest)))
print('regressed inverse')
print(str(F(ytest)))


OUTPUT:
[     fun: 0.0001210223457066231
      jac: array([-5.31952241e-05,  3.57561192e-02,  1.52204030e-02, -1.42395374e-04,
        -1.32378090e-04, -2.17599374e-05,  1.81187299e-03,  6.34074482e-06,
         1.84036808e-04])
  message: 'Optimization terminated successfully.'
     nfev: 17
     nhev: 39
      nit: 12
     njev: 28
   status: 0
  success: True
        x: array([-0.03240988,  6.5646004 ,  1.30135379, -0.00801603,  1.11145911,
        -0.01313375,  0.69350859, -0.03486978,  0.41362886]),
      fun: 1.580567596271701e+21
      jac: array([ 1.10631838e+08, -1.21778892e+08,  1.79419142e+07,  5.78275758e+07,
         2.73629540e+08,  3.83568895e+08, -4.96095942e+07, -6.08678104e+07,
         4.33737927e+08, -2.06124836e+09,  1.83683770e+10,  2.32456862e+10,
         3.29261748e+08, -1.62948821e+08,  3.98306486e+10, -5.70868639e+08,
         3.80685692e+08,  5.39078141e+07,  7.65910683e+10,  7.06392405e+07,
        -5.12928453e+09,  1.39426261e+09,  6.94277129e+10, -2.36900710e+08,
         1.72783564e+08,  2.61984104e+08,  3.54404192e+10,  2.50905713e+09,
         1.54879620e+08,  1.49368662e+10,  6.17278938e+10, -2.24363161e+08,
        -3.38268557e+09,  3.21037429e+08,  2.00679333e+08,  4.27836854e+09,
         6.88187343e+09,  1.15290825e+11,  2.52385150e+10,  1.07236560e+10,
         4.72916612e+10, -9.67461034e+07, -8.85094335e+08,  1.83451230e+06,
         3.50656394e+07])
  message: 'Optimization terminated successfully.'
     nfev: 147
     nhev: 2943
      nit: 86
     njev: 230
   status: 0
  success: True
        x: array([-9.49735423e+07,  1.31372309e+08, -1.21167959e+07, -5.98574098e+07,
        -2.40473525e+08, -3.07773110e+08,  4.88369400e+07,  6.27035028e+07,
        -3.53025584e+08,  9.80987068e+04, -6.27358053e+04, -1.62609362e+05,
         1.69010481e+06, -9.64783249e+02,  9.43145520e+04,  5.35655108e+06,
        -1.24621503e+05,  3.89018768e+03,  2.16503722e+05, -7.50524565e+02,
         2.24483243e+05,  3.84788590e+05, -5.50786290e+04, -1.47110127e+04,
         5.56065737e+02,  6.13492257e+02, -7.38261195e+04,  5.94662981e+05,
         2.97200499e+02, -3.19761376e+06, -2.78300273e+04, -2.51769012e+03,
        -1.33202418e+04,  3.72665652e+06, -3.83662435e+00, -1.51214307e+05,
         7.56563436e+06,  3.26940859e+04, -8.59584662e+04,  1.28434698e+05,
         3.32784703e+05, -1.37855940e+04, -1.39003340e+05, -1.38494764e+02,
        -4.69202885e+03])]



direct inverse:
[[ 6.05466762e-02 -7.54333691e-03]
 [ 3.87743488e-02 -9.85809003e-03]
 [-5.94600312e-02 -2.04010446e-02]
 [-2.91511506e-02  5.54453966e-02]
 [ 3.35553031e-02 -4.78581561e-02]
 [-4.87278503e-02 -1.58625780e-02]
 [ 4.92988601e-02 -1.43207271e-02]
 [-1.74338836e-02 -1.46619238e+06]
 [-2.76217843e-02  6.79868555e-02]
 [-8.29415694e-02 -2.29045018e-02]]
regressed inverse
[[ 5.88494435e+01  3.14812004e+10]
 [ 8.72100660e+00  4.48223205e+09]
 [-6.91018643e+01  6.23152582e+10]
 [-5.46005102e+00  1.54710299e+10]
 [ 2.70575699e+00 -1.80779585e+10]
 [-4.07154114e+01  2.16092712e+10]
 [ 2.98838362e+01 -2.52700442e+10]
 [-1.99069867e+00 -1.55231359e+10]
 [-4.43173991e+00  1.55826696e+10]
 [-1.32678313e+02  9.29507958e+11]]

At this point, other regression methods work well enough for me, but if I am missing something in how I am using the TM regression, let me know.

Hi Hassan, yes, it looks like it finds a minimizer, but the function value is way off (~10^21) compared to your previous case with Monte Carlo points.

I think this might be due to the fact that you are using too few quadrature points. You are trying to fit a 7th-order map with the integrated squared parametrization. This means that the maximum polynomial order in each component would be 15: the parametrization squares the polynomials (order 14) and then integrates them (order 15), which is what makes them monotone increasing. I would use a quadrature of order at least 15 to do the regression, or introduce some regularization.
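For example, keeping everything else in your snippet the same, something like the following; the order-16 Gauss quadrature is just an illustrative choice at or above the order 15 of the squared parametrization:

log_opt = F.regression(S, d=rho, qtype=3, qparams=[16, 16], maxit=1000, tol=.0001)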

...