TransportMaps.MPI

Module Contents

Classes

MPIPoolContext

SumChunkReduce

Define the summation-of-chunks operation.

TupleSumChunkReduce

Define the summation-of-chunks operation over a list of tuples.

TensorDotReduce

Define the tensordot reduce operation carried out through the mpi_map function

ExpectationReduce

Define the expectation operation carried out through the mpi_map function

AbsExpectationReduce

Define the expectation of the absolute value: \(\mathbb{E}[\vert {\bf X} \vert]\)

TupleExpectationReduce

Define the expectation operation applied to a tuple

Functions

get_mpi_pool()

Get a pool of n processors

mpi_eval(f[, scatter_tuple, bcast_tuple, ...])

Interface for the parallel evaluation of a generic function on points x

mpi_map(f[, scatter_tuple, bcast_tuple, ...])

Interface for the parallel evaluation of a generic function on points x

mpi_map_alloc_dmem(f[, scatter_tuple, bcast_tuple, ...])

Interface for the parallel evaluation of a generic function on points x

mpi_alloc_dmem([mpi_pool])

mpi_bcast_dmem([mpi_pool, reserved])

List of keyword arguments to be allocated in the distributed memory.

mpi_scatter_dmem([mpi_pool])

List of keyword arguments to be scattered in the distributed memory.

distributed_sampling(d, qtype, qparams[, mpi_pool])

TransportMaps.MPI.get_mpi_pool()[source]

Get a pool of n processors

Returns:

(mpi_map.MPI_Pool) – pool of processors

Usage example:

import numpy.random as npr
from TransportMaps import get_mpi_pool, mpi_map

class Operator(object):
    def __init__(self, a):
        self.a = a
    def sum(self, x, n=1):
        # Work on a copy so the input array is not modified in place.
        out = x.copy()
        for i in range(n):
            out += self.a
        return out

op = Operator(2.)
x = npr.randn(100,5)
n = 2

pool = get_mpi_pool()
pool.start(3)
try:
    # Evaluate op.sum on the points x in parallel over the 3 workers.
    xsum = mpi_map("sum", op, x, (n,), mpi_pool=pool)
finally:
    pool.stop()
class TransportMaps.MPI.MPIPoolContext(nprocs)[source]

Bases: object

__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
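
Usage sketch (hedged; the assumption is that __enter__ starts a pool of nprocs workers and returns it, while __exit__ stops it):

import numpy.random as npr
from TransportMaps.MPI import MPIPoolContext, mpi_map

class Operator(object):
    def __init__(self, a):
        self.a = a
    def sum(self, x, n=1):
        return x + n * self.a

op = Operator(2.)
x = npr.randn(100, 5)

# The context manager replaces the explicit pool.start(3)/pool.stop() calls
# of the get_mpi_pool example above.
with MPIPoolContext(3) as pool:
    xsum = mpi_map("sum", op, x, (2,), mpi_pool=pool)
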
TransportMaps.MPI.mpi_eval(f, scatter_tuple=None, bcast_tuple=None, dmem_key_in_list=None, dmem_arg_in_list=None, dmem_val_in_list=None, dmem_key_out_list=None, obj=None, reduce_obj=None, reduce_tuple=None, import_set=None, mpi_pool=None, splitted=False, concatenate=True)[source]

Interface for the parallel evaluation of a generic function on points x

Parameters:
  • f (object or str) – function or string identifying the function in object obj

  • scatter_tuple (tuple) – tuple containing 2 lists of [keys] and [arguments] which will be scattered to the processes.

  • bcast_tuple (tuple) – tuple containing 2 lists of [keys] and [arguments] which will be broadcasted to the processes.

  • dmem_key_in_list (list) – list of strings containing the keys to be fetched (or created with default None if missing) from the distributed memory and provided as input to f.

  • dmem_val_in_list (list) – list of objects corresponding to the keys defined in dmem_key_in_list, used when not executing in parallel

  • dmem_key_out_list (list) – list of keys to be assigned to the outputs besides the first one

  • obj (object) – object where the function f is defined

  • reduce_obj (object) – object ReduceObject defining the reduce method to be applied (if any)

  • reduce_tuple (object) – tuple containing 2 lists of [keys] and [arguments] which will be scattered to the processes to be used by reduce_obj

  • import_set (set) – set of pairs (module_name, as_field) to be imported as import module_name as as_field

  • mpi_pool (mpi_map.MPI_Pool) – pool of processors

  • splitted (bool) – whether the input to be scattered is already split

  • concatenate (bool) – whether to concatenate the output (the output of f must be a numpy.ndarray object)
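
A hedged usage sketch built from the parameter descriptions above; op is the Operator instance from the get_mpi_pool example, and the exact call convention may differ in your version of the library:

import numpy.random as npr
from TransportMaps.MPI import get_mpi_pool, mpi_eval

x = npr.randn(1000, 5)            # points, scattered row-wise across the workers

pool = get_mpi_pool()
pool.start(4)
try:
    # Scatter the rows of x, broadcast the scalar n, evaluate op.sum on each
    # chunk, and concatenate the resulting chunks into a single array.
    fx = mpi_eval("sum", obj=op,
                  scatter_tuple=(['x'], [x]),
                  bcast_tuple=(['n'], [2]),
                  mpi_pool=pool)
finally:
    pool.stop()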

TransportMaps.MPI.mpi_map(f, scatter_tuple=None, bcast_tuple=None, dmem_key_in_list=None, dmem_arg_in_list=None, dmem_val_in_list=None, obj=None, obj_val=None, reduce_obj=None, reduce_tuple=None, mpi_pool=None, splitted=False, concatenate=True)[source]

Interface for the parallel evaluation of a generic function on points x

Parameters:
  • f (object or str) – function or string identifying the function in object obj

  • scatter_tuple (tuple) – tuple containing 2 lists of [keys] and [arguments] which will be scattered to the processes.

  • bcast_tuple (tuple) – tuple containing 2 lists of [keys] and [arguments] which will be broadcasted to the processes.

  • dmem_key_in_list (list) – list of strings containing the keys to be fetched (or created with default None if missing) from the distributed memory and provided as input to f.

  • dmem_val_in_list (list) – list of objects corresponding to the keys defined in dmem_key_in_list, used when not executing in parallel

  • obj (object or str) – object where the function f is defined

  • obj_val (object) – object to be used when not executing in parallel and obj is a string

  • reduce_obj (object) – object ReduceObject defining the reduce method to be applied (if any)

  • reduce_tuple (object) – tuple containing 2 lists of [keys] and [arguments] which will be scattered to the processes to be used by reduce_obj

  • mpi_pool (mpi_map.MPI_Pool) – pool of processors

  • splitted (bool) – whether the input to be scattered is already split

  • concatenate (bool) – whether to concatenate the output (the output of f must be a numpy.ndarray object)
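
A hedged sketch combining mpi_map with a reduce object (op is again the Operator instance from the get_mpi_pool example; the reduce_tuple key 'w' is an assumption, following the same (keys, arguments) convention as scatter_tuple):

import numpy as np
import numpy.random as npr
from TransportMaps.MPI import get_mpi_pool, mpi_map, ExpectationReduce

x = npr.randn(1000, 5)
w = np.ones(1000) / 1000.         # uniform Monte Carlo weights

pool = get_mpi_pool()
pool.start(4)
try:
    # Evaluate op.sum on the scattered chunks of x and reduce the chunks into
    # the weighted expectation sum_i w_i f(x_i).
    mean = mpi_map("sum", obj=op,
                   scatter_tuple=(['x'], [x]),
                   reduce_obj=ExpectationReduce(),
                   reduce_tuple=(['w'], [w]),
                   mpi_pool=pool)
finally:
    pool.stop()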

TransportMaps.MPI.mpi_map_alloc_dmem(f, scatter_tuple=None, bcast_tuple=None, dmem_key_in_list=None, dmem_arg_in_list=None, dmem_val_in_list=None, dmem_key_out_list=None, obj=None, obj_val=None, reduce_obj=None, reduce_tuple=None, mpi_pool=None, splitted=False, concatenate=True)[source]

Interface for the parallel evaluation of a generic function on points x

Parameters:
  • f (object or str) – function or string identifying the function in object obj

  • scatter_tuple (tuple) – tuple containing 2 lists of [keys] and [arguments] which will be scattered to the processes.

  • bcast_tuple (tuple) – tuple containing 2 lists of [keys] and [arguments] which will be broadcasted to the processes.

  • dmem_key_in_list (list) – list of strings containing the keys to be fetched (or created with default None if missing) from the distributed memory and provided as input to f.

  • dmem_val_in_list (list) – list of objects corresponding to the keys defined in dmem_key_in_list, used when not executing in parallel

  • dmem_key_out_list (list) – list of keys to be assigned to the outputs besides the first one

  • obj (object) – object where the function f is defined

  • obj_val (object) – object to be used when not executing in parallel and obj is a string

  • reduce_obj (object) – object ReduceObject defining the reduce method to be applied (if any)

  • reduce_tuple (object) – tuple containing 2 lists of [keys] and [arguments] which will be scattered to the processes to be used by reduce_obj

  • mpi_pool (mpi_map.MPI_Pool) – pool of processors

  • splitted (bool) – whether the input to be scattered is already split

  • concatenate (bool) – whether to concatenate the output (the output of f must be a numpy.ndarray object)
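
A hedged sketch; the class CachingOperator and its method sum_and_cache are hypothetical (not part of the library) and stand for any function that returns a main output plus extra outputs to be kept on the workers under the keys in dmem_key_out_list:

import numpy.random as npr
from TransportMaps.MPI import get_mpi_pool, mpi_map_alloc_dmem

class CachingOperator(object):
    # Hypothetical operator whose method returns a result and a cache.
    def __init__(self, a):
        self.a = a
    def sum_and_cache(self, x):
        fx = x + self.a
        cache = {'nrows': fx.shape[0]}   # anything worth keeping on the worker
        return fx, cache

op = CachingOperator(2.)
x = npr.randn(1000, 5)

pool = get_mpi_pool()
pool.start(4)
try:
    # fx is gathered and returned, while each worker stores its own chunk of
    # cache in the distributed memory under the key 'cache', ready to be
    # reused by later calls through dmem_key_in_list=['cache'].
    fx = mpi_map_alloc_dmem("sum_and_cache", obj=op,
                            scatter_tuple=(['x'], [x]),
                            dmem_key_out_list=['cache'],
                            mpi_pool=pool)
finally:
    pool.stop()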

TransportMaps.MPI.mpi_alloc_dmem(mpi_pool=None, **kwargs)[source]
TransportMaps.MPI.mpi_bcast_dmem(mpi_pool=None, reserved=False, **kwargs)[source]

List of keyword arguments to be allocated in the distributed memory.

This executes only if an mpi_pool is provided.

Parameters:
  • mpi_pool (mpi_map.MPI_Pool) – pool of processors

  • reserved (bool) – whether the kwargs dictionary can contain the reserved key obj
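
A hedged sketch, relying on the documented possibility of passing obj to mpi_map as a string naming an object already allocated in distributed memory (the key name 'op' below is arbitrary; op and x are those of the get_mpi_pool example):

from TransportMaps.MPI import get_mpi_pool, mpi_map, mpi_bcast_dmem

pool = get_mpi_pool()
pool.start(4)
try:
    # Broadcast op once to every worker under the key 'op', instead of
    # shipping it with every call.
    mpi_bcast_dmem(op=op, mpi_pool=pool)
    # obj='op' refers to the broadcast copy; obj_val=op is the serial
    # fallback used when no pool is available.
    fx = mpi_map("sum", obj='op', obj_val=op,
                 scatter_tuple=(['x'], [x]),
                 mpi_pool=pool)
finally:
    pool.stop()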

TransportMaps.MPI.mpi_scatter_dmem(mpi_pool=None, **kwargs)[source]

List of keyword arguments to be scattered in the distributed memory.

This executes only if an mpi_pool is provided.

Parameters:

mpi_pool (mpi_map.MPI_Pool) – pool of processors
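
A hedged sketch; dmem_arg_in_list is not documented above, so the assumption here is that it lists the argument names under which the stored chunks are passed to f (op and x are those of the get_mpi_pool example):

from TransportMaps.MPI import get_mpi_pool, mpi_map, mpi_scatter_dmem

pool = get_mpi_pool()
pool.start(4)
try:
    # Scatter the rows of x once; each worker keeps its own chunk under 'x'.
    mpi_scatter_dmem(x=x, mpi_pool=pool)
    # Reuse the stored chunks instead of re-scattering x at every call;
    # dmem_val_in_list provides the value used in the serial fallback.
    fx = mpi_map("sum", obj=op,
                 dmem_key_in_list=['x'],
                 dmem_arg_in_list=['x'],
                 dmem_val_in_list=[x],
                 mpi_pool=pool)
finally:
    pool.stop()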

class TransportMaps.MPI.SumChunkReduce(axis=None)[source]

Bases: object

Define the summation-of-chunks operation.

The chunks resulting from the output of the MPI evaluation are summed along their axis.

Parameters:

axis (tuple [2]) – tuple containing the lists of axes to be used in the sum operation

inner_reduce(x, *args, **kwargs)[source]
outer_reduce(x, *args, **kwargs)[source]
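
In serial terms the outer reduce amounts to summing the per-worker chunks element-wise; a plain numpy sketch of the equivalent operation:

import numpy as np

chunks = [np.ones((10, 3)), 2. * np.ones((10, 3))]   # outputs of two workers
total = np.sum(chunks, axis=0)                       # element-wise sum, shape (10, 3)
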
class TransportMaps.MPI.TupleSumChunkReduce(axis=None)[source]

Bases: SumChunkReduce

Define the summation-of-chunks operation over a list of tuples.

The chunks resulting from the output of the MPI evaluation are summed along their axis.

Parameters:

axis (tuple [2]) – tuple containing the lists of axes to be used in the sum operation

outer_reduce(x, *args, **kwargs)[source]
class TransportMaps.MPI.TensorDotReduce(axis)[source]

Bases: object

Define the tensordot reduce operation carried out through the mpi_map function

Parameters:

axis (tuple [2]) – tuple containing the lists of axes to be used in the tensordot operation

inner_reduce(x, w)[source]
outer_reduce(x, w)[source]
class TransportMaps.MPI.ExpectationReduce[source]

Bases: TensorDotReduce

Define the expectation operation carried out through the mpi_map function
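
In serial terms the reduce computes the weighted expectation \(\sum_i w_i f({\bf x}_i)\), contracting the evaluations of f with the weights along the sample axis; a numpy sketch of the equivalent operation:

import numpy as np
import numpy.random as npr

fx = npr.randn(1000, 5)           # evaluations f(x_i), one row per sample
w = np.ones(1000) / 1000.         # quadrature / Monte Carlo weights
expectation = np.tensordot(fx, w, axes=([0], [0]))   # sum_i w_i f(x_i), shape (5,)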

class TransportMaps.MPI.AbsExpectationReduce[source]

Bases: ExpectationReduce

Define the expectation of the absolute value: \(\mathbb{E}[\vert {\bf X} \vert]\)

inner_reduce(x, w)[source]
class TransportMaps.MPI.TupleExpectationReduce[source]

Bases: ExpectationReduce

Define the expectation operation applied to a tuple

If we are given a tuple \((x_1,x_2)\), the inner reduce returns \((\langle x_1,w\rangle , \langle x_2, w\rangle)\).

Given a list of tuples \(\{(x_i,y_i)\}_{i=0}^n\), the outer reduce gives \((\sum x_i, \sum y_i)\).

inner_reduce(x, w)[source]
outer_reduce(x, w)[source]
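
A numpy sketch of the two reduces described above:

import numpy as np
import numpy.random as npr

# Inner reduce on a tuple (x1, x2) with weights w.
x1, x2 = npr.randn(1000), npr.randn(1000, 3)
w = np.ones(1000) / 1000.
inner = (np.tensordot(x1, w, axes=([0], [0])),
         np.tensordot(x2, w, axes=([0], [0])))

# Outer reduce over the per-worker tuples: element-wise sums.
tuples = [(1., np.ones(3)), (2., 2. * np.ones(3))]
outer = tuple(sum(vals) for vals in zip(*tuples))    # (3.0, array([3., 3., 3.]))
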
TransportMaps.MPI.distributed_sampling(d, qtype, qparams, mpi_pool=None)[source]
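
No description is provided for this function. A hedged usage sketch, assuming d is a TransportMaps distribution and that qtype/qparams follow the library's usual quadrature convention (qtype=0 for Monte Carlo, with qparams giving the number of samples):

from TransportMaps.Distributions import StandardNormalDistribution
from TransportMaps.MPI import get_mpi_pool, distributed_sampling

d = StandardNormalDistribution(3)

pool = get_mpi_pool()
pool.start(4)
try:
    # Draw 10000 Monte Carlo samples from d, with the sampling work spread
    # across the pool (the exact return layout is an assumption).
    out = distributed_sampling(d, 0, 10000, mpi_pool=pool)
finally:
    pool.stop()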