# LostTech.TensorFlow : API Documentation

Type VectorLaplaceDiag

Namespace tensorflow.contrib.distributions

Interfaces IVectorLaplaceDiag

The vectorization of the Laplace distribution on `R^k`.

The vector Laplace distribution is defined over `R^k`, and parameterized by a (batch of) length-`k` `loc` vector (the means) and a (batch of) `k x k` `scale` matrix: `covariance = 2 * scale @ scale.T`, where `@` denotes matrix multiplication.

#### Mathematical Details

The probability density function (pdf) is,

```none
pdf(x; loc, scale) = exp(-||y||_1) / Z,
y = inv(scale) @ (x - loc),
Z = 2**k |det(scale)|,
```

where:

* `loc` is a vector in `R^k`,
* `scale` is a linear operator in `R^{k x k}`, `cov = 2 * scale @ scale.T`,
* `Z` denotes the normalization constant, and,
* `||y||_1` denotes the `l1` norm of `y`, `sum_i |y_i|`.
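For the diagonal-`scale` case this class covers, the pdf above factors into a product of independent univariate Laplace densities. A minimal NumPy sketch of the formula (the helper name `vector_laplace_pdf` is illustrative, not part of the TensorFlow API):

```python
import numpy as np

def vector_laplace_pdf(x, loc, scale_diag):
    """Evaluate pdf(x; loc, scale) for a diagonal scale matrix.

    Illustrative sketch of the formula above, not a library function.
    """
    x = np.asarray(x, dtype=np.float64)
    loc = np.asarray(loc, dtype=np.float64)
    scale_diag = np.asarray(scale_diag, dtype=np.float64)
    k = loc.shape[-1]
    # y = inv(scale) @ (x - loc); for a diagonal scale this is elementwise.
    y = (x - loc) / scale_diag
    # Z = 2**k * |det(scale)| = 2**k * prod(|scale_diag|).
    z = 2.0**k * np.abs(np.prod(scale_diag))
    return np.exp(-np.sum(np.abs(y))) / z
```

For a diagonal `scale`, this agrees with multiplying the `k` marginal Laplace densities `exp(-|x_i - loc_i| / s_i) / (2 * s_i)` directly.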

A (non-batch) `scale` matrix is:

```none
scale = diag(scale_diag + scale_identity_multiplier * ones(k))
```

where:

* `scale_diag.shape = [k]`, and,
* `scale_identity_multiplier.shape = []`.

If both `scale_diag` and `scale_identity_multiplier` are `None`, then `scale` is the Identity matrix.
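The assembly of `scale` from these two parameters, including the identity default, can be sketched as follows (the helper `make_scale` is hypothetical, not a TensorFlow function):

```python
import numpy as np

def make_scale(k, scale_diag=None, scale_identity_multiplier=None):
    """Sketch of how the diagonal `scale` matrix is assembled.

    Hypothetical helper mirroring the formula above; not part of the API.
    """
    if scale_diag is None and scale_identity_multiplier is None:
        # Both unset: `scale` is the identity matrix.
        return np.eye(k)
    diag = np.zeros(k)
    if scale_diag is not None:
        diag = diag + np.asarray(scale_diag, dtype=np.float64)
    if scale_identity_multiplier is not None:
        # Add scale_identity_multiplier * ones(k) to the diagonal.
        diag = diag + scale_identity_multiplier * np.ones(k)
    return np.diag(diag)
```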

The VectorLaplace distribution is a member of the [location-scale family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be constructed as,

```none
X = (X_1, ..., X_k), each X_i ~ Laplace(loc=0, scale=1)
Y = (Y_1, ..., Y_k) = scale @ X + loc
```
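The location-scale construction above can be checked numerically: sampling iid standard Laplace variables and applying `scale @ X + loc` should reproduce the stated mean and `covariance = 2 * scale @ scale.T`. A sample-based sketch (`sample_vector_laplace` is an illustrative helper, not a library function):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_vector_laplace(loc, scale, n, rng):
    """Draw n samples of Y = scale @ X + loc, with iid X_i ~ Laplace(0, 1).

    Sketch of the location-scale construction; not a library function.
    """
    loc = np.asarray(loc, dtype=np.float64)
    scale = np.asarray(scale, dtype=np.float64)
    k = loc.shape[-1]
    # Each row of x is one draw of (X_1, ..., X_k).
    x = rng.laplace(loc=0.0, scale=1.0, size=(n, k))
    # Row-wise scale @ X + loc.
    return x @ scale.T + loc
```

Since `Var(Laplace(0, 1)) = 2`, the empirical covariance of the samples approaches `2 * scale @ scale.T`, matching the covariance stated at the top of this page.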

#### About `VectorLaplace` and `Vector` distributions in TensorFlow.

The `VectorLaplace` is a non-standard distribution that has useful properties.

The marginals `Y_1, ..., Y_k` are *not* Laplace random variables, because a sum of independent Laplace random variables is not Laplace.

Instead, `Y` is a vector whose components are linear combinations of Laplace random variables. Thus, `Y` lives in the vector space generated by vectors of Laplace random variables. This allows the user to decide the mean and covariance (by setting `loc` and `scale`), while preserving some properties of the Laplace distribution. In particular, the tails of `Y_i` will be (up to polynomial factors) exponentially decaying.

To see this last statement, note that the pdf of `Y_i` is a convolution of the pdfs of `k` independent (scaled) Laplace random variables. One can then show by induction that distributions with exponential (up to polynomial factors) tails are closed under convolution.
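The non-Laplace marginals admit a quick Monte Carlo sanity check via kurtosis: excess kurtosis is cumulant-additive, so a Laplace variable has excess kurtosis 3 while the sum of two iid Laplace variables has 3/2, which a Laplace distribution cannot match. A sketch of this check (illustrative only, not part of the API):

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_kurtosis(samples):
    """Sample excess kurtosis: E[(x - mean)^4] / var^2 - 3."""
    c = samples - samples.mean()
    return (c**4).mean() / (c**2).mean()**2 - 3.0

# One Laplace variable: excess kurtosis ~= 3.
lap = rng.laplace(size=1_000_000)

# Sum of two independent Laplace variables: fourth cumulants add while
# variance doubles, giving excess kurtosis ~= 3/2 -- so the sum is not Laplace.
pair_sum = rng.laplace(size=1_000_000) + rng.laplace(size=1_000_000)
```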

#### Examples
```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# Initialize a single 2-variate VectorLaplace.
vla = tfd.VectorLaplaceDiag(
    loc=[1., -1],
    scale_diag=[1, 2.])

vla.mean().eval()
# ==> [1., -1]

vla.stddev().eval()
# ==> [1., 2] * sqrt(2)

# Evaluate this on an observation in `R^2`, returning a scalar.
vla.prob([-1., 0]).eval()  # shape: []

# Initialize a 3-batch, 2-variate scaled-identity VectorLaplace.
vla = tfd.VectorLaplaceDiag(
    loc=[1., -1],
    scale_identity_multiplier=[1, 2., 3])

vla.mean().eval()  # shape: [3, 2]
# ==> [[1., -1],
#      [1, -1],
#      [1, -1]]

vla.stddev().eval()  # shape: [3, 2]
# ==> sqrt(2) * [[1., 1],
#               [2, 2],
#               [3, 3]]

# Evaluate this on an observation in `R^2`, returning a length-3 vector.
vla.prob([-1., 0]).eval()  # shape: [3]

# Initialize a 2-batch of 3-variate VectorLaplace's.
vla = tfd.VectorLaplaceDiag(
    loc=[[1., 2, 3],
         [11, 22, 33]],          # shape: [2, 3]
    scale_diag=[[1., 2, 3],
                [0.5, 1, 1.5]])  # shape: [2, 3]

# Evaluate this on two observations, each in `R^3`, returning a length-2
# vector.
x = [[-1., 0, 1],
     [-11, 0, 11.]]  # shape: [2, 3]
vla.prob(x).eval()   # shape: [2]
```