LostTech.TensorFlow : API Documentation

Type VectorSinhArcsinhDiag

Namespace tensorflow.contrib.distributions

Parent TransformedDistribution

Interfaces IVectorSinhArcsinhDiag

The (diagonal) SinhArcsinh transformation of a distribution on `R^k`.

This distribution models a random vector `Y = (Y1,...,Yk)`, making use of a `SinhArcsinh` transformation (which has adjustable tailweight and skew), a rescaling, and a shift.

The `SinhArcsinh` transformation of the Normal is described in great depth in [Sinh-arcsinh distributions](https://www.jstor.org/stable/27798865). Here we use a slightly different parameterization, in terms of `tailweight` and `skewness`. Additionally we allow for distributions other than Normal, and control over `scale` as well as a "shift" parameter `loc`.

#### Mathematical Details

Given an iid random vector `Z = (Z1,...,Zk)`, we define `Y`, the VectorSinhArcsinhDiag transformation of `Z`, parameterized by `(loc, scale, skewness, tailweight)`, via the relation (with `@` denoting matrix multiplication):

```
Y      := loc + scale @ F(Z) * (2 / F_0(2))
F(Z)   := Sinh( (Arcsinh(Z) + skewness) * tailweight )
F_0(Z) := Sinh( Arcsinh(Z) * tailweight )
```
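As a concrete illustration of the relation above, the following NumPy sketch applies the transformation to one draw of `Z`. The helper names `F` and `F0` and all parameter values are hypothetical choices for this example, not part of the API.

```python
# Illustrative sketch of the relation above in NumPy (not the library API).
# The helpers F, F0 and all parameter values are assumptions for this example.
import numpy as np

def F(z, skewness, tailweight):
    return np.sinh((np.arcsinh(z) + skewness) * tailweight)

def F0(z, tailweight):
    return np.sinh(np.arcsinh(z) * tailweight)

k = 3
loc = np.zeros(k)
scale = np.diag([1.0, 2.0, 0.5])          # a diagonal `scale`
skewness, tailweight = 0.5, 1.5

z = np.random.standard_normal(k)          # one draw of Z with iid Normal(0, 1) parts
y = loc + scale @ F(z, skewness, tailweight) * (2.0 / F0(2.0, tailweight))
```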

This distribution is similar to the location-scale transformation `L(Z) := loc + scale @ Z` in the following ways:

* If `skewness = 0` and `tailweight = 1` (the defaults), `F(Z) = Z` and then `Y = L(Z)` exactly.
* `loc` is used in both to shift the result by a constant.
* The multiplication of `scale` by `2 / F_0(2)` ensures that if `skewness = 0` then `P[Y - loc <= 2 * scale] = P[L(Z) - loc <= 2 * scale]`. Thus it can be said that the weights in the tails of `Y` and `L(Z)` beyond `loc + 2 * scale` are the same.

This distribution is different from `loc + scale @ Z` due to the reshaping done by `F`:

* Positive (negative) `skewness` leads to positive (negative) skew.
  * Positive skew means the mode of `F(Z)` is "tilted" to the right.
  * Positive skew means positive values of `F(Z)` become more likely, and negative values become less likely.
* Larger (smaller) `tailweight` leads to fatter (thinner) tails.
  * Fatter tails mean larger values of `|F(Z)|` become more likely.
  * `tailweight < 1` leads to a distribution that is "flat" around `Y = loc`, with a very steep drop-off in the tails.
  * `tailweight > 1` leads to a distribution more peaked at the mode, with heavier tails.
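These qualitative effects can be checked with a small Monte Carlo sketch. The helper `F`, the sample size, and the `(skewness, tailweight)` grid below are assumptions chosen purely for illustration.

```python
# Monte Carlo sketch of the bullet points above: skewness shifts the sample skew,
# larger tailweight puts more mass in the tails of F(Z).
import numpy as np

def F(z, skewness, tailweight):
    return np.sinh((np.arcsinh(z) + skewness) * tailweight)

z = np.random.standard_normal(1_000_000)

for skewness, tailweight in [(0.0, 1.0), (1.0, 1.0), (0.0, 2.0)]:
    y = F(z, skewness, tailweight)
    sample_skew = np.mean((y - y.mean()) ** 3) / y.std() ** 3
    tail_mass = np.mean(np.abs(y) > 3.0)  # mass beyond |F(Z)| = 3
    print(f"skewness={skewness}, tailweight={tailweight}: "
          f"skew={sample_skew:+.2f}, P[|F(Z)| > 3]={tail_mass:.4f}")
```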

To see the argument about the tails, note that for `|Z| >> 1` and `|Z| >> (|skewness| * tailweight)**tailweight`, we have `Y approx 0.5 Z**tailweight e**(sign(Z) skewness * tailweight)`.
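A rough numeric check of this tail behaviour (the helper `F` and the chosen values are illustrative assumptions): for large `|Z|`, `F(Z)` scales like `|Z|**tailweight`, and the `sign(Z)` term makes the two tails asymmetric by a factor of roughly `e**(2 * skewness * tailweight)`.

```python
# Numeric check of the asymptotic claim above; F and all values are assumptions.
import numpy as np

def F(z, skewness, tailweight):
    return np.sinh((np.arcsinh(z) + skewness) * tailweight)

skewness, tailweight = 0.5, 1.5
z = 1e4                                   # |Z| >> 1

power_law = F(10 * z, skewness, tailweight) / F(z, skewness, tailweight)
print(power_law, 10 ** tailweight)        # both approximately 31.6

asymmetry = F(z, skewness, tailweight) / -F(-z, skewness, tailweight)
print(asymmetry, np.exp(2 * skewness * tailweight))   # both approximately 4.48
```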

To see the argument regarding multiplying `scale` by `2 / F_0(2)`,

```
P[(Y - loc) / scale <= 2] = P[F(Z) * (2 / F_0(2)) <= 2]
                          = P[F(Z) <= F_0(2)]
                          = P[Z <= 2]  (if F = F_0).
```
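The `skewness = 0` case of this identity is easy to confirm empirically. In the sketch below the helper `F0`, the `tailweight` value, and the sample size are illustrative assumptions.

```python
# Monte Carlo check of the identity above in the skewness = 0 case.
import numpy as np

def F0(z, tailweight):
    return np.sinh(np.arcsinh(z) * tailweight)

tailweight = 2.0
z = np.random.standard_normal(1_000_000)
# With skewness = 0, loc = 0 and scale = 1, Y reduces to F_0(Z) * (2 / F_0(2)).
y = F0(z, tailweight) * (2.0 / F0(2.0, tailweight))

print(np.mean(y <= 2.0), np.mean(z <= 2.0))   # identical, approximately 0.977
```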

Public static methods

VectorSinhArcsinhDiag NewDyn(object loc, object scale_diag, object scale_identity_multiplier, object skewness, object tailweight, object distribution, ImplicitContainer<T> validate_args, ImplicitContainer<T> allow_nan_stats, ImplicitContainer<T> name)

Construct VectorSinhArcsinhDiag distribution on `R^k`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-10-01. Instructions for updating: The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.contrib.distributions`.
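As a hedged migration sketch only: assuming the installed TensorFlow Probability build still provides `tfp.distributions.VectorSinhArcsinhDiag` with the same argument names as this contrib class (the TFP class was itself deprecated in later releases), the Python replacement would look roughly like this.

```python
# Hedged migration sketch; assumes tfp.distributions.VectorSinhArcsinhDiag is
# still available in the installed TFP version with the same argument names.
import tensorflow_probability as tfp

dist = tfp.distributions.VectorSinhArcsinhDiag(
    loc=[0.0, 0.0],
    scale_diag=[1.0, 2.0],
    skewness=0.5,
    tailweight=1.5)

samples = dist.sample(4)   # a Tensor of shape [4, 2]
```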

The arguments `scale_diag` and `scale_identity_multiplier` combine to define the diagonal `scale` referred to in this class docstring:

```none
scale = diag(scale_diag + scale_identity_multiplier * ones(k))
```
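A minimal NumPy sketch of that formula (all values below are hypothetical):

```python
# Assemble the diagonal `scale` from scale_diag and scale_identity_multiplier.
import numpy as np

k = 3
scale_diag = np.array([0.5, 1.0, 2.0])
scale_identity_multiplier = 0.1

scale = np.diag(scale_diag + scale_identity_multiplier * np.ones(k))
# [[0.6 0.  0. ]
#  [0.  1.1 0. ]
#  [0.  0.  2.1]]
```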

The `batch_shape` is the broadcast shape of the `loc` and `scale` arguments.

The `event_shape` is given by the last dimension of the matrix implied by `scale`. The last dimension of `loc` (if provided) must broadcast with this.

Additional leading dimensions (if any) will index batches.
Parameters
object loc
Floating-point `Tensor`. If this is set to `None`, `loc` is implicitly `0`. When specified, may have shape `[B1,..., Bb, k]` where `b >= 0` and `k` is the event size.
object scale_diag
Non-zero, floating-point `Tensor` representing a diagonal matrix added to `scale`. May have shape `[B1,..., Bb, k]`, `b >= 0`, and characterizes `b`-batches of `k x k` diagonal matrices added to `scale`. When both `scale_identity_multiplier` and `scale_diag` are `None` then `scale` is the `Identity`.
object scale_identity_multiplier
Non-zero, floating-point `Tensor` representing a scaled-identity-matrix added to `scale`. May have shape `[B1,..., Bb]`, `b >= 0`, and characterizes `b`-batches of scaled `k x k` identity matrices added to `scale`. When both `scale_identity_multiplier` and `scale_diag` are `None` then `scale` is the `Identity`.
object skewness
Skewness parameter. Floating-point `Tensor` with shape broadcastable with `event_shape`.
object tailweight
Tailweight parameter. Floating-point `Tensor` with shape broadcastable with `event_shape`.
object distribution
`tf.Distribution`-like instance. Distribution from which `k` iid samples are used as input to transformation `F`. Default is `tfp.distributions.Normal(loc=0., scale=1.)`. Must be a scalar-batch, scalar-event distribution. Typically `distribution.reparameterization_type = FULLY_REPARAMETERIZED` or it is a function of non-trainable parameters. WARNING: If you backprop through a VectorSinhArcsinhDiag sample and `distribution` is not `FULLY_REPARAMETERIZED` yet is a function of trainable variables, then the gradient will be incorrect!
ImplicitContainer<T> validate_args
Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs.
ImplicitContainer<T> allow_nan_stats
Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined.
ImplicitContainer<T> name
Python `str` name prefixed to Ops created by this class.

Public properties

object allow_nan_stats get;

object allow_nan_stats_dyn get;

TensorShape batch_shape get;

object batch_shape_dyn get;

object bijector get;

object bijector_dyn get;

object distribution get;

object distribution_dyn get;

object dtype get;

object dtype_dyn get;

TensorShape event_shape get;

object event_shape_dyn get;

object loc get;

The `loc` in `Y := loc + scale @ F(Z) * (2 / F_0(2))`.

object loc_dyn get;

The `loc` in `Y := loc + scale @ F(Z) * (2 / F_0(2))`.

string name get;

object name_dyn get;

IDictionary<object, object> parameters get;

object parameters_dyn get;

object PythonObject get;

object reparameterization_type get;

object reparameterization_type_dyn get;

object scale get;

The `LinearOperator` `scale` in `Y := loc + scale @ F(Z) * (2 / F_0(2))`.

object scale_dyn get;

The `LinearOperator` `scale` in `Y := loc + scale @ F(Z) * (2 / F_0(2))`.

object skewness get;

Controls the skewness. `skewness > 0` means right skew.

object skewness_dyn get;

Controls the skewness. `skewness > 0` means right skew.

object tailweight get;

Controls the tail decay. `tailweight > 1` means fatter tails (slower decay) than the Normal.

object tailweight_dyn get;

Controls the tail decay. `tailweight > 1` means fatter tails (slower decay) than the Normal.

object validate_args get;

object validate_args_dyn get;