# LostTech.TensorFlow : API Documentation

Type SinhArcsinh

Namespace tensorflow.contrib.distributions.bijectors

Parent Bijector

Interfaces ISinhArcsinh

Compute `Y = g(X) = Sinh( (Arcsinh(X) + skewness) * tailweight )`.

For `skewness in (-inf, inf)` and `tailweight in (0, inf)`, this transformation is a diffeomorphism of the real line `(-inf, inf)`. The inverse transform is `X = g^{-1}(Y) = Sinh( ArcSinh(Y) / tailweight - skewness )`.
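The forward and inverse transforms above can be sketched directly with Python's standard `math` module (this illustrates the formulas only, not the .NET API surface; `forward` and `inverse` are hypothetical helper names):

```python
import math

def forward(x, skewness, tailweight):
    """Y = g(X) = Sinh((Arcsinh(X) + skewness) * tailweight)."""
    return math.sinh((math.asinh(x) + skewness) * tailweight)

def inverse(y, skewness, tailweight):
    """X = g^{-1}(Y) = Sinh(Arcsinh(Y) / tailweight - skewness)."""
    return math.sinh(math.asinh(y) / tailweight - skewness)

# Because the transform is a diffeomorphism of the real line,
# the inverse recovers X for any real input.
x = -2.5
y = forward(x, skewness=0.7, tailweight=1.3)
assert abs(inverse(y, skewness=0.7, tailweight=1.3) - x) < 1e-9
```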

The `SinhArcsinh` transformation of the Normal is described in [Sinh-arcsinh distributions](https://www.jstor.org/stable/27798865). This Bijector allows a similar transformation of any distribution supported on `(-inf, inf)`.


#### Meaning of the parameters

* If `skewness = 0` and `tailweight = 1`, this transform is the identity.
* Positive (negative) `skewness` leads to positive (negative) skew.
  * Positive skew means, for unimodal `X` centered at zero, the mode of `Y` is "tilted" to the right.
  * Positive skew means positive values of `Y` become more likely, and negative values become less likely.
* Larger (smaller) `tailweight` leads to fatter (thinner) tails.
  * Fatter tails mean larger values of `|Y|` become more likely.
  * If `X` is a unit Normal, `tailweight < 1` leads to a distribution that is "flat" around `Y = 0`, with a very steep drop-off in the tails.
  * If `X` is a unit Normal, `tailweight > 1` leads to a distribution more peaked at the mode with heavier tails.
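These parameter effects can be checked numerically against the forward formula (a minimal sketch in Python of the underlying math, not the .NET API; `forward` is a hypothetical helper):

```python
import math

def forward(x, skewness, tailweight):
    # Y = Sinh((Arcsinh(X) + skewness) * tailweight)
    return math.sinh((math.asinh(x) + skewness) * tailweight)

# skewness = 0 and tailweight = 1 give the identity transform.
assert abs(forward(2.0, 0.0, 1.0) - 2.0) < 1e-9

# Positive skewness pushes positive inputs further from zero than
# negative inputs of the same magnitude, i.e. positive skew.
assert abs(forward(1.0, 0.5, 1.0)) > abs(forward(-1.0, 0.5, 1.0))

# tailweight > 1 fattens the tails: large |X| maps to even larger |Y|.
assert forward(10.0, 0.0, 1.5) > 10.0
```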

To see the argument about the tails, note that for `|X| >> 1` and `|X| >> (|skewness| * tailweight)**tailweight`, we have `Y approx 0.5 X**tailweight e**(sign(X) skewness * tailweight)`.
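The power-law tail behavior can be verified numerically: for large `X` (with `skewness = 0`), `|Y|` grows like `|X|**tailweight` up to a constant factor, so `log|Y| / log|X|` approaches `tailweight`. A small Python check of this (the math only, not the .NET API; `forward` is a hypothetical helper):

```python
import math

def forward(x, skewness, tailweight):
    # Y = Sinh((Arcsinh(X) + skewness) * tailweight)
    return math.sinh((math.asinh(x) + skewness) * tailweight)

# For |X| >> 1 the growth exponent of Y approaches tailweight.
t = 2.0
x = 1e6
y = forward(x, skewness=0.0, tailweight=t)
exponent = math.log(y) / math.log(x)
assert abs(exponent - t) < 0.1
```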

### Public properties

#### Tensor skewness get;

The `skewness` in: `Y = Sinh((Arcsinh(X) + skewness) * tailweight)`.

#### object skewness_dyn get;

The `skewness` in: `Y = Sinh((Arcsinh(X) + skewness) * tailweight)`.

#### object tailweight get;

The `tailweight` in: `Y = Sinh((Arcsinh(X) + skewness) * tailweight)`.

#### object tailweight_dyn get;

The `tailweight` in: `Y = Sinh((Arcsinh(X) + skewness) * tailweight)`.