LostTech.TensorFlow : API Documentation

Type tf.math

Namespace tensorflow

Public static methods

object bessel_i0(object x, string name)

Computes the Bessel i0 function of `x` element-wise.

Modified Bessel function of order 0.

It is preferable to use the numerically more stable function `bessel_i0e(x)` instead.
Parameters
object x
A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
object
A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.

object bessel_i0_dyn(object x, object name)

Computes the Bessel i0 function of `x` element-wise.

Modified Bessel function of order 0.

It is preferable to use the numerically more stable function `bessel_i0e(x)` instead.
Parameters
object x
A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.

Tensor bessel_i0e(IGraphNodeBase x, string name)

Computes the Bessel i0e function of `x` element-wise.

Exponentially scaled modified Bessel function of order 0 defined as `bessel_i0e(x) = exp(-abs(x)) bessel_i0(x)`.

This function is faster and more numerically stable than `bessel_i0(x)`.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.

If `x` is a `SparseTensor`, returns `SparseTensor(x.indices, tf.math.bessel_i0e(x.values,...), x.dense_shape)`
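The relationship between `bessel_i0` and `bessel_i0e` can be sketched in plain Python using the power series of the modified Bessel function. This is a numerical sketch of the semantics, not the library's implementation:

```python
import math

def bessel_i0(x, terms=50):
    """Modified Bessel function of order 0 via its power series:
    I0(x) = sum_{k>=0} (x^2/4)^k / (k!)^2."""
    total = 0.0
    term = 1.0  # k = 0 term
    for k in range(terms):
        total += term
        term *= (x * x / 4.0) / ((k + 1) * (k + 1))
    return total

def bessel_i0e(x):
    """Exponentially scaled version: i0e(x) = exp(-|x|) * i0(x).
    The scaling keeps the value bounded where i0(x) alone would overflow."""
    return math.exp(-abs(x)) * bessel_i0(x)

print(bessel_i0e(1.0))  # ~0.4658
```

For large `x`, `bessel_i0(x)` grows like `exp(x)` and overflows quickly, while `bessel_i0e(x)` stays bounded; this is why the scaled variant is recommended.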

object bessel_i0e_dyn(object x, object name)

Computes the Bessel i0e function of `x` element-wise.

Exponentially scaled modified Bessel function of order 0 defined as `bessel_i0e(x) = exp(-abs(x)) bessel_i0(x)`.

This function is faster and more numerically stable than `bessel_i0(x)`.
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.

If `x` is a `SparseTensor`, returns `SparseTensor(x.indices, tf.math.bessel_i0e(x.values,...), x.dense_shape)`

object bessel_i1(object x, string name)

Computes the Bessel i1 function of `x` element-wise.

Modified Bessel function of order 1.

It is preferable to use the numerically more stable function `bessel_i1e(x)` instead.
Parameters
object x
A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
object
A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.

object bessel_i1_dyn(object x, object name)

Computes the Bessel i1 function of `x` element-wise.

Modified Bessel function of order 1.

It is preferable to use the numerically more stable function `bessel_i1e(x)` instead.
Parameters
object x
A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.

Tensor bessel_i1e(IGraphNodeBase x, string name)

Computes the Bessel i1e function of `x` element-wise.

Exponentially scaled modified Bessel function of order 1 defined as `bessel_i1e(x) = exp(-abs(x)) bessel_i1(x)`.

This function is faster and more numerically stable than `bessel_i1(x)`.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.

If `x` is a `SparseTensor`, returns `SparseTensor(x.indices, tf.math.bessel_i1e(x.values,...), x.dense_shape)`

object bessel_i1e_dyn(object x, object name)

Computes the Bessel i1e function of `x` element-wise.

Exponentially scaled modified Bessel function of order 1 defined as `bessel_i1e(x) = exp(-abs(x)) bessel_i1(x)`.

This function is faster and more numerically stable than `bessel_i1(x)`.
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.

If `x` is a `SparseTensor`, returns `SparseTensor(x.indices, tf.math.bessel_i1e(x.values,...), x.dense_shape)`

Tensor cumulative_logsumexp(IGraphNodeBase x, int axis, bool exclusive, bool reverse, string name)

Compute the cumulative log-sum-exp of the tensor `x` along `axis`.

By default, this op performs an inclusive cumulative log-sum-exp, which means that the first element of the input is identical to the first element of the output.

This operation is significantly more numerically stable than the equivalent expression `tf.math.log(tf.math.cumsum(tf.math.exp(x)))`, although it computes the same result given infinite numerical precision. However, note that in some cases it may be less stable than `tf.math.reduce_logsumexp` for a given element, as it applies the "log-sum-exp trick" in a different way.

More precisely, where tf.math.reduce_logsumexp uses the following trick:

``` log(sum(exp(x))) == log(sum(exp(x - max(x)))) + max(x) ```

it cannot be directly used here as there is no fast way of applying it to each prefix `x[:i]`. Instead, this function implements a prefix scan using pairwise log-add-exp, which is a commutative and associative (up to floating point precision) operator:

``` log_add_exp(x, y) = log(exp(x) + exp(y)) = log(1 + exp(min(x, y) - max(x, y))) + max(x, y) ```

However, reducing using the above operator leads to a different computation tree (logs are taken repeatedly instead of only at the end), and the maximum is only computed pairwise instead of over the entire prefix. In general, this leads to a different and slightly less precise computation.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `float16`, `float32`, `float64`.
int axis
A `Tensor` of type `int32` or `int64` (default: 0). Must be in the range `[-rank(x), rank(x))`.
bool exclusive
If `True`, perform exclusive cumulative log-sum-exp.
bool reverse
If `True`, performs the cumulative log-sum-exp in the reverse direction.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same shape and type as `x`.
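The prefix scan described above can be sketched in plain Python. This is an illustrative sketch of the pairwise log-add-exp scan, not the library's implementation:

```python
import math

def log_add_exp(a, b):
    """Numerically stable log(exp(a) + exp(b)):
    log(1 + exp(min(a, b) - max(a, b))) + max(a, b)."""
    lo, hi = min(a, b), max(a, b)
    return hi + math.log1p(math.exp(lo - hi))

def cumulative_logsumexp(xs):
    """Inclusive scan: out[i] = log(sum(exp(xs[:i + 1])))."""
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else log_add_exp(acc, x)
        out.append(acc)
    return out

# Three equal elements: the running log-sum-exp is log(1), log(2), log(3).
print(cumulative_logsumexp([0.0, 0.0, 0.0]))
```

Because `log_add_exp` subtracts the running maximum at each step, the scan avoids the overflow that the naive `log(cumsum(exp(x)))` hits for large inputs.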

object cumulative_logsumexp_dyn(object x, ImplicitContainer<T> axis, ImplicitContainer<T> exclusive, ImplicitContainer<T> reverse, object name)

Compute the cumulative log-sum-exp of the tensor `x` along `axis`.

By default, this op performs an inclusive cumulative log-sum-exp, which means that the first element of the input is identical to the first element of the output.

This operation is significantly more numerically stable than the equivalent expression `tf.math.log(tf.math.cumsum(tf.math.exp(x)))`, although it computes the same result given infinite numerical precision. However, note that in some cases it may be less stable than `tf.math.reduce_logsumexp` for a given element, as it applies the "log-sum-exp trick" in a different way.

More precisely, where tf.math.reduce_logsumexp uses the following trick:

``` log(sum(exp(x))) == log(sum(exp(x - max(x)))) + max(x) ```

it cannot be directly used here as there is no fast way of applying it to each prefix `x[:i]`. Instead, this function implements a prefix scan using pairwise log-add-exp, which is a commutative and associative (up to floating point precision) operator:

``` log_add_exp(x, y) = log(exp(x) + exp(y)) = log(1 + exp(min(x, y) - max(x, y))) + max(x, y) ```

However, reducing using the above operator leads to a different computation tree (logs are taken repeatedly instead of only at the end), and the maximum is only computed pairwise instead of over the entire prefix. In general, this leads to a different and slightly less precise computation.
Parameters
object x
A `Tensor`. Must be one of the following types: `float16`, `float32`, `float64`.
ImplicitContainer<T> axis
A `Tensor` of type `int32` or `int64` (default: 0). Must be in the range `[-rank(x), rank(x))`.
ImplicitContainer<T> exclusive
If `True`, perform exclusive cumulative log-sum-exp.
ImplicitContainer<T> reverse
If `True`, performs the cumulative log-sum-exp in the reverse direction.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same shape and type as `x`.

Tensor multiply_no_nan(IGraphNodeBase x, IGraphNodeBase y, string name)

Computes the product of `x` and `y` and returns 0 if `y` is zero, even if `x` is NaN or infinite.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `float32`, `float64`.
IGraphNodeBase y
A `Tensor` whose dtype is compatible with `x`.
string name
A name for the operation (optional).
Returns
Tensor
The element-wise product of `x` and `y`.
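The special case that distinguishes this op from a plain multiply can be shown with a scalar sketch in plain Python (under IEEE-754 rules, `inf * 0` is NaN; this op forces it to 0):

```python
def multiply_no_nan(x, y):
    """Return x * y, but force the result to 0 whenever y is 0,
    even when x is NaN or infinite (where IEEE-754 would give NaN)."""
    return 0.0 if y == 0.0 else x * y

inf = float("inf")
print(inf * 0.0)                  # nan under IEEE-754 rules
print(multiply_no_nan(inf, 0.0))  # 0.0
```

This is useful for masking: multiplying by a 0/1 mask zeroes out entries regardless of whether they hold NaN or infinity.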

object multiply_no_nan_dyn(object x, object y, object name)

Computes the product of `x` and `y` and returns 0 if `y` is zero, even if `x` is NaN or infinite.
Parameters
object x
A `Tensor`. Must be one of the following types: `float32`, `float64`.
object y
A `Tensor` whose dtype is compatible with `x`.
object name
A name for the operation (optional).
Returns
object
The element-wise product of `x` and `y`.

Tensor nextafter(IGraphNodeBase x1, IGraphNodeBase x2, string name)

Returns the next representable value of `x1` in the direction of `x2`, element-wise.

This operation returns the same result as the C++ std::nextafter function.

It can also return a subnormal number.
Parameters
IGraphNodeBase x1
A `Tensor`. Must be one of the following types: `float64`, `float32`.
IGraphNodeBase x2
A `Tensor`. Must have the same type as `x1`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x1`.
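Python's standard library exposes the same C-style primitive (`math.nextafter`, available since Python 3.9), which can be used to illustrate the scalar behavior:

```python
import math
import sys

# math.nextafter mirrors C++ std::nextafter: the next representable
# double after the first argument, stepping toward the second.
print(math.nextafter(1.0, 2.0))  # 1.0000000000000002 (1.0 + 2**-52)
print(math.nextafter(1.0, 0.0))  # 0.9999999999999999

# Stepping toward zero from the smallest normal double yields a subnormal.
smallest_normal = sys.float_info.min
print(math.nextafter(smallest_normal, 0.0) < smallest_normal)  # True
```

The gap of `2**-52` above is one unit in the last place (ulp) at 1.0 for IEEE-754 doubles; the tensor op applies the same step element-wise.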

object nextafter_dyn(object x1, object x2, object name)

Returns the next representable value of `x1` in the direction of `x2`, element-wise.

This operation returns the same result as the C++ std::nextafter function.

It can also return a subnormal number.
Parameters
object x1
A `Tensor`. Must be one of the following types: `float64`, `float32`.
object x2
A `Tensor`. Must have the same type as `x1`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x1`.

Tensor polyval(IEnumerable<object> coeffs, IGraphNodeBase x, string name)

Computes the elementwise value of a polynomial.

If `x` is a tensor and `coeffs` is a list of n + 1 tensors, this function returns the value of the degree-n polynomial

p(x) = coeffs[n] + coeffs[n-1] * x + ... + coeffs[0] * x**n

evaluated using Horner's method, i.e.

p(x) = coeffs[n] + x * (coeffs[n-1] + ... + x * (coeffs[1] + x * coeffs[0]))
Parameters
IEnumerable<object> coeffs
A list of `Tensor` representing the coefficients of the polynomial.
IGraphNodeBase x
A `Tensor` representing the variable of the polynomial.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as the expression p(x), with the usual broadcasting rules for element-wise addition and multiplication applied.
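Horner's method is a single fold over the coefficient list, highest power first. A scalar sketch in plain Python (illustrative only, not the tensor op):

```python
from functools import reduce

def polyval(coeffs, x):
    """Horner's method: coeffs[0] multiplies the highest power of x.
    Each step folds one coefficient: acc -> acc * x + c."""
    return reduce(lambda acc, c: acc * x + c, coeffs)

# p(x) = 1*x**2 + 2*x + 3 evaluated at x = 2
print(polyval([1.0, 2.0, 3.0], 2.0))  # 11.0
```

Horner's form needs only n multiplications and n additions for n + 1 coefficients, versus repeated exponentiation in the naive expansion.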

object polyval_dyn(object coeffs, object x, object name)

Computes the elementwise value of a polynomial.

If `x` is a tensor and `coeffs` is a list of n + 1 tensors, this function returns the value of the degree-n polynomial

p(x) = coeffs[n] + coeffs[n-1] * x + ... + coeffs[0] * x**n

evaluated using Horner's method, i.e.

p(x) = coeffs[n] + x * (coeffs[n-1] + ... + x * (coeffs[1] + x * coeffs[0]))
Parameters
object coeffs
A list of `Tensor` representing the coefficients of the polynomial.
object x
A `Tensor` representing the variable of the polynomial.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of the same shape as the expression p(x), with the usual broadcasting rules for element-wise addition and multiplication applied.

Tensor reciprocal_no_nan(IGraphNodeBase x, string name)

Performs a safe reciprocal operation, element-wise.

If a particular element is zero, the reciprocal for that element is also set to zero.
Parameters
IGraphNodeBase x
A `Tensor` of type `float16`, `float32`, `float64`, `complex64`, or `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of same shape and type as `x`.
Show Example
x = tf.constant([2.0, 0.5, 0, 1], dtype=tf.float32)
tf.math.reciprocal_no_nan(x)  # [ 0.5, 2, 0.0, 1.0 ]
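The element-wise convention can be sketched with a plain-Python list version (illustrative only):

```python
def reciprocal_no_nan(xs):
    """Element-wise 1/x with the safe convention 1/0 -> 0."""
    return [0.0 if x == 0 else 1.0 / x for x in xs]

print(reciprocal_no_nan([2.0, 0.5, 0.0, 1.0]))  # [0.5, 2.0, 0.0, 1.0]
```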

object reciprocal_no_nan_dyn(object x, object name)

Performs a safe reciprocal operation, element-wise.

If a particular element is zero, the reciprocal for that element is also set to zero.
Parameters
object x
A `Tensor` of type `float16`, `float32`, `float64`, `complex64`, or `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of same shape and type as `x`.
Show Example
x = tf.constant([2.0, 0.5, 0, 1], dtype=tf.float32)
tf.math.reciprocal_no_nan(x)  # [ 0.5, 2, 0.0, 1.0 ]

Tensor reduce_euclidean_norm(IGraphNodeBase input_tensor, int axis, bool keepdims, string name)

Computes the Euclidean norm of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IGraphNodeBase input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name for the operation (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1, 2, 3], [1, 1, 1]])
tf.reduce_euclidean_norm(x)  # sqrt(17)
tf.reduce_euclidean_norm(x, 0)  # [sqrt(2), sqrt(5), sqrt(10)]
tf.reduce_euclidean_norm(x, 1)  # [sqrt(14), sqrt(3)]
tf.reduce_euclidean_norm(x, 1, keepdims=True)  # [[sqrt(14)], [sqrt(3)]]
tf.reduce_euclidean_norm(x, [0, 1])  # sqrt(17)
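The norm is the square root of the sum of squares over the reduced elements. The example above can be checked with a plain-Python sketch (lists stand in for the tensor; this is not the library implementation):

```python
import math

def euclidean_norm(values):
    """sqrt of the sum of squares over all reduced elements."""
    return math.sqrt(sum(v * v for v in values))

x = [[1, 2, 3], [1, 1, 1]]
flat = [v for row in x for v in row]
print(euclidean_norm(flat))                      # sqrt(17) ~ 4.123
print([euclidean_norm(col) for col in zip(*x)])  # axis 0: [sqrt(2), sqrt(5), sqrt(10)]
print([euclidean_norm(row) for row in x])        # axis 1: [sqrt(14), sqrt(3)]
```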

Tensor reduce_euclidean_norm(IGraphNodeBase input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the Euclidean norm of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IGraphNodeBase input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name for the operation (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1, 2, 3], [1, 1, 1]])
tf.reduce_euclidean_norm(x)  # sqrt(17)
tf.reduce_euclidean_norm(x, 0)  # [sqrt(2), sqrt(5), sqrt(10)]
tf.reduce_euclidean_norm(x, 1)  # [sqrt(14), sqrt(3)]
tf.reduce_euclidean_norm(x, 1, keepdims=True)  # [[sqrt(14)], [sqrt(3)]]
tf.reduce_euclidean_norm(x, [0, 1])  # sqrt(17)

Tensor reduce_euclidean_norm(ndarray input_tensor, int axis, bool keepdims, string name)

Computes the Euclidean norm of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
ndarray input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name for the operation (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1, 2, 3], [1, 1, 1]])
tf.reduce_euclidean_norm(x)  # sqrt(17)
tf.reduce_euclidean_norm(x, 0)  # [sqrt(2), sqrt(5), sqrt(10)]
tf.reduce_euclidean_norm(x, 1)  # [sqrt(14), sqrt(3)]
tf.reduce_euclidean_norm(x, 1, keepdims=True)  # [[sqrt(14)], [sqrt(3)]]
tf.reduce_euclidean_norm(x, [0, 1])  # sqrt(17)

Tensor reduce_euclidean_norm(ndarray input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the Euclidean norm of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
ndarray input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name for the operation (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1, 2, 3], [1, 1, 1]])
tf.reduce_euclidean_norm(x)  # sqrt(17)
tf.reduce_euclidean_norm(x, 0)  # [sqrt(2), sqrt(5), sqrt(10)]
tf.reduce_euclidean_norm(x, 1)  # [sqrt(14), sqrt(3)]
tf.reduce_euclidean_norm(x, 1, keepdims=True)  # [[sqrt(14)], [sqrt(3)]]
tf.reduce_euclidean_norm(x, [0, 1])  # sqrt(17)

object reduce_euclidean_norm_dyn(object input_tensor, object axis, ImplicitContainer<T> keepdims, object name)

Computes the Euclidean norm of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
object input_tensor
The tensor to reduce. Should have numeric type.
object axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
ImplicitContainer<T> keepdims
If true, retains reduced dimensions with length 1.
object name
A name for the operation (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1, 2, 3], [1, 1, 1]])
tf.reduce_euclidean_norm(x)  # sqrt(17)
tf.reduce_euclidean_norm(x, 0)  # [sqrt(2), sqrt(5), sqrt(10)]
tf.reduce_euclidean_norm(x, 1)  # [sqrt(14), sqrt(3)]
tf.reduce_euclidean_norm(x, 1, keepdims=True)  # [[sqrt(14)], [sqrt(3)]]
tf.reduce_euclidean_norm(x, [0, 1])  # sqrt(17)

object reduce_std(ndarray input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
ndarray input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]
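Note that this is the population standard deviation (divide by N, not N - 1), which Python's `statistics.pstdev` also computes. The example above can be reproduced with a plain-Python sketch (lists stand in for the tensor):

```python
import statistics

x = [[1.0, 2.0], [3.0, 4.0]]
flat = [v for row in x for v in row]

print(statistics.pstdev(flat))                      # ~1.118 (sqrt(1.25))
print([statistics.pstdev(col) for col in zip(*x)])  # axis 0: [1.0, 1.0]
print([statistics.pstdev(row) for row in x])        # axis 1: [0.5, 0.5]
```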

object reduce_std(CompositeTensor input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
CompositeTensor input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(ndarray input_tensor, int axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
ndarray input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(IEnumerable<PythonClassContainer> input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IEnumerable<PythonClassContainer> input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(IEnumerable<PythonClassContainer> input_tensor, int axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IEnumerable<PythonClassContainer> input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(PythonClassContainer input_tensor, int axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
PythonClassContainer input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(PythonClassContainer input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
PythonClassContainer input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(IGraphNodeBase input_tensor, int axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IGraphNodeBase input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(CompositeTensor input_tensor, int axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
CompositeTensor input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std(IGraphNodeBase input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IGraphNodeBase input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

object reduce_std_dyn(object input_tensor, object axis, ImplicitContainer<T> keepdims, object name)

Computes the standard deviation of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
object input_tensor
The tensor to reduce. Should have numeric type.
object axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
ImplicitContainer<T> keepdims
If true, retains reduced dimensions with length 1.
object name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_std(x)  # 1.1180339887498949
tf.reduce_std(x, 0)  # [1., 1.]
tf.reduce_std(x, 1)  # [0.5, 0.5]

Tensor reduce_variance(ndarray input_tensor, int axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
ndarray input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(PythonClassContainer input_tensor, int axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
PythonClassContainer input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(PythonClassContainer input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
PythonClassContainer input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(IGraphNodeBase input_tensor, int axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IGraphNodeBase input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(IGraphNodeBase input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IGraphNodeBase input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(CompositeTensor input_tensor, int axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
CompositeTensor input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(CompositeTensor input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
CompositeTensor input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(IEnumerable<PythonClassContainer> input_tensor, int axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IEnumerable<PythonClassContainer> input_tensor
The tensor to reduce. Should have numeric type.
int axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(IEnumerable<PythonClassContainer> input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
IEnumerable<PythonClassContainer> input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

Tensor reduce_variance(ndarray input_tensor, IEnumerable<int> axis, bool keepdims, string name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
ndarray input_tensor
The tensor to reduce. Should have numeric type.
IEnumerable<int> axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
bool keepdims
If true, retains reduced dimensions with length 1.
string name
A name scope for the associated operations (optional).
Returns
Tensor
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]

object reduce_variance_dyn(object input_tensor, object axis, ImplicitContainer<T> keepdims, object name)

Computes the variance of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1.

If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned.
Parameters
object input_tensor
The tensor to reduce. Should have numeric type.
object axis
The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`.
ImplicitContainer<T> keepdims
If true, retains reduced dimensions with length 1.
object name
A name scope for the associated operations (optional).
Returns
object
The reduced tensor, of the same dtype as the input_tensor.
Show Example
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_variance(x)  # 1.25
tf.reduce_variance(x, 0)  # [1., 1.]
tf.reduce_variance(x, 1)  # [0.25, 0.25]
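The documented outputs are the population variance (the square of the standard deviation returned by `reduce_std`). A hedged NumPy sketch of the same numbers, assuming `np.var`'s default `ddof=0`:

```python
import numpy as np

x = np.array([[1., 2.], [3., 4.]])

# Population variance, matching the documented outputs.
print(np.var(x))          # 1.25
print(np.var(x, axis=0))  # [1. 1.]

# Variance is the square of the standard deviation.
print(np.var(x, axis=1))            # [0.25 0.25]
print(np.std(x, axis=1) ** 2)       # [0.25 0.25]
```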

Tensor xdivy(IGraphNodeBase x, IGraphNodeBase y, string name)

Returns 0 if x == 0, and x / y otherwise, element-wise.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
IGraphNodeBase y
A `Tensor`. Must have the same type as `x`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
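The point of `xdivy` is that the `x == 0` case short-circuits before the division, so `0 / 0` yields 0 instead of NaN. A hedged NumPy sketch of that semantics (`xdivy` here is a hypothetical helper, not part of the bindings):

```python
import numpy as np

def xdivy(x, y):
    """Sketch of the documented xdivy semantics:
    0 where x == 0, x / y elsewhere."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    out = np.zeros_like(x)
    # `where` skips the division entirely when x == 0,
    # so 0/0 never produces NaN.
    np.divide(x, y, out=out, where=(x != 0))
    return out

print(xdivy([0., 2.], [0., 4.]))  # [0.  0.5]
```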

object xdivy_dyn(object x, object y, object name)

Returns 0 if x == 0, and x / y otherwise, element-wise.
Parameters
object x
A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
object y
A `Tensor`. Must have the same type as `x`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.

Tensor xlogy(IGraphNodeBase x, IGraphNodeBase y, string name)

Returns 0 if x == 0, and x * log(y) otherwise, element-wise.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
IGraphNodeBase y
A `Tensor`. Must have the same type as `x`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
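As with `xdivy`, the value of `xlogy` is that the `x == 0` case is decided before `log(y)` is evaluated, so `0 * log(0)` is 0 rather than NaN (the usual cross-entropy convention; SciPy exposes the same behavior as `scipy.special.xlogy`). A hedged NumPy sketch (`xlogy` here is a hypothetical helper, not part of the bindings):

```python
import numpy as np

def xlogy(x, y):
    """Sketch of the documented xlogy semantics:
    0 where x == 0, x * log(y) elsewhere."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    # Substitute 1.0 for y where x == 0 so log(0) is never evaluated;
    # those positions are forced to 0 by the outer where anyway.
    safe_y = np.where(x == 0, 1.0, y)
    return np.where(x == 0, 0.0, x * np.log(safe_y))

print(xlogy([0., 2.], [0., np.e]))  # [0. 2.]
```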

object xlogy_dyn(object x, object y, object name)

Returns 0 if x == 0, and x * log(y) otherwise, element-wise.
Parameters
object x
A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
object y
A `Tensor`. Must have the same type as `x`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.

Public properties

PythonFunctionContainer bessel_i0_fn get;

PythonFunctionContainer bessel_i0e_fn get;

PythonFunctionContainer bessel_i1_fn get;

PythonFunctionContainer bessel_i1e_fn get;

PythonFunctionContainer cumulative_logsumexp_fn get;

PythonFunctionContainer multiply_no_nan_fn get;

PythonFunctionContainer nextafter_fn get;

PythonFunctionContainer polyval_fn get;

PythonFunctionContainer reciprocal_no_nan_fn get;

PythonFunctionContainer reduce_euclidean_norm_fn get;

PythonFunctionContainer reduce_std_fn get;

PythonFunctionContainer reduce_variance_fn get;

PythonFunctionContainer xdivy_fn get;

PythonFunctionContainer xlogy_fn get;