LostTech.TensorFlow : API Documentation

Type tf

Namespace tensorflow

Brings all of the public TensorFlow interface into this module.


Public static methods

Tensor a(string name)

object a_dyn(object name)

object abs(IGraphNodeBase x, string name)

object abs_dyn(object x, object name)

Tensor accumulate_n(IEnumerable<IGraphNodeBase> inputs, IEnumerable<object> shape, PythonClassContainer tensor_dtype, string name)

Returns the element-wise sum of a list of tensors.

Optionally, pass `shape` and `tensor_dtype` for shape and type checking; otherwise, these are inferred.

`accumulate_n` performs the same operation as tf.math.add_n, but does not wait for all of its inputs to be ready before beginning to sum. This approach can save memory if inputs are ready at different times, since the minimum temporary storage required is proportional to the output size rather than the inputs' size.

`accumulate_n` is differentiable (but was not prior to TensorFlow 1.7).
Parameters
IEnumerable<IGraphNodeBase> inputs
A list of `Tensor` objects, each with same shape and type.
IEnumerable<object> shape
Expected shape of elements of `inputs` (optional). Also controls the output shape of this op, which may affect type inference in other ops. A value of `None` means "infer the input shape from the shapes in `inputs`".
PythonClassContainer tensor_dtype
Expected data type of `inputs` (optional). A value of `None` means "infer the input dtype from `inputs[0]`".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 0], [0, 6]])
tf.math.accumulate_n([a, b, a])  # [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
# [[7, 4],
#  [6, 14]]

Tensor accumulate_n(ValueTuple<PythonClassContainer, PythonClassContainer> inputs, TensorShape shape, PythonClassContainer tensor_dtype, string name)

Returns the element-wise sum of a list of tensors.

Optionally, pass `shape` and `tensor_dtype` for shape and type checking; otherwise, these are inferred.

`accumulate_n` performs the same operation as tf.math.add_n, but does not wait for all of its inputs to be ready before beginning to sum. This approach can save memory if inputs are ready at different times, since the minimum temporary storage required is proportional to the output size rather than the inputs' size.

`accumulate_n` is differentiable (but was not prior to TensorFlow 1.7).
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> inputs
A list of `Tensor` objects, each with same shape and type.
TensorShape shape
Expected shape of elements of `inputs` (optional). Also controls the output shape of this op, which may affect type inference in other ops. A value of `None` means "infer the input shape from the shapes in `inputs`".
PythonClassContainer tensor_dtype
Expected data type of `inputs` (optional). A value of `None` means "infer the input dtype from `inputs[0]`".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 0], [0, 6]])
tf.math.accumulate_n([a, b, a])  # [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
# [[7, 4],
#  [6, 14]]

Tensor accumulate_n(ValueTuple<PythonClassContainer, PythonClassContainer> inputs, IEnumerable<object> shape, PythonClassContainer tensor_dtype, string name)

Returns the element-wise sum of a list of tensors.

Optionally, pass `shape` and `tensor_dtype` for shape and type checking; otherwise, these are inferred.

`accumulate_n` performs the same operation as tf.math.add_n, but does not wait for all of its inputs to be ready before beginning to sum. This approach can save memory if inputs are ready at different times, since the minimum temporary storage required is proportional to the output size rather than the inputs' size.

`accumulate_n` is differentiable (but was not prior to TensorFlow 1.7).
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> inputs
A list of `Tensor` objects, each with same shape and type.
IEnumerable<object> shape
Expected shape of elements of `inputs` (optional). Also controls the output shape of this op, which may affect type inference in other ops. A value of `None` means "infer the input shape from the shapes in `inputs`".
PythonClassContainer tensor_dtype
Expected data type of `inputs` (optional). A value of `None` means "infer the input dtype from `inputs[0]`".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 0], [0, 6]])
tf.math.accumulate_n([a, b, a])  # [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
# [[7, 4],
#  [6, 14]]

Tensor accumulate_n(IEnumerable<IGraphNodeBase> inputs, TensorShape shape, PythonClassContainer tensor_dtype, string name)

Returns the element-wise sum of a list of tensors.

Optionally, pass `shape` and `tensor_dtype` for shape and type checking; otherwise, these are inferred.

`accumulate_n` performs the same operation as tf.math.add_n, but does not wait for all of its inputs to be ready before beginning to sum. This approach can save memory if inputs are ready at different times, since the minimum temporary storage required is proportional to the output size rather than the inputs' size.

`accumulate_n` is differentiable (but was not prior to TensorFlow 1.7).
Parameters
IEnumerable<IGraphNodeBase> inputs
A list of `Tensor` objects, each with same shape and type.
TensorShape shape
Expected shape of elements of `inputs` (optional). Also controls the output shape of this op, which may affect type inference in other ops. A value of `None` means "infer the input shape from the shapes in `inputs`".
PythonClassContainer tensor_dtype
Expected data type of `inputs` (optional). A value of `None` means "infer the input dtype from `inputs[0]`".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 0], [0, 6]])
tf.math.accumulate_n([a, b, a])  # [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
# [[7, 4],
#  [6, 14]]

object accumulate_n_dyn(object inputs, object shape, object tensor_dtype, object name)

Returns the element-wise sum of a list of tensors.

Optionally, pass `shape` and `tensor_dtype` for shape and type checking; otherwise, these are inferred.

`accumulate_n` performs the same operation as tf.math.add_n, but does not wait for all of its inputs to be ready before beginning to sum. This approach can save memory if inputs are ready at different times, since the minimum temporary storage required is proportional to the output size rather than the inputs' size.

`accumulate_n` is differentiable (but was not prior to TensorFlow 1.7).
Parameters
object inputs
A list of `Tensor` objects, each with same shape and type.
object shape
Expected shape of elements of `inputs` (optional). Also controls the output shape of this op, which may affect type inference in other ops. A value of `None` means "infer the input shape from the shapes in `inputs`".
object tensor_dtype
Expected data type of `inputs` (optional). A value of `None` means "infer the input dtype from `inputs[0]`".
object name
A name for the operation (optional).
Returns
object
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 0], [0, 6]])
tf.math.accumulate_n([a, b, a])  # [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
# [[7, 4],
#  [6, 14]]

Tensor acos(IGraphNodeBase x, string name)

Computes acos of x element-wise.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
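As a rough illustration (a minimal sketch using the underlying Python op tf.math.acos; this example is not part of the generated documentation), the operation maps values in `[-1, 1]` to angles in `[0, π]`:

```
import tensorflow as tf

x = tf.constant([1.0, 0.5, 0.0, -1.0])
tf.math.acos(x)  # ==> [0. 1.0471976 1.5707964 3.1415927]
```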

object acos_dyn(object x, object name)

Computes acos of x element-wise.
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.

Tensor acosh(IGraphNodeBase x, string name)

Computes inverse hyperbolic cosine of x element-wise.

Given an input tensor, the function computes inverse hyperbolic cosine of every element. Input range is `[1, inf]`. It returns `nan` if the input lies outside the range.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
Show Example
x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.acosh(x)  # ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]

object acosh_dyn(object x, object name)

Computes inverse hyperbolic cosine of x element-wise.

Given an input tensor, the function computes inverse hyperbolic cosine of every element. Input range is `[1, inf]`. It returns `nan` if the input lies outside the range.
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.
Show Example
x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.acosh(x)  # ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]

Tensor add(IGraphNodeBase x, IGraphNodeBase y, string name)

Returns x + y element-wise.

*NOTE*: `math.add` supports broadcasting. `AddN` does not. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
IGraphNodeBase y
A `Tensor`. Must have the same type as `x`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.

Tensor add(IGraphNodeBase x, IGraphNodeBase y, PythonFunctionContainer name)

Returns x + y element-wise.

*NOTE*: `math.add` supports broadcasting. `AddN` does not. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
IGraphNodeBase y
A `Tensor`. Must have the same type as `x`.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
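For illustration, here is a minimal sketch (using the underlying Python op tf.math.add; not taken from the generated documentation) of the broadcasting behaviour that `AddN`/`add_n` lacks:

```
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([10, 20])    # broadcast across both rows of `a`
tf.math.add(a, b)            # [[11, 22], [13, 24]]
a + b                        # the `+` operator dispatches to the same op
```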

object add_check_numerics_ops()

Connect a tf.debugging.check_numerics to every floating point tensor.

`check_numerics` operations themselves are added for each `half`, `float`, or `double` tensor in the current default graph. For all ops in the graph, the `check_numerics` op for all of its (`half`, `float`, or `double`) inputs is guaranteed to run before the `check_numerics` op on any of its outputs.

Note: This API is not compatible with the use of tf.cond or tf.while_loop, and will raise a `ValueError` if you attempt to call it in such a graph.
Returns
object
A `group` op depending on all `check_numerics` ops added.
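A rough graph-mode usage sketch (assumptions: TF 1.x-style `tf.compat.v1` Session APIs and a graph that contains no tf.cond or tf.while_loop; this is not from the generated documentation):

```
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = tf.math.log(x)                                 # inf/nan for x <= 0
check_op = tf.compat.v1.add_check_numerics_ops()   # group of check_numerics ops

with tf.compat.v1.Session() as sess:
    # Running the check op alongside `y` raises InvalidArgumentError
    # if any half/float/double tensor in the graph contains inf or nan.
    sess.run([y, check_op], feed_dict={x: [1.0, 2.0]})
```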

object add_check_numerics_ops_dyn()

Connect a tf.debugging.check_numerics to every floating point tensor.

`check_numerics` operations themselves are added for each `half`, `float`, or `double` tensor in the current default graph. For all ops in the graph, the `check_numerics` op for all of its (`half`, `float`, or `double`) inputs is guaranteed to run before the `check_numerics` op on any of its outputs.

Note: This API is not compatible with the use of tf.cond or tf.while_loop, and will raise a `ValueError` if you attempt to call it in such a graph.
Returns
object
A `group` op depending on all `check_numerics` ops added.

object add_dyn(object x, object y, object name)

Returns x + y element-wise.

*NOTE*: `math.add` supports broadcasting. `AddN` does not. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
object y
A `Tensor`. Must have the same type as `x`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.

Tensor add_n(object inputs, string name)

Adds all input tensors element-wise.

Converts `IndexedSlices` objects into dense tensors prior to adding.

tf.math.add_n performs the same operation as tf.math.accumulate_n, but it waits for all of its inputs to be ready before beginning to sum. This buffering can result in higher memory consumption when inputs are ready at different times, since the minimum temporary storage required is proportional to the input size rather than the output size.

This op does not [broadcast]( https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html) its inputs. If you need broadcasting, use tf.math.add (or the `+` operator) instead.
Parameters
object inputs
A list of tf.Tensor or tf.IndexedSlices objects, each with same shape and type.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[3, 5], [4, 8]])
b = tf.constant([[1, 6], [2, 9]])
tf.math.add_n([a, b, a])  # [[7, 16], [10, 25]]

Tensor add_n(PythonFunctionContainer inputs, string name)

Adds all input tensors element-wise.

Converts `IndexedSlices` objects into dense tensors prior to adding.

tf.math.add_n performs the same operation as tf.math.accumulate_n, but it waits for all of its inputs to be ready before beginning to sum. This buffering can result in higher memory consumption when inputs are ready at different times, since the minimum temporary storage required is proportional to the input size rather than the output size.

This op does not [broadcast]( https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html) its inputs. If you need broadcasting, use tf.math.add (or the `+` operator) instead.
Parameters
PythonFunctionContainer inputs
A list of tf.Tensor or tf.IndexedSlices objects, each with same shape and type.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[3, 5], [4, 8]])
b = tf.constant([[1, 6], [2, 9]])
tf.math.add_n([a, b, a])  # [[7, 16], [10, 25]]

object add_n_dyn(object inputs, object name)

Adds all input tensors element-wise.

Converts `IndexedSlices` objects into dense tensors prior to adding.

tf.math.add_n performs the same operation as tf.math.accumulate_n, but it waits for all of its inputs to be ready before beginning to sum. This buffering can result in higher memory consumption when inputs are ready at different times, since the minimum temporary storage required is proportional to the input size rather than the output size.

This op does not [broadcast]( https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html) its inputs. If you need broadcasting, use tf.math.add (or the `+` operator) instead.
Parameters
object inputs
A list of tf.Tensor or tf.IndexedSlices objects, each with same shape and type.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of same shape and type as the elements of `inputs`.
Show Example
a = tf.constant([[3, 5], [4, 8]])
b = tf.constant([[1, 6], [2, 9]])
tf.math.add_n([a, b, a])  # [[7, 16], [10, 25]]

void add_to_collection(Saver name, object value)

Wrapper for `Graph.add_to_collection()` using the default graph.

See tf.Graph.add_to_collection for more details.
Parameters
Saver name
The key for the collection. For example, the `GraphKeys` class contains many standard names for collections.
object value
The value to add to the collection.

void add_to_collection(Saver name, IEnumerable<object> value)

Wrapper for `Graph.add_to_collection()` using the default graph.

See tf.Graph.add_to_collection for more details.
Parameters
Saver name
The key for the collection. For example, the `GraphKeys` class contains many standard names for collections.
IEnumerable<object> value
The value to add to the collection.

void add_to_collection(IEnumerable<string> name, object value)

Wrapper for `Graph.add_to_collection()` using the default graph.

See tf.Graph.add_to_collection for more details.
Parameters
IEnumerable<string> name
The key for the collection. For example, the `GraphKeys` class contains many standard names for collections.
object value
The value to add to the collection.

void add_to_collection(IEnumerable<string> name, IEnumerable<object> value)

Wrapper for `Graph.add_to_collection()` using the default graph.

See tf.Graph.add_to_collection for more details.
Parameters
IEnumerable<string> name
The key for the collection. For example, the `GraphKeys` class contains many standard names for collections.
IEnumerable<object> value
The value to add to the collection.
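An illustrative sketch of the collection mechanism these overloads wrap (assuming the `tf.compat.v1` graph-collection API in graph mode; not part of the generated documentation):

```
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable("w", shape=[2, 2])

# Register the variable under a custom collection key in the default graph.
tf.compat.v1.add_to_collection("my_vars", w)

# Later, retrieve everything stored under that key.
tf.compat.v1.get_collection("my_vars")   # [<tf.Variable 'w:0' shape=(2, 2) ...>]
```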

void add_to_collections(ValueTuple names, object value)

Wrapper for `Graph.add_to_collections()` using the default graph.

See tf.Graph.add_to_collections for more details.
Parameters
ValueTuple names
The key for the collections. The `GraphKeys` class contains many standard names for collections.
object value
The value to add to the collections.

void add_to_collections(ValueTuple names, IEnumerable<IGraphNodeBase> value)

Wrapper for `Graph.add_to_collections()` using the default graph.

See tf.Graph.add_to_collections for more details.
Parameters
ValueTuple names
The key for the collections. The `GraphKeys` class contains many standard names for collections.
IEnumerable<IGraphNodeBase> value
The value to add to the collections.
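A similar sketch for the plural form (again assuming the `tf.compat.v1` API in graph mode; not from the generated documentation), which registers one value under several collection keys in a single call:

```
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

t = tf.constant([1.0, 2.0])

# One call adds the tensor to both collections.
tf.compat.v1.add_to_collections(["inputs", "summaries"], t)

tf.compat.v1.get_collection("inputs")     # [<tf.Tensor 'Const:0' ...>]
tf.compat.v1.get_collection("summaries")  # [<tf.Tensor 'Const:0' ...>]
```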

Tensor adjust_hsv_in_yiq(IGraphNodeBase images, IGraphNodeBase delta_h, IGraphNodeBase scale_s, IGraphNodeBase scale_v, string name)

object adjust_hsv_in_yiq_dyn(object images, object delta_h, object scale_s, object scale_v, object name)

object all_variables()

Use `tf.compat.v1.global_variables` instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Please use tf.global_variables instead.

object all_variables_dyn()

Use `tf.compat.v1.global_variables` instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Please use tf.global_variables instead.

Tensor angle(IGraphNodeBase input, string name)

Returns the element-wise argument of a complex (or real) tensor.

Given a tensor `input`, this operation returns a tensor of type `float` that is the argument of each element in `input` considered as a complex number.

The elements in `input` are considered to be complex numbers of the form `a + bj`, where *a* is the real part and *b* is the imaginary part. If `input` is real then *b* is zero by definition.

The argument returned by this function is of the form `atan2(b, a)`. If `input` is real, a tensor of all zeros is returned.

For example:

```
input = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)
tf.math.angle(input).numpy()
# ==> array([2.0131705, 1.056345 ], dtype=float32)
```
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float`, `double`, `complex64`, `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `float32` or `float64`.

object angle_dyn(object input, object name)

Returns the element-wise argument of a complex (or real) tensor.

Given a tensor `input`, this operation returns a tensor of type `float` that is the argument of each element in `input` considered as a complex number.

The elements in `input` are considered to be complex numbers of the form `a + bj`, where *a* is the real part and *b* is the imaginary part. If `input` is real then *b* is zero by definition.

The argument returned by this function is of the form `atan2(b, a)`. If `input` is real, a tensor of all zeros is returned.

For example:

```
input = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)
tf.math.angle(input).numpy()
# ==> array([2.0131705, 1.056345 ], dtype=float32)
```
Parameters
object input
A `Tensor`. Must be one of the following types: `float`, `double`, `complex64`, `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of type `float32` or `float64`.

Tensor arg_max(IGraphNodeBase input, IGraphNodeBase dimension, ndarray output_type, string name)

Returns the index with the largest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase dimension
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor arg_max(IGraphNodeBase input, IGraphNodeBase dimension, ImplicitContainer<T> output_type, string name)

Returns the index with the largest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase dimension
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

object arg_max_dyn(object input, object dimension, ImplicitContainer<T> output_type, object name)

Returns the index with the largest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
object dimension
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor arg_min(IGraphNodeBase input, IGraphNodeBase dimension, ImplicitContainer<T> output_type, string name)

Returns the index with the smallest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase dimension
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor arg_min(IGraphNodeBase input, IGraphNodeBase dimension, ndarray output_type, string name)

Returns the index with the smallest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase dimension
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

object arg_min_dyn(object input, object dimension, ImplicitContainer<T> output_type, object name)

Returns the index with the smallest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
object dimension
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor argmax(IEnumerable<IGraphNodeBase> input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<IGraphNodeBase> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(IEnumerable<IGraphNodeBase> input, int axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<IGraphNodeBase> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
int axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(IEnumerable<IGraphNodeBase> input, int axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<IGraphNodeBase> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
int axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(IEnumerable<IGraphNodeBase> input, IGraphNodeBase axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<IGraphNodeBase> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(object input, IGraphNodeBase axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(IEnumerable<IGraphNodeBase> input, IGraphNodeBase axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<IGraphNodeBase> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(object input, IGraphNodeBase axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
IGraphNodeBase axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(object input, int axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
int axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(object input, int axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
int axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(IEnumerable<IGraphNodeBase> input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<IGraphNodeBase> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(object input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmax(object input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

object argmax_dyn(object input, object axis, object name, object dimension, ImplicitContainer<T> output_type)

Returns the index with the largest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
object axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
object name
A name for the operation (optional).
object dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
object
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input=a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0

Tensor argmin(ndarray input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
ndarray input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor argmin(IEnumerable<int> input, Nullable<int> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<int> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Nullable<int> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor argmin(IEnumerable<int> input, Nullable<int> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<int> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Nullable<int> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor argmin(IGraphNodeBase input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor argmin(IGraphNodeBase input, Nullable<int> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Nullable<int> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor argmin(IGraphNodeBase input, Nullable<int> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Nullable<int> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0

Tensor argmin(IEnumerable<int> input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<int> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
Deprecated alias for `axis`; use `axis` instead.
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.math.argmin(input = a)
            c = tf.keras.backend.eval(b)
            # c = 0
            # here a[0] = 1 which is the smallest element of a across axis 0 

Tensor argmin(ndarray input, Nullable<int> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
ndarray input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Nullable<int> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
Deprecated alias for `axis`; use `axis` instead.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.math.argmin(input = a)
            c = tf.keras.backend.eval(b)
            # c = 0
            # here a[0] = 1 which is the smallest element of a across axis 0 

Tensor argmin(ndarray input, Nullable<int> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
ndarray input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Nullable<int> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
Deprecated alias for `axis`; use `axis` instead.
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.math.argmin(input = a)
            c = tf.keras.backend.eval(b)
            # c = 0
            # here a[0] = 1 which is the smallest element of a across axis 0 

Tensor argmin(ndarray input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ndarray output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
ndarray input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
Deprecated alias for `axis`; use `axis` instead.
ndarray output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.math.argmin(input = a)
            c = tf.keras.backend.eval(b)
            # c = 0
            # here a[0] = 1 which is the smallest element of a across axis 0 

Tensor argmin(IEnumerable<int> input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IEnumerable<int> input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
Deprecated alias for `axis`; use `axis` instead.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.math.argmin(input = a)
            c = tf.keras.backend.eval(b)
            # c = 0
            # here a[0] = 1 which is the smallest element of a across axis 0 

Tensor argmin(IGraphNodeBase input, ValueTuple<object, ndarray, object> axis, string name, Nullable<int> dimension, ImplicitContainer<T> output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
ValueTuple<object, ndarray, object> axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
string name
A name for the operation (optional).
Nullable<int> dimension
Deprecated alias for `axis`; use `axis` instead.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
Tensor
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.math.argmin(input = a)
            c = tf.keras.backend.eval(b)
            # c = 0
            # here a[0] = 1 which is the smallest element of a across axis 0 

object argmin_dyn(object input, object axis, object name, object dimension, ImplicitContainer<T> output_type)

Returns the index with the smallest value across axes of a tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. Instructions for updating: Use the `axis` argument instead

Note that in case of ties the identity of the return value is not guaranteed.

Usage:
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
object axis
A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
object name
A name for the operation (optional).
object dimension
Deprecated alias for `axis`; use `axis` instead.
ImplicitContainer<T> output_type
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
Returns
object
A `Tensor` of type `output_type`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.math.argmin(input = a)
            c = tf.keras.backend.eval(b)
            # c = 0
            # here a[0] = 1 which is the smallest element of a across axis 0 

object argsort(int values, int axis, string direction, bool stable, string name)

Returns the indices of a tensor that give its sorted order along an axis.

For a 1D tensor, `tf.gather(values, tf.argsort(values))` is equivalent to `tf.sort(values)`. For higher dimensions, the output has the same shape as `values`, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position.

Usage:
Parameters
int values
1-D or higher numeric `Tensor`.
int axis
The axis along which to sort. The default is -1, which sorts the last axis.
string direction
The direction in which to sort the values (`'ASCENDING'` or `'DESCENDING'`).
bool stable
If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass `stable=True` for forwards compatibility.
string name
Optional name for the operation.
Returns
object
An int32 `Tensor` with the same shape as `values`. The indices that would sort each slice of the given `values` along the given `axis`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.argsort(a,axis=-1,direction='ASCENDING',stable=False,name=None)
            c = tf.keras.backend.eval(b)
            # Here, c = [0 3 1 2 5 4] 
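
For reference, the same call on the list above with `direction='DESCENDING'` would produce the reversed ordering (a minimal sketch):

import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.argsort(a, axis=-1, direction='DESCENDING', stable=False, name=None)
            c = tf.keras.backend.eval(b)
            # Here, c = [4 5 2 1 3 0]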

object argsort(IEnumerable<object> values, int axis, string direction, bool stable, string name)

Returns the indices of a tensor that give its sorted order along an axis.

For a 1D tensor, `tf.gather(values, tf.argsort(values))` is equivalent to `tf.sort(values)`. For higher dimensions, the output has the same shape as `values`, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position.

Usage:
Parameters
IEnumerable<object> values
1-D or higher numeric `Tensor`.
int axis
The axis along which to sort. The default is -1, which sorts the last axis.
string direction
The direction in which to sort the values (`'ASCENDING'` or `'DESCENDING'`).
bool stable
If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass `stable=True` for forwards compatibility.
string name
Optional name for the operation.
Returns
object
An int32 `Tensor` with the same shape as `values`. The indices that would sort each slice of the given `values` along the given `axis`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.argsort(a,axis=-1,direction='ASCENDING',stable=False,name=None)
            c = tf.keras.backend.eval(b)
            # Here, c = [0 3 1 2 5 4] 

object argsort(CompositeTensor values, int axis, string direction, bool stable, string name)

Returns the indices of a tensor that give its sorted order along an axis.

For a 1D tensor, `tf.gather(values, tf.argsort(values))` is equivalent to `tf.sort(values)`. For higher dimensions, the output has the same shape as `values`, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position.

Usage:
Parameters
CompositeTensor values
1-D or higher numeric `Tensor`.
int axis
The axis along which to sort. The default is -1, which sorts the last axis.
string direction
The direction in which to sort the values (`'ASCENDING'` or `'DESCENDING'`).
bool stable
If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass `stable=True` for forwards compatibility.
string name
Optional name for the operation.
Returns
object
An int32 `Tensor` with the same shape as `values`. The indices that would sort each slice of the given `values` along the given `axis`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.argsort(a,axis=-1,direction='ASCENDING',stable=False,name=None)
            c = tf.keras.backend.eval(b)
            # Here, c = [0 3 1 2 5 4] 

object argsort(ValueTuple<PythonClassContainer, PythonClassContainer> values, int axis, string direction, bool stable, string name)

Returns the indices of a tensor that give its sorted order along an axis.

For a 1D tensor, `tf.gather(values, tf.argsort(values))` is equivalent to `tf.sort(values)`. For higher dimensions, the output has the same shape as `values`, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position.

Usage:
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> values
1-D or higher numeric `Tensor`.
int axis
The axis along which to sort. The default is -1, which sorts the last axis.
string direction
The direction in which to sort the values (`'ASCENDING'` or `'DESCENDING'`).
bool stable
If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass `stable=True` for forwards compatibility.
string name
Optional name for the operation.
Returns
object
An int32 `Tensor` with the same shape as `values`. The indices that would sort each slice of the given `values` along the given `axis`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.argsort(a,axis=-1,direction='ASCENDING',stable=False,name=None)
            c = tf.keras.backend.eval(b)
            # Here, c = [0 3 1 2 5 4] 

object argsort(IGraphNodeBase values, int axis, string direction, bool stable, string name)

Returns the indices of a tensor that give its sorted order along an axis.

For a 1D tensor, `tf.gather(values, tf.argsort(values))` is equivalent to `tf.sort(values)`. For higher dimensions, the output has the same shape as `values`, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position.

Usage:
Parameters
IGraphNodeBase values
1-D or higher numeric `Tensor`.
int axis
The axis along which to sort. The default is -1, which sorts the last axis.
string direction
The direction in which to sort the values (`'ASCENDING'` or `'DESCENDING'`).
bool stable
If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass `stable=True` for forwards compatibility.
string name
Optional name for the operation.
Returns
object
An int32 `Tensor` with the same shape as `values`. The indices that would sort each slice of the given `values` along the given `axis`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.argsort(a,axis=-1,direction='ASCENDING',stable=False,name=None)
            c = tf.keras.backend.eval(b)
            # Here, c = [0 3 1 2 5 4] 

object argsort_dyn(object values, ImplicitContainer<T> axis, ImplicitContainer<T> direction, ImplicitContainer<T> stable, object name)

Returns the indices of a tensor that give its sorted order along an axis.

For a 1D tensor, `tf.gather(values, tf.argsort(values))` is equivalent to `tf.sort(values)`. For higher dimensions, the output has the same shape as `values`, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position.

Usage:
Parameters
object values
1-D or higher numeric `Tensor`.
ImplicitContainer<T> axis
The axis along which to sort. The default is -1, which sorts the last axis.
ImplicitContainer<T> direction
The direction in which to sort the values (`'ASCENDING'` or `'DESCENDING'`).
ImplicitContainer<T> stable
If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass `stable=True` for forwards compatibility.
object name
Optional name for the operation.
Returns
object
An int32 `Tensor` with the same shape as `values`. The indices that would sort each slice of the given `values` along the given `axis`.
Show Example
import tensorflow as tf
            a = [1, 10, 26.9, 2.8, 166.32, 62.3]
            b = tf.argsort(a,axis=-1,direction='ASCENDING',stable=False,name=None)
            c = tf.keras.backend.eval(b)
            # Here, c = [0 3 1 2 5 4] 

DType as_dtype(object type_value)

Converts the given `type_value` to a `DType`.
Parameters
object type_value
A value that can be converted to a tf.DType object. This may currently be a tf.DType object, a [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), a string type name, or a `numpy.dtype`.
Returns
DType
A `DType` corresponding to `type_value`.
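
A minimal Python sketch of the underlying tf.as_dtype conversion, covering the accepted input kinds listed above (not specific to this binding):

import tensorflow as tf
            import numpy as np
            tf.as_dtype("float32")   # tf.float32, from a string type name
            tf.as_dtype(np.int64)    # tf.int64, from a numpy dtype
            tf.as_dtype(tf.bool)     # tf.bool, a DType passes through unchanged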

DType as_dtype(PythonFunctionContainer type_value)

Converts the given `type_value` to a `DType`.
Parameters
PythonFunctionContainer type_value
A value that can be converted to a tf.DType object. This may currently be a tf.DType object, a [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), a string type name, or a `numpy.dtype`.
Returns
DType
A `DType` corresponding to `type_value`.

object as_dtype_dyn(object type_value)

Converts the given `type_value` to a `DType`.
Parameters
object type_value
A value that can be converted to a tf.DType object. This may currently be a tf.DType object, a [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), a string type name, or a `numpy.dtype`.
Returns
object
A `DType` corresponding to `type_value`.

Tensor as_string(IGraphNodeBase input, int precision, bool scientific, bool shortest, int width, string fill, string name)

Converts each entry in the given tensor to strings.

Supports many numeric types and boolean.

For Unicode, see the [Working with Unicode text](https://www.tensorflow.org/tutorials/representation/unicode) tutorial.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `float32`, `float64`, `bool`.
int precision
An optional `int`. Defaults to `-1`. The post-decimal precision to use for floating point numbers. Only used if precision > -1.
bool scientific
An optional `bool`. Defaults to `False`. Use scientific notation for floating point numbers.
bool shortest
An optional `bool`. Defaults to `False`. Use shortest representation (either scientific or standard) for floating point numbers.
int width
An optional `int`. Defaults to `-1`. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1.
string fill
An optional `string`. Defaults to `""`. The value to pad if width > -1. If empty, pads with spaces. Another typical value is '0'. String cannot be longer than 1 character.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `string`.
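
A minimal Python sketch of the underlying tf.as_string operation, illustrating `precision`, `width` and `fill` (values chosen only for illustration):

import tensorflow as tf
            x = tf.constant([3.1415926, 2.71828])
            tf.as_string(x, precision=2)                        # [b'3.14', b'2.72']
            tf.as_string(tf.constant([42]), width=5, fill='0')  # [b'00042']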

object as_string_dyn(object input, ImplicitContainer<T> precision, ImplicitContainer<T> scientific, ImplicitContainer<T> shortest, ImplicitContainer<T> width, ImplicitContainer<T> fill, object name)

Converts each entry in the given tensor to strings.

Supports many numeric types and boolean.

For Unicode, see the [Working with Unicode text](https://www.tensorflow.org/tutorials/representation/unicode) tutorial.
Parameters
object input
A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `float32`, `float64`, `bool`.
ImplicitContainer<T> precision
An optional `int`. Defaults to `-1`. The post-decimal precision to use for floating point numbers. Only used if precision > -1.
ImplicitContainer<T> scientific
An optional `bool`. Defaults to `False`. Use scientific notation for floating point numbers.
ImplicitContainer<T> shortest
An optional `bool`. Defaults to `False`. Use shortest representation (either scientific or standard) for floating point numbers.
ImplicitContainer<T> width
An optional `int`. Defaults to `-1`. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1.
ImplicitContainer<T> fill
An optional `string`. Defaults to `""`. The value to pad if width > -1. If empty, pads with spaces. Another typical value is '0'. String cannot be longer than 1 character.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of type `string`.

Tensor asin(IGraphNodeBase x, string name)

Computes the trigonometric inverse sine of x element-wise.

The tf.math.asin operation returns the inverse of tf.math.sin, such that if `y = tf.math.sin(x)` then `x = tf.math.asin(y)`.

**Note**: The output of tf.math.asin will lie within the invertible range of sine, i.e., [-pi/2, pi/2].
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
Show Example
# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
            x = tf.constant([1.047, 0.785])
            y = tf.math.sin(x) # [0.8659266, 0.7068252]
            tf.math.asin(y) # [1.047, 0.785] = x

object asin_dyn(object x, object name)

Computes the trigonometric inverse sine of x element-wise.

The tf.math.asin operation returns the inverse of tf.math.sin, such that if `y = tf.math.sin(x)` then `x = tf.math.asin(y)`.

**Note**: The output of tf.math.asin will lie within the invertible range of sine, i.e., [-pi/2, pi/2].
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.
Show Example
# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
            x = tf.constant([1.047, 0.785])
            y = tf.math.sin(x) # [0.8659266, 0.7068252]
            tf.math.asin(y) # [1.047, 0.785] = x

Tensor asinh(IGraphNodeBase x, string name)

Computes inverse hyperbolic sine of x element-wise.

Given an input tensor, this function computes the inverse hyperbolic sine of every element in the tensor. Both input and output have a range of `[-inf, inf]`.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
Show Example
x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")])
            tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf] 

object asinh_dyn(object x, object name)

Computes inverse hyperbolic sine of x element-wise.

Given an input tensor, this function computes the inverse hyperbolic sine of every element in the tensor. Both input and output have a range of `[-inf, inf]`.
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.
Show Example
x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")])
            tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf] 

object Assert(IEnumerable<object> condition, ValueTuple<string, IGraphNodeBase> data, int summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, object data, Nullable<double> summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, object data, int summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, object data, Nullable<double> summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, object data, int summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, IEnumerable<object> data, Nullable<double> summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, IEnumerable<object> data, Nullable<double> summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, ValueTuple<string, IGraphNodeBase> data, Nullable<double> summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, IEnumerable<object> data, Nullable<double> summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, IEnumerable<object> data, int summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, IEnumerable<object> data, int summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, object data, int summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, object data, int summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, IEnumerable<object> data, int summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, object data, Nullable<double> summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, IEnumerable<object> data, int summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, object data, Nullable<double> summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, ValueTuple<string, IGraphNodeBase> data, int summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, ValueTuple<string, IGraphNodeBase> data, Nullable<double> summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, ValueTuple<string, IGraphNodeBase> data, int summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, ValueTuple<string, IGraphNodeBase> data, Nullable<double> summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(object condition, ValueTuple<string, IGraphNodeBase> data, Nullable<double> summarize, PythonFunctionContainer name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
PythonFunctionContainer name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, IEnumerable<object> data, Nullable<double> summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
IEnumerable<object> data
The tensors to print out when condition is false.
Nullable<double> summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert(IEnumerable<object> condition, ValueTuple<string, IGraphNodeBase> data, int summarize, string name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
IEnumerable<object> condition
The condition to evaluate.
ValueTuple<string, IGraphNodeBase> data
The tensors to print out when condition is false.
int summarize
Print this many entries of each tensor.
string name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object Assert_dyn(object condition, object data, object summarize, object name)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:
Parameters
object condition
The condition to evaluate.
object data
The tensors to print out when condition is false.
object summarize
Print this many entries of each tensor.
object name
A name for this operation (optional).
Returns
object

Show Example
# Ensure maximum element of x is smaller or equal to 1
            assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
            with tf.control_dependencies([assert_op]):
             ... code using x... 

object assert_equal(PythonClassContainer x, object y, IEnumerable<object> data, Nullable<int> summarize, object message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(PythonClassContainer x, object y, IEnumerable<object> data, Nullable<int> summarize, string message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(PythonClassContainer x, object y, IEnumerable<object> data, Nullable<int> summarize, IGraphNodeBase message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IGraphNodeBase message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(object x, object y, IEnumerable<object> data, Nullable<int> summarize, int message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
int message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(object x, object y, IEnumerable<object> data, Nullable<int> summarize, double message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
double message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(PythonClassContainer x, object y, IEnumerable<object> data, Nullable<int> summarize, double message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
double message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(object x, object y, IEnumerable<object> data, Nullable<int> summarize, object message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(object x, object y, IEnumerable<object> data, Nullable<int> summarize, IGraphNodeBase message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IGraphNodeBase message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(PythonClassContainer x, object y, IEnumerable<object> data, Nullable<int> summarize, int message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
int message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal(object x, object y, IEnumerable<object> data, Nullable<int> summarize, string message, string name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_equal_dyn(object x, object y, object data, object summarize, object message, object name)

Assert the condition `x == y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x == y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
              output = tf.reduce_sum(x) 
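
For reference, the snippet below is a minimal Python sketch of the underlying tf.compat.v1.assert_equal op that the assert_equal overloads above bind to; the constant values, the `message` prefix, and the `summarize` count are illustrative only.

import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
y = tf.constant([[1, 2], [3, 4]])

# In eager mode the check runs immediately; in graph mode, wrapping the
# dependent computation in tf.control_dependencies makes the assert run.
with tf.control_dependencies([tf.compat.v1.assert_equal(
        x, y, message="x and y differ: ", summarize=4)]):
    output = tf.reduce_sum(x)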

object assert_greater(int x, IndexedSlices y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
IndexedSlices y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(IGraphNodeBase x, int y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(IGraphNodeBase x, ValueTuple<PythonClassContainer, PythonClassContainer> y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
ValueTuple<PythonClassContainer, PythonClassContainer> y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(int x, double y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
double y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(int x, int y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(IGraphNodeBase x, IGraphNodeBase y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(IGraphNodeBase x, double y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
double y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(int x, IGraphNodeBase y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(IGraphNodeBase x, IndexedSlices y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
IndexedSlices y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater(int x, ValueTuple<PythonClassContainer, PythonClassContainer> y, object data, object summarize, string message, string name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
ValueTuple<PythonClassContainer, PythonClassContainer> y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_dyn(object x, object y, object data, object summarize, object message, object name)

Assert the condition `x > y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_greater".
Returns
object
Op that raises `InvalidArgumentError` if `x > y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
              output = tf.reduce_sum(x) 
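
As an illustration of the failure path, the following Python sketch (not generated from the binding) triggers the `InvalidArgumentError` that assert_greater raises when the condition does not hold; the tensor values and message text are made up for the example.

import tensorflow as tf

x = tf.constant([2.0, 3.0])
y = tf.constant([1.0, 5.0])   # x[1] > y[1] does not hold

try:
    # Executing eagerly, the failed check raises immediately.
    tf.compat.v1.assert_greater(x, y, summarize=2,
                                message="expected x > y element-wise: ")
except tf.errors.InvalidArgumentError as err:
    print(err.message)   # prefixed message plus the offending entries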

object assert_greater_equal(object x, object y, object data, object summarize, string message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, object data, object summarize, IGraphNodeBase message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
IGraphNodeBase message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, object data, object summarize, TensorShape message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
TensorShape message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, IEnumerable<object> data, object summarize, TensorShape message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
TensorShape message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, IEnumerable<object> data, object summarize, IEnumerable<int> message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
IEnumerable<int> message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, object data, object summarize, IEnumerable<int> message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
IEnumerable<int> message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, IEnumerable<object> data, object summarize, string message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, IEnumerable<object> data, object summarize, IGraphNodeBase message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
IGraphNodeBase message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, IEnumerable<object> data, object summarize, int message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
int message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal(object x, object y, object data, object summarize, int message, string name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
int message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_greater_equal_dyn(object x, object y, object data, object summarize, object message, object name)

Assert the condition `x >= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_greater_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x >= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
              output = tf.reduce_sum(x) 
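
A common use is bounding a tensor from below with a broadcast scalar. The Python sketch below assumes the usual tf.compat.v1 API that these overloads wrap; the tensor and message are illustrative.

import tensorflow as tf

x = tf.constant([0, 1, 2, 3])

# `y` may be a scalar; it is broadcast against `x` before the comparison runs.
with tf.control_dependencies([tf.compat.v1.assert_greater_equal(
        x, 0, message="expected non-negative entries: ")]):
    output = tf.reduce_prod(x)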

object assert_integer(IEnumerable<IGraphNodeBase> x, string message, string name)

Assert that `x` is of integer dtype.

Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
`Tensor` whose basetype is integer and is not quantized.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_integer".
Returns
object
A `no_op` that does nothing. Type can be determined statically.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_integer(x)]):
              output = tf.reduce_sum(x) 

object assert_integer(object x, string message, string name)

Assert that `x` is of integer dtype.

Example of adding a dependency to an operation:
Parameters
object x
`Tensor` whose basetype is integer and is not quantized.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_integer".
Returns
object
A `no_op` that does nothing. Type can be determined statically.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_integer(x)]):
              output = tf.reduce_sum(x) 

object assert_integer_dyn(object x, object message, object name)

Assert that `x` is of integer dtype.

Example of adding a dependency to an operation:
Parameters
object x
`Tensor` whose basetype is integer and is not quantized.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_integer".
Returns
object
A `no_op` that does nothing. Type can be determined statically.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_integer(x)]):
              output = tf.reduce_sum(x) 
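
Because the dtype check is static, a failed assert_integer surfaces at graph-construction time rather than at run time. A minimal Python sketch of the wrapped op, with an illustrative tensor:

import tensorflow as tf

x = tf.constant([1, 2, 3], dtype=tf.int64)

# Raises TypeError during construction if `x` is not integral;
# otherwise the returned no_op costs nothing at run time.
with tf.control_dependencies([tf.compat.v1.assert_integer(
        x, message="x must have an integer dtype")]):
    output = tf.reduce_sum(x)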

object assert_less(object x, object y, object data, Nullable<int> summarize, string message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, IEnumerable<object> data, Nullable<int> summarize, IEnumerable<object> message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(object x, object y, IEnumerable<object> data, Nullable<int> summarize, string message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(object x, object y, object data, Nullable<int> summarize, string message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(object x, object y, IEnumerable<object> data, Nullable<int> summarize, string message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(object x, object y, object data, Nullable<int> summarize, IEnumerable<object> message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(object x, object y, object data, Nullable<int> summarize, IEnumerable<object> message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, IEnumerable<object> data, Nullable<int> summarize, string message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, IEnumerable<object> data, Nullable<int> summarize, IEnumerable<object> message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(object x, object y, IEnumerable<object> data, Nullable<int> summarize, IEnumerable<object> message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, IEnumerable<object> data, Nullable<int> summarize, string message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(object x, object y, IEnumerable<object> data, Nullable<int> summarize, IEnumerable<object> message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, object data, Nullable<int> summarize, IEnumerable<object> message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, object data, Nullable<int> summarize, string message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, object data, Nullable<int> summarize, IEnumerable<object> message, PythonFunctionContainer name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
IEnumerable<object> message
A string to prefix to the default message.
PythonFunctionContainer name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less(IEnumerable<IGraphNodeBase> x, object y, object data, Nullable<int> summarize, string message, string name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less_dyn(object x, object y, object data, object summarize, object message, object name)

Assert the condition `x < y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_less".
Returns
object
Op that raises `InvalidArgumentError` if `x < y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
              output = tf.reduce_sum(x) 
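
The `summarize` argument bounds how many offending entries appear in the error text. The Python sketch below (values and message invented for illustration) shows a failing assert_less caught in eager mode:

import tensorflow as tf

x = tf.constant([1, 2, 3, 4, 5])
y = tf.constant([9, 9, 0, 9, 9])   # x[2] < y[2] does not hold

try:
    tf.compat.v1.assert_less(x, y, summarize=3,
                             message="x must stay strictly below y: ")
except tf.errors.InvalidArgumentError as err:
    # Only the first `summarize` entries of each tensor are printed.
    print(err.message)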

object assert_less_equal(object x, object y, IEnumerable<object> data, object summarize, object message, string name)

Assert the condition `x <= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] <= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x <= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less_equal(object x, IEnumerable<object> y, IEnumerable<object> data, object summarize, object message, string name)

Assert the condition `x <= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] <= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
IEnumerable<object> y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_less_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x <= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_less_equal_dyn(object x, object y, object data, object summarize, object message, object name)

Assert the condition `x <= y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] <= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_less_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x <= y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_less_equal(x, y)]):
              output = tf.reduce_sum(x) 
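
A typical use is validating an upper bound before a downstream computation. The Python sketch below assumes the wrapped tf.compat.v1.assert_less_equal op; the `probs` tensor is illustrative.

import tensorflow as tf

probs = tf.constant([0.1, 0.7, 0.2])

# Guard the log on the bound probs <= 1.0 (the scalar is broadcast).
with tf.control_dependencies([tf.compat.v1.assert_less_equal(
        probs, 1.0, message="probabilities must not exceed 1: ")]):
    log_probs = tf.math.log(probs)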

object assert_near(IEnumerable<object> x, IGraphNodeBase y, Nullable<double> rtol, Nullable<double> atol, object data, object summarize, string message, string name)

Assert the condition `x` and `y` are close element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have

```tf.abs(x[i] - y[i]) <= atol + rtol * tf.abs(y[i])```.

If both `x` and `y` are empty, this is trivially satisfied.

The default `atol` and `rtol` are `10 * eps`, where `eps` is the smallest representable positive number such that `1 + eps != 1`. This is about `1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`. See `numpy.finfo`.

Example of adding a dependency to an operation:
Parameters
IEnumerable<object> x
Float or complex `Tensor`.
IGraphNodeBase y
Float or complex `Tensor`, same `dtype` as, and broadcastable to, `x`.
Nullable<double> rtol
`Tensor`. Same `dtype` as, and broadcastable to, `x`. The relative tolerance. Default is `10 * eps`.
Nullable<double> atol
`Tensor`. Same `dtype` as, and broadcastable to, `x`. The absolute tolerance. Default is `10 * eps`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_near".
Returns
object
Op that raises `InvalidArgumentError` if `x` and `y` are not close enough.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_near(x, y)]):
              output = tf.reduce_sum(x) 

object assert_near(IGraphNodeBase x, IGraphNodeBase y, Nullable<double> rtol, Nullable<double> atol, object data, object summarize, string message, string name)

Assert the condition `x` and `y` are close element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have

```tf.abs(x[i] - y[i]) <= atol + rtol * tf.abs(y[i])```.

If both `x` and `y` are empty, this is trivially satisfied.

The default `atol` and `rtol` are `10 * eps`, where `eps` is the smallest representable positive number such that `1 + eps != 1`. This is about `1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`. See `numpy.finfo`.

Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Float or complex `Tensor`.
IGraphNodeBase y
Float or complex `Tensor`, same `dtype` as, and broadcastable to, `x`.
Nullable<double> rtol
`Tensor`. Same `dtype` as, and broadcastable to, `x`. The relative tolerance. Default is `10 * eps`.
Nullable<double> atol
`Tensor`. Same `dtype` as, and broadcastable to, `x`. The absolute tolerance. Default is `10 * eps`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_near".
Returns
object
Op that raises `InvalidArgumentError` if `x` and `y` are not close enough.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_near(x, y)]):
              output = tf.reduce_sum(x) 

object assert_near_dyn(object x, object y, object rtol, object atol, object data, object summarize, object message, object name)

Assert the condition `x` and `y` are close element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have

```tf.abs(x[i] - y[i]) <= atol + rtol * tf.abs(y[i])```.

If both `x` and `y` are empty, this is trivially satisfied.

The default `atol` and `rtol` are `10 * eps`, where `eps` is the smallest representable positive number such that `1 + eps != 1`. This is about `1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`. See `numpy.finfo`.

Example of adding a dependency to an operation:
Parameters
object x
Float or complex `Tensor`.
object y
Float or complex `Tensor`, same `dtype` as, and broadcastable to, `x`.
object rtol
`Tensor`. Same `dtype` as, and broadcastable to, `x`. The relative tolerance. Default is `10 * eps`.
object atol
`Tensor`. Same `dtype` as, and broadcastable to, `x`. The absolute tolerance. Default is `10 * eps`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_near".
Returns
object
Op that raises `InvalidArgumentError` if `x` and `y` are not close enough.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_near(x, y)]):
              output = tf.reduce_sum(x) 
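As a companion to the examples above, here is a minimal Python sketch (hypothetical usage, assuming TensorFlow 2.x eager execution with NumPy available, and that `tf.debugging.assert_near` mirrors the `tf.compat.v1.assert_near` call shown above). It reproduces the default `10 * eps` tolerances from `numpy.finfo` and shows a comparison that stays inside the bound `atol + rtol * abs(y)`:

import numpy as np
import tensorflow as tf

# Default tolerances quoted above: 10 * eps for each float dtype (see numpy.finfo).
for np_dtype in (np.float16, np.float32, np.float64):
    print(np_dtype.__name__, 10 * np.finfo(np_dtype).eps)  # ~0.00977, ~1.19e-06, ~2.22e-15

x = tf.constant([1.0, 2.0], dtype=tf.float32)
y = x + 5e-7  # difference is well below atol + rtol * abs(y) for float32

# Passes silently in eager mode; in graph mode, wrap it in
# tf.control_dependencies as shown in the examples above.
tf.debugging.assert_near(x, y)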

object assert_negative(int x, object data, object summarize, string message, string name)

Assert the condition `x < 0` holds element-wise.

Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
int x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x < 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_negative(IGraphNodeBase x, object data, object summarize, string message, string name)

Assert the condition `x < 0` holds element-wise.

Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
IGraphNodeBase x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x < 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_negative_dyn(object x, object data, object summarize, object message, object name)

Assert the condition `x < 0` holds element-wise.

Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x < 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_negative(x)]):
              output = tf.reduce_sum(x) 
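To make the graph-mode note above concrete, here is a minimal Python sketch (hypothetical usage, assuming TensorFlow 2.x eager execution; the graph-mode form is the `tf.control_dependencies` pattern shown above) of when the assertion passes and when it raises `InvalidArgumentError`:

import tensorflow as tf

# All elements are strictly negative, so the assertion passes silently.
tf.debugging.assert_negative(tf.constant([-1.0, -2.0]))

try:
    # 3.0 violates x[i] < 0, so the check fires immediately in eager mode.
    tf.debugging.assert_negative(tf.constant([-1.0, 3.0]))
except tf.errors.InvalidArgumentError as e:
    print("assert_negative failed:", e.message)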

object assert_non_negative(IEnumerable<IGraphNodeBase> x, object data, object summarize, string message, string name)

Assert the condition `x >= 0` holds element-wise.

Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x >= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_non_negative(IEnumerable<IGraphNodeBase> x, object data, object summarize, IGraphNodeBase message, string name)

Assert the condition `x >= 0` holds element-wise.

Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
IGraphNodeBase message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x >= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_non_negative(object x, object data, object summarize, IGraphNodeBase message, string name)

Assert the condition `x >= 0` holds element-wise.

Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
IGraphNodeBase message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x >= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_non_negative(object x, object data, object summarize, double message, string name)

Assert the condition `x >= 0` holds element-wise.

Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
double message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x >= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_non_negative(IEnumerable<IGraphNodeBase> x, object data, object summarize, double message, string name)

Assert the condition `x >= 0` holds element-wise.

Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
IEnumerable<IGraphNodeBase> x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
double message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x >= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_non_negative(object x, object data, object summarize, string message, string name)

Assert the condition `x >= 0` holds element-wise.

Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x >= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_non_negative_dyn(object x, object data, object summarize, object message, object name)

Assert the condition `x >= 0` holds element-wise.

Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_non_negative".
Returns
object
Op that raises `InvalidArgumentError` if `x >= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
              output = tf.reduce_sum(x) 

object assert_non_positive(int x, object data, object summarize, string message, string name)

Assert the condition `x <= 0` holds element-wise.

Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
int x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_positive".
Returns
object
Op that raises `InvalidArgumentError` if `x <= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_positive(x)]):
              output = tf.reduce_sum(x) 

object assert_non_positive(IGraphNodeBase x, object data, object summarize, string message, string name)

Assert the condition `x <= 0` holds element-wise.

Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
IGraphNodeBase x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_non_positive".
Returns
object
Op that raises `InvalidArgumentError` if `x <= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_positive(x)]):
              output = tf.reduce_sum(x) 

object assert_non_positive_dyn(object x, object data, object summarize, object message, object name)

Assert the condition `x <= 0` holds element-wise.

Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_non_positive".
Returns
object
Op that raises `InvalidArgumentError` if `x <= 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_non_positive(x)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(double x, IGraphNodeBase y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
double x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(float32 x, int y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
float32 x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(double x, int y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
double x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(IGraphNodeBase x, IGraphNodeBase y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(IGraphNodeBase x, int y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(float64 x, int y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
float64 x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(float64 x, IGraphNodeBase y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
float64 x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(int x, IGraphNodeBase y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(int x, int y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(float32 x, IGraphNodeBase y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
float32 x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(ndarray x, IGraphNodeBase y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
ndarray x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(IEnumerable<double> x, int y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<double> x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(ndarray x, int y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
ndarray x
Numeric `Tensor`.
int y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal(IEnumerable<double> x, IGraphNodeBase y, IEnumerable<string> data, Nullable<int> summarize, string message, string name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
IEnumerable<double> x
Numeric `Tensor`.
IGraphNodeBase y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
Nullable<int> summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 

object assert_none_equal_dyn(object x, object y, object data, object summarize, object message, object name)

Assert the condition `x != y` holds element-wise.

This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] != y[i]`. If both `x` and `y` are empty, this is trivially satisfied.

When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object y
Numeric `Tensor`, same dtype as and broadcastable to `x`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_none_equal".
Returns
object
Op that raises `InvalidArgumentError` if `x != y` is False.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
              output = tf.reduce_sum(x) 
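The "possibly broadcast" wording above means `x` and `y` only need compatible shapes, not identical ones. A minimal Python sketch (hypothetical usage, assuming TensorFlow 2.x eager execution) illustrating the broadcast comparison:

import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
y = tf.constant([5, 6])  # broadcast against each row of x

# Every broadcast pair differs, so the assertion passes silently.
tf.debugging.assert_none_equal(x, y)

try:
    # The broadcast pair (x[0][0], 1) is equal, so the check fails.
    tf.debugging.assert_none_equal(x, tf.constant([1, 7]))
except tf.errors.InvalidArgumentError:
    print("x != y violated for at least one broadcast pair")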

object assert_positive(object x, IEnumerable<string> data, object summarize, object message, string name)

Assert the condition `x > 0` holds element-wise.

Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_positive".
Returns
object
Op that raises `InvalidArgumentError` if `x > 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_positive(x)]):
              output = tf.reduce_sum(x) 

object assert_positive(object x, IEnumerable<string> data, object summarize, string message, string name)

Assert the condition `x > 0` holds element-wise.

Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
IEnumerable<string> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_positive".
Returns
object
Op that raises `InvalidArgumentError` if `x > 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_positive(x)]):
              output = tf.reduce_sum(x) 

object assert_positive_dyn(object x, object data, object summarize, object message, object name)

Assert the condition `x > 0` holds element-wise.

Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`. If `x` is empty this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs; an example of adding such a dependency is shown below.
Parameters
object x
Numeric `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_positive".
Returns
object
Op that raises `InvalidArgumentError` if `x > 0` is False.
Show Example
with tf.control_dependencies([tf.debugging.assert_positive(x)]):
              output = tf.reduce_sum(x) 

void assert_proper_iterable(string values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
string values
Object to be checked.

void assert_proper_iterable(IGraphNodeBase values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
IGraphNodeBase values
Object to be checked.

void assert_proper_iterable(int values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
int values
Object to be checked.

void assert_proper_iterable(ValueTuple<IGraphNodeBase, object> values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
ValueTuple<IGraphNodeBase, object> values
Object to be checked.

void assert_proper_iterable(IEnumerable<IGraphNodeBase> values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
IEnumerable<IGraphNodeBase> values
Object to be checked.

void assert_proper_iterable(ndarray values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
ndarray values
Object to be checked.

void assert_proper_iterable(object values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
object values
Object to be checked.

object assert_proper_iterable_dyn(object values)

Static assert that values is a "proper" iterable.

`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, and byte/text types are all iterables themselves.
Parameters
object values
Object to be checked.
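Since a single `Tensor` or `ndarray` is itself iterable, simply iterating over `values` would not catch the mistake this check is designed for. A minimal Python sketch (hypothetical usage, assuming TensorFlow 2.x) of the two cases:

import tensorflow as tf

a = tf.constant([1, 2])
b = tf.constant([3, 4])

# A list of tensors is a "proper" iterable: no error is raised.
tf.debugging.assert_proper_iterable([a, b])

try:
    # A lone Tensor is iterable, but it is rejected as an improper iterable.
    tf.debugging.assert_proper_iterable(a)
except TypeError as e:
    print("rejected:", e)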

object assert_rank(object x, double rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
double rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank(object x, IGraphNodeBase rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
IGraphNodeBase rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank(object x, ndarray rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
ndarray rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank(PythonClassContainer x, ndarray rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
ndarray rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank(object x, int rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
int rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank(PythonClassContainer x, double rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
double rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank(PythonClassContainer x, int rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
int rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank(PythonClassContainer x, IGraphNodeBase rank, IEnumerable<object> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
PythonClassContainer x
Numeric `Tensor`.
IGraphNodeBase rank
Scalar integer `Tensor`.
IEnumerable<object> data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_at_least(object x, int rank, IEnumerable<IGraphNodeBase> data, object summarize, object message, string name)

Assert `x` has rank equal to `rank` or higher.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
int rank
Scalar `Tensor`.
IEnumerable<IGraphNodeBase> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_at_least(object x, int rank, IEnumerable<IGraphNodeBase> data, object summarize, int message, string name)

Assert `x` has rank equal to `rank` or higher.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
int rank
Scalar `Tensor`.
IEnumerable<IGraphNodeBase> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
int message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_at_least(object x, object rank, IEnumerable<IGraphNodeBase> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank` or higher.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object rank
Scalar `Tensor`.
IEnumerable<IGraphNodeBase> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_at_least(object x, object rank, IEnumerable<IGraphNodeBase> data, object summarize, int message, string name)

Assert `x` has rank equal to `rank` or higher.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object rank
Scalar `Tensor`.
IEnumerable<IGraphNodeBase> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
int message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_at_least(object x, object rank, IEnumerable<IGraphNodeBase> data, object summarize, object message, string name)

Assert `x` has rank equal to `rank` or higher.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object rank
Scalar `Tensor`.
IEnumerable<IGraphNodeBase> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_at_least(object x, int rank, IEnumerable<IGraphNodeBase> data, object summarize, string message, string name)

Assert `x` has rank equal to `rank` or higher.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
int rank
Scalar `Tensor`.
IEnumerable<IGraphNodeBase> data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_at_least_dyn(object x, object rank, object data, object summarize, object message, object name)

Assert `x` has rank equal to `rank` or higher.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object rank
Scalar `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
              output = tf.reduce_sum(x) 

object assert_rank_dyn(object x, object rank, object data, object summarize, object message, object name)

Assert `x` has rank equal to `rank`.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object rank
Scalar integer `Tensor`.
object data
The tensors to print out if the condition is False. Defaults to error message and the shape of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_rank".
Returns
object
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
              output = tf.reduce_sum(x) 
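The remark that a `no_op` is returned when static checks succeed can be observed directly. A minimal Python sketch (hypothetical usage, assuming TensorFlow 2.x eager execution, where a statically detectable mismatch surfaces as an exception at call time):

import tensorflow as tf

x = tf.ones([3, 4])

# The rank is statically known to be 2, so the check adds no runtime work.
tf.debugging.assert_rank(x, 2)

try:
    tf.debugging.assert_rank(x, 3)
except (ValueError, tf.errors.InvalidArgumentError) as err:
    # Depending on the TensorFlow version, a static rank mismatch is reported
    # as either ValueError or InvalidArgumentError.
    print("rank mismatch:", type(err).__name__)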

object assert_rank_in(IGraphNodeBase x, IEnumerable<int> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
IEnumerable<int> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(ValueTuple<PythonClassContainer, PythonClassContainer> x, IEnumerable<int> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> x
Numeric `Tensor`.
IEnumerable<int> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(IndexedSlices x, IEnumerable<int> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
IndexedSlices x
Numeric `Tensor`.
IEnumerable<int> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(IGraphNodeBase x, ValueTuple<ndarray, object> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
IGraphNodeBase x
Numeric `Tensor`.
ValueTuple<ndarray, object> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(ValueTuple<PythonClassContainer, PythonClassContainer> x, ValueTuple<ndarray, object> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> x
Numeric `Tensor`.
ValueTuple<ndarray, object> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(int x, ValueTuple<ndarray, object> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
ValueTuple<ndarray, object> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(double x, ValueTuple<ndarray, object> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
double x
Numeric `Tensor`.
ValueTuple<ndarray, object> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(IndexedSlices x, ValueTuple<ndarray, object> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
IndexedSlices x
Numeric `Tensor`.
ValueTuple<ndarray, object> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(int x, IEnumerable<int> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
int x
Numeric `Tensor`.
IEnumerable<int> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in(double x, IEnumerable<int> ranks, object data, object summarize, string message, string name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
double x
Numeric `Tensor`.
IEnumerable<int> ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
string message
A string to prefix to the default message.
string name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

object assert_rank_in_dyn(object x, object ranks, object data, object summarize, object message, object name)

Assert `x` has rank in `ranks`.

Example of adding a dependency to an operation:
Parameters
object x
Numeric `Tensor`.
object ranks
Iterable of scalar `Tensor` objects.
object data
The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
object summarize
Print this many entries of each tensor.
object message
A string to prefix to the default message.
object name
A name for this operation (optional). Defaults to "assert_rank_in".
Returns
object
Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned.
Show Example
with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
              output = tf.reduce_sum(x) 

DType assert_same_float_dtype(ValueTuple<IGraphNodeBase, object, object> tensors, DType dtype)

Validate and return float type based on `tensors` and `dtype`.

For ops such as matrix multiplication, inputs and weights must be of the same float type. This function validates that all `tensors` are the same type, validates that type is `dtype` (if supplied), and returns the type. Type must be a floating point type. If neither `tensors` nor `dtype` is supplied, the function will return `dtypes.float32`.
Parameters
ValueTuple<IGraphNodeBase, object, object> tensors
Tensors of input values. Can include `None` elements, which will be ignored.
DType dtype
Expected type.
Returns
DType
Validated type.
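For illustration, a minimal Python sketch of the underlying check (tensor values are arbitrary):

import tensorflow as tf

x = tf.constant([1.0, 2.0], dtype=tf.float32)
w = tf.constant([[0.5], [0.25]], dtype=tf.float32)

# Both tensors are float32, so that dtype is validated and returned.
dtype = tf.compat.v1.assert_same_float_dtype([x, w])              # tf.float32
# An explicit `dtype` is also checked against the tensors.
dtype = tf.compat.v1.assert_same_float_dtype([x, w], tf.float32)  # tf.float32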

DType assert_same_float_dtype(IEnumerable<object> tensors, DType dtype)

Validate and return float type based on `tensors` and `dtype`.

For ops such as matrix multiplication, inputs and weights must be of the same float type. This function validates that all `tensors` are the same type, validates that type is `dtype` (if supplied), and returns the type. Type must be a floating point type. If neither `tensors` nor `dtype` is supplied, the function will return `dtypes.float32`.
Parameters
IEnumerable<object> tensors
Tensors of input values. Can include `None` elements, which will be ignored.
DType dtype
Expected type.
Returns
DType
Validated type.

DType assert_same_float_dtype(object tensors, DType dtype)

Validate and return float type based on `tensors` and `dtype`.

For ops such as matrix multiplication, inputs and weights must be of the same float type. This function validates that all `tensors` are the same type, validates that type is `dtype` (if supplied), and returns the type. Type must be a floating point type. If neither `tensors` nor `dtype` is supplied, the function will return `dtypes.float32`.
Parameters
object tensors
Tensors of input values. Can include `None` elements, which will be ignored.
DType dtype
Expected type.
Returns
DType
Validated type.

object assert_same_float_dtype_dyn(object tensors, object dtype)

Validate and return float type based on `tensors` and `dtype`.

For ops such as matrix multiplication, inputs and weights must be of the same float type. This function validates that all `tensors` are the same type, validates that type is `dtype` (if supplied), and returns the type. Type must be a floating point type. If neither `tensors` nor `dtype` is supplied, the function will return `dtypes.float32`.
Parameters
object tensors
Tensors of input values. Can include `None` elements, which will be ignored.
object dtype
Expected type.
Returns
object
Validated type.

Tensor assert_scalar(IGraphNodeBase tensor, string name, object message)

Asserts that the given `tensor` is a scalar (i.e. zero-dimensional).

This function raises `ValueError` unless it can be certain that the given `tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is unknown.
Parameters
IGraphNodeBase tensor
A `Tensor`.
string name
A name for this operation. Defaults to "assert_scalar".
object message
A string to prefix to the default message.
Returns
Tensor
The input tensor (potentially converted to a `Tensor`).
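For illustration, a short Python sketch (the rank-1 call is shown commented out because it would raise):

import tensorflow as tf

t = tf.constant(3.0)                     # zero-dimensional
checked = tf.compat.v1.assert_scalar(t)  # returns the input tensor

v = tf.constant([1.0, 2.0])              # rank 1
# tf.compat.v1.assert_scalar(v)          # raises ValueError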

object assert_scalar_dyn(object tensor, object name, object message)

Asserts that the given `tensor` is a scalar (i.e. zero-dimensional).

This function raises `ValueError` unless it can be certain that the given `tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is unknown.
Parameters
object tensor
A `Tensor`.
object name
A name for this operation. Defaults to "assert_scalar".
object message
A string to prefix to the default message.
Returns
object
The input tensor (potentially converted to a `Tensor`).

object assert_type(IGraphNodeBase tensor, DType tf_type, string message, string name)

Statically asserts that the given `Tensor` is of the specified type.
Parameters
IGraphNodeBase tensor
A `Tensor`.
DType tf_type
A TensorFlow type (`dtypes.float32`, tf.int64, `dtypes.bool`, etc.).
string message
A string to prefix to the default message.
string name
A name to give this `Op`. Defaults to "assert_type".
Returns
object
A `no_op` that does nothing, since the type can be checked statically.
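For illustration, a short Python sketch of the static check:

import tensorflow as tf

t = tf.constant([1.0, 2.0], dtype=tf.float32)
op = tf.compat.v1.assert_type(t, tf.float32)  # passes; returns a no_op
# tf.compat.v1.assert_type(t, tf.int32)       # raises TypeError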

object assert_type(IGraphNodeBase tensor, DType tf_type, DType message, string name)

Statically asserts that the given `Tensor` is of the specified type.
Parameters
IGraphNodeBase tensor
A `Tensor`.
DType tf_type
A TensorFlow type (`dtypes.float32`, tf.int64, `dtypes.bool`, etc.).
DType message
A string to prefix to the default message.
string name
A name to give this `Op`. Defaults to "assert_type".
Returns
object
A `no_op` that does nothing, since the type can be checked statically.

object assert_type_dyn(object tensor, object tf_type, object message, object name)

Statically asserts that the given `Tensor` is of the specified type.
Parameters
object tensor
A `Tensor`.
object tf_type
A TensorFlow type (`dtypes.float32`, tf.int64, `dtypes.bool`, etc.).
object message
A string to prefix to the default message.
object name
A name to give this `Op`. Defaults to "assert_type".
Returns
object
A `no_op` that does nothing, since the type can be checked statically.

Tensor assert_variables_initialized(IEnumerable<Variable> var_list)

Returns an Op to check if variables are initialized.

NOTE: This function is obsolete and will be removed in 6 months. Please change your implementation to use `report_uninitialized_variables()`.

When run, the returned Op will raise the exception `FailedPreconditionError` if any of the variables has not yet been initialized.

Note: This function is implemented by trying to fetch the values of the variables. If one of the variables is not initialized a message may be logged by the C++ runtime. This is expected.
Parameters
IEnumerable<Variable> var_list
List of `Variable` objects to check. Defaults to the value of `global_variables()`.
Returns
Tensor
An Op, or None if there are no variables.

**NOTE** The output of this function should be used. If it is not, a warning will be logged. To mark the output as used, call its `.mark_used()` method.

object assert_variables_initialized_dyn(object var_list)

Returns an Op to check if variables are initialized.

NOTE: This function is obsolete and will be removed in 6 months. Please change your implementation to use `report_uninitialized_variables()`.

When run, the returned Op will raise the exception `FailedPreconditionError` if any of the variables has not yet been initialized.

Note: This function is implemented by trying to fetch the values of the variables. If one of the variables is not initialized a message may be logged by the C++ runtime. This is expected.
Parameters
object var_list
List of `Variable` objects to check. Defaults to the value of `global_variables()`.
Returns
object
An Op, or None if there are no variables.

**NOTE** The output of this function should be used. If it is not, a warning will be logged. To mark the output as used, call its `.mark_used()` method.

Tensor assign(PartitionedVariable ref, IGraphNodeBase value, Nullable<bool> validate_shape, Nullable<bool> use_locking, string name)

Update `ref` by assigning `value` to it.

This operation outputs a Tensor that holds the new value of `ref` after the value has been assigned. This makes it easier to chain operations that need to use the reset value.
Parameters
PartitionedVariable ref
A mutable `Tensor`. Should be from a `Variable` node. May be uninitialized.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be assigned to the variable.
Nullable<bool> validate_shape
An optional `bool`. Defaults to `True`. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'.
Nullable<bool> use_locking
An optional `bool`. Defaults to `True`. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` that will hold the new value of `ref` after the assignment has completed.
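As a hedged illustration, a minimal TF 1.x graph-mode sketch (the variable name `v` and the values are arbitrary):

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

v = tf.compat.v1.get_variable("v", initializer=[1.0, 2.0])
assign_op = tf.compat.v1.assign(v, [10.0, 20.0])

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(assign_op))  # [10. 20.]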

Tensor assign(Variable ref, IGraphNodeBase value, Nullable<bool> validate_shape, Nullable<bool> use_locking, string name)

Update `ref` by assigning `value` to it.

This operation outputs a Tensor that holds the new value of `ref` after the value has been assigned. This makes it easier to chain operations that need to use the reset value.
Parameters
Variable ref
A mutable `Tensor`. Should be from a `Variable` node. May be uninitialized.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be assigned to the variable.
Nullable<bool> validate_shape
An optional `bool`. Defaults to `True`. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'.
Nullable<bool> use_locking
An optional `bool`. Defaults to `True`. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` that will hold the new value of `ref` after the assignment has completed.

Tensor assign_add(IEnumerable<object> ref, IGraphNodeBase value, Nullable<bool> use_locking, PythonFunctionContainer name)

Update `ref` by adding `value` to it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.add, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
IEnumerable<object> ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be added to the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

Tensor assign_add(IEnumerable<object> ref, IGraphNodeBase value, Nullable<bool> use_locking, string name)

Update `ref` by adding `value` to it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.add, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
IEnumerable<object> ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be added to the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
Tensor
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

Tensor assign_add(object ref, IGraphNodeBase value, Nullable<bool> use_locking, PythonFunctionContainer name)

Update `ref` by adding `value` to it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.add, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
object ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be added to the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

Tensor assign_add(object ref, IGraphNodeBase value, Nullable<bool> use_locking, string name)

Update `ref` by adding `value` to it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.add, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
object ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be added to the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
Tensor
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

object assign_add_dyn(object ref, object value, object use_locking, object name)

Update `ref` by adding `value` to it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.add, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
object ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
object value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be added to the variable.
object use_locking
An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
object name
A name for the operation (optional).
Returns
object
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

object assign_dyn(object ref, object value, object validate_shape, object use_locking, object name)

Update `ref` by assigning `value` to it.

This operation outputs a Tensor that holds the new value of `ref` after the value has been assigned. This makes it easier to chain operations that need to use the reset value.
Parameters
object ref
A mutable `Tensor`. Should be from a `Variable` node. May be uninitialized.
object value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be assigned to the variable.
object validate_shape
An optional `bool`. Defaults to `True`. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'.
object use_locking
An optional `bool`. Defaults to `True`. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
object name
A name for the operation (optional).
Returns
object
A `Tensor` that will hold the new value of `ref` after the assignment has completed.

object assign_sub(AutoCastVariable ref, IGraphNodeBase value, Nullable<bool> use_locking, string name)

Update `ref` by subtracting `value` from it.

This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.subtract, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
AutoCastVariable ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be subtracted from the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
object
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

object assign_sub(Operation ref, IGraphNodeBase value, Nullable<bool> use_locking, string name)

Update `ref` by subtracting `value` from it.

This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.subtract, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
Operation ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be subtracted from the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
object
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

object assign_sub(DistributedVariable ref, IGraphNodeBase value, Nullable<bool> use_locking, string name)

Update `ref` by subtracting `value` from it.

This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.subtract, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
DistributedVariable ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be subtracted from the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
object
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

object assign_sub(IGraphNodeBase ref, IGraphNodeBase value, Nullable<bool> use_locking, string name)

Update `ref` by subtracting `value` from it.

This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.subtract, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
IGraphNodeBase ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
IGraphNodeBase value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be subtracted from the variable.
Nullable<bool> use_locking
An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
string name
A name for the operation (optional).
Returns
object
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

object assign_sub_dyn(object ref, object value, object use_locking, object name)

Update `ref` by subtracting `value` from it.

This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.subtract, this op does not broadcast. `ref` and `value` must have the same shape.
Parameters
object ref
A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
object value
A `Tensor`. Must have the same shape and dtype as `ref`. The value to be subtracted from the variable.
object use_locking
An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
object name
A name for the operation (optional).
Returns
object
Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

Tensor atan(IGraphNodeBase x, string name)

Computes the trigonometric inverse tangent of x element-wise.

The tf.math.atan operation returns the inverse of tf.math.tan, such that if `y = tf.math.tan(x)`, then `x = tf.math.atan(y)`.

**Note**: The output of tf.math.atan will lie within the invertible range of tan, i.e. (-pi/2, pi/2).
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
Show Example
# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
            x = tf.constant([1.047, 0.785])
            y = tf.math.tan(x) # [1.731261, 0.99920404] 

tf.math.atan(y) # [1.047, 0.785] = x

object atan_dyn(object x, object name)

Computes the trigonometric inverse tangent of x element-wise.

The tf.math.atan operation returns the inverse of tf.math.tan, such that if `y = tf.math.tan(x)`, then `x = tf.math.atan(y)`.

**Note**: The output of tf.math.atan will lie within the invertible range of tan, i.e. (-pi/2, pi/2).
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.
Show Example
# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
            x = tf.constant([1.047, 0.785])
            y = tf.math.tan(x) # [1.731261, 0.99920404] 

tf.math.atan(y) # [1.047, 0.785] = x

Tensor atan2(IGraphNodeBase y, IGraphNodeBase x, string name)

Computes arctangent of `y/x` element-wise, respecting signs of the arguments.

This is the angle \( \theta \in [-\pi, \pi] \) such that \( x = r \cos(\theta) \) and \( y = r \sin(\theta) \), where \( r = \sqrt{x^2 + y^2} \).
Parameters
IGraphNodeBase y
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.
IGraphNodeBase x
A `Tensor`. Must have the same type as `y`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `y`.
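For illustration, a short Python sketch (outputs are approximate):

import tensorflow as tf

y = tf.constant([1.0, -1.0])
x = tf.constant([1.0, 1.0])
tf.math.atan2(y, x)  # ~[0.7853982, -0.7853982], i.e. [pi/4, -pi/4]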

object atan2_dyn(object y, object x, object name)

Computes arctangent of `y/x` element-wise, respecting signs of the arguments.

This is the angle \( \theta \in [-\pi, \pi] \) such that \( x = r \cos(\theta) \) and \( y = r \sin(\theta) \), where \( r = \sqrt{x^2 + y^2} \).
Parameters
object y
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.
object x
A `Tensor`. Must have the same type as `y`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `y`.

Tensor atanh(IGraphNodeBase x, string name)

Computes inverse hyperbolic tangent of x element-wise.

Given an input tensor, this function computes inverse hyperbolic tangent for every element in the tensor. Input range is `[-1,1]` and output range is `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the input is `1`, output will be `inf`. Values outside the range will have `nan` as output.
Parameters
IGraphNodeBase x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `x`.
Show Example
x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")])
            tf.math.atanh(x) ==> [nan -inf -0.54930615 inf  0. 0.54930615 nan nan] 

object atanh_dyn(object x, object name)

Computes inverse hyperbolic tangent of x element-wise.

Given an input tensor, this function computes inverse hyperbolic tangent for every element in the tensor. Input range is `[-1,1]` and output range is `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the input is `1`, output will be `inf`. Values outside the range will have `nan` as output.
Parameters
object x
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `x`.
Show Example
x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")])
            tf.math.atanh(x) ==> [nan -inf -0.54930615 inf  0. 0.54930615 nan nan] 

object attr(object a, string name)

object attr_bool(object a, string name)

object attr_bool_dyn(object a, object name)

object attr_bool_list(object a, string name)

object attr_bool_list_dyn(object a, object name)

object attr_default(string a, string name)

object attr_default_dyn(ImplicitContainer<T> a, object name)

object attr_dyn(object a, object name)

object attr_empty_list_default(ImplicitContainer<T> a, string name)

object attr_empty_list_default_dyn(ImplicitContainer<T> a, object name)

object attr_enum(object a, string name)

object attr_enum_dyn(object a, object name)

object attr_enum_list(object a, string name)

object attr_enum_list_dyn(object a, object name)

object attr_float(object a, string name)

object attr_float_dyn(object a, object name)

object attr_list_default(ImplicitContainer<T> a, string name)

object attr_list_default_dyn(ImplicitContainer<T> a, object name)

object attr_list_min(object a, string name)

object attr_list_min_dyn(object a, object name)

object attr_list_type_default(object a, object b, string name)

object attr_list_type_default_dyn(object a, object b, object name)

object attr_min(object a, string name)

object attr_min_dyn(object a, object name)

object attr_partial_shape(object a, string name)

object attr_partial_shape_dyn(object a, object name)

object attr_partial_shape_list(object a, string name)

object attr_partial_shape_list_dyn(object a, object name)

object attr_shape(object a, string name)

object attr_shape_dyn(object a, object name)

object attr_shape_list(object a, string name)

object attr_shape_list_dyn(object a, object name)

object attr_type_default(IGraphNodeBase a, string name)

object attr_type_default_dyn(object a, object name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, int upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, double upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, int upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, double upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, double upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, double upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, int upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, int upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, int upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, int upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, double window_size, int window_step, int num_channels, int upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, double upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, double upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, double upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, double upper_band_limit, int lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, int pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

Tensor audio_microfrontend(IGraphNodeBase audio, int sample_rate, int window_size, int window_step, int num_channels, int upper_band_limit, double lower_band_limit, int smoothing_bits, double even_smoothing, double odd_smoothing, double min_signal_remaining, bool enable_pcan, double pcan_strength, double pcan_offset, int gain_bits, bool enable_log, int scale_shift, int left_context, int right_context, int frame_stride, bool zero_padding, int out_scale, ImplicitContainer<T> out_type, string name)

object audio_microfrontend_dyn(object audio, ImplicitContainer<T> sample_rate, ImplicitContainer<T> window_size, ImplicitContainer<T> window_step, ImplicitContainer<T> num_channels, ImplicitContainer<T> upper_band_limit, ImplicitContainer<T> lower_band_limit, ImplicitContainer<T> smoothing_bits, ImplicitContainer<T> even_smoothing, ImplicitContainer<T> odd_smoothing, ImplicitContainer<T> min_signal_remaining, ImplicitContainer<T> enable_pcan, ImplicitContainer<T> pcan_strength, ImplicitContainer<T> pcan_offset, ImplicitContainer<T> gain_bits, ImplicitContainer<T> enable_log, ImplicitContainer<T> scale_shift, ImplicitContainer<T> left_context, ImplicitContainer<T> right_context, ImplicitContainer<T> frame_stride, ImplicitContainer<T> zero_padding, ImplicitContainer<T> out_scale, ImplicitContainer<T> out_type, object name)

Tensor b(string name)

object b_dyn(object name)

Tensor batch_gather(RaggedTensor params, RaggedTensor indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.
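For illustration, a hedged Python sketch of the suggested replacement (example values are arbitrary; `batch_dims=-1` resolves to 1 here because `indices` has rank 2):

import tensorflow as tf

params = tf.constant([[10, 11, 12],
                      [20, 21, 22]])
indices = tf.constant([[2, 0],
                       [1, 1]])

# Equivalent to the deprecated batch_gather(params, indices):
tf.gather(params, indices, batch_dims=1)  # [[12, 10], [21, 21]]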

Tensor batch_gather(ndarray params, IGraphNodeBase indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(ndarray params, IEnumerable<int> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IGraphNodeBase params, RaggedTensor indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IGraphNodeBase params, ndarray indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IGraphNodeBase params, IGraphNodeBase indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(ndarray params, ndarray indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(RaggedTensor params, IGraphNodeBase indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IGraphNodeBase params, IEnumerable<int> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(ndarray params, RaggedTensor indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(ndarray params, ValueTuple<object> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(RaggedTensor params, ValueTuple<object> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(RaggedTensor params, IEnumerable<int> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IEnumerable<object> params, ndarray indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IEnumerable<object> params, IEnumerable<int> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IEnumerable<object> params, ValueTuple<object> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IEnumerable<object> params, RaggedTensor indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(RaggedTensor params, ndarray indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IGraphNodeBase params, ValueTuple<object> indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

Tensor batch_gather(IEnumerable<object> params, IGraphNodeBase indices, string name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

object batch_gather_dyn(object params, object indices, object name)

Gather slices from params according to indices with leading batch dims. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with `batch_dims=-1` instead.

object batch_scatter_update(Variable ref, IEnumerable<int> indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

To avoid this operation, there are two alternatives: 1) reshaping the variable by merging its first `ndims` dimensions; however, this is not possible because tf.reshape returns a `Tensor`, on which `tf.compat.v1.scatter_update` cannot be used; 2) looping over the first `ndims` dimensions of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but it is less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
IEnumerable<int> indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.
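For illustration, a hedged TF 1.x graph-mode sketch (variable name and values are arbitrary):

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

var = tf.compat.v1.get_variable("var", initializer=[[1., 2., 3.],
                                                    [4., 5., 6.]])
# For each batch row i, write updates[i, j] into var[i, indices[i, j]].
indices = tf.constant([[0, 2],
                       [1, 2]])
updates = tf.constant([[10., 30.],
                       [50., 60.]])
op = tf.compat.v1.batch_scatter_update(var, indices, updates)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(op))
    # [[10.  2. 30.]
    #  [ 4. 50. 60.]]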

object batch_scatter_update(Variable ref, ValueTuple<PythonClassContainer, PythonClassContainer> indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

To avoid this operation, there are two alternatives: 1) reshaping the variable by merging its first `ndims` dimensions; however, this is not possible because tf.reshape returns a `Tensor`, on which `tf.compat.v1.scatter_update` cannot be used; 2) looping over the first `ndims` dimensions of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but it is less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
ValueTuple<PythonClassContainer, PythonClassContainer> indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update(Variable ref, int indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

To avoid this operation, there are two alternatives: 1) reshaping the variable by merging its first `ndims` dimensions; however, this is not possible because tf.reshape returns a `Tensor`, on which `tf.compat.v1.scatter_update` cannot be used; 2) looping over the first `ndims` dimensions of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but it is less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
int indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update(Variable ref, IDictionary<object, object> indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

To avoid this operation, there are two alternatives: 1) reshaping the variable by merging its first `ndims` dimensions; however, this is not possible because tf.reshape returns a `Tensor`, on which `tf.compat.v1.scatter_update` cannot be used; 2) looping over the first `ndims` dimensions of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but it is less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
IDictionary<object, object> indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update(Variable ref, ndarray indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

To avoid this operation, there are two alternatives: 1) reshaping the variable by merging its first `ndims` dimensions; however, this is not possible because tf.reshape returns a `Tensor`, on which `tf.compat.v1.scatter_update` cannot be used; 2) looping over the first `ndims` dimensions of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but it is less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
ndarray indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update(Variable ref, float64 indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

There are two alternatives to this operation: 1) Reshaping the variable by merging the first `ndims` dimensions. However, this is not possible because tf.reshape returns a Tensor, on which `tf.compat.v1.scatter_update` cannot be used. 2) Looping over the first `ndims` of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
float64 indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update(Variable ref, float32 indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

There are two alternatives to this operation: 1) Reshaping the variable by merging the first `ndims` dimensions. However, this is not possible because tf.reshape returns a Tensor, on which `tf.compat.v1.scatter_update` cannot be used. 2) Looping over the first `ndims` of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
float32 indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update(Variable ref, IGraphNodeBase indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

There are two alternatives to this operation: 1) Reshaping the variable by merging the first `ndims` dimensions. However, this is not possible because tf.reshape returns a Tensor, on which `tf.compat.v1.scatter_update` cannot be used. 2) Looping over the first `ndims` of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
IGraphNodeBase indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update(Variable ref, IndexedSlices indices, object updates, bool use_locking, string name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

There are two alternatives to this operation: 1) Reshaping the variable by merging the first `ndims` dimensions. However, this is not possible because tf.reshape returns a Tensor, on which `tf.compat.v1.scatter_update` cannot be used. 2) Looping over the first `ndims` of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
Variable ref
`Variable` to scatter onto.
IndexedSlices indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
bool use_locking
Boolean indicating whether to lock the writing operation.
string name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

object batch_scatter_update_dyn(object ref, object indices, object updates, ImplicitContainer<T> use_locking, object name)

Generalization of `tf.compat.v1.scatter_update` to an axis other than 0. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead.

Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = indices.ndims - 1` `batch_dim = num_prefix_dims + 1` `updates.shape = indices.shape + var.shape[batch_dim:]`

where

`updates.shape[:num_prefix_dims]` `== indices.shape[:num_prefix_dims]` `== var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1,..., i_n, indices[i_1,..., i_n, j]] = updates[i_1,..., i_n, j]`

When indices is a 1D tensor, this operation is equivalent to `tf.compat.v1.scatter_update`.

There are two alternatives to this operation: 1) Reshaping the variable by merging the first `ndims` dimensions. However, this is not possible because tf.reshape returns a Tensor, on which `tf.compat.v1.scatter_update` cannot be used. 2) Looping over the first `ndims` of the variable and using `tf.compat.v1.scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but less efficient than this implementation.

See also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.
Parameters
object ref
`Variable` to scatter onto.
object indices
Tensor containing indices as described above.
object updates
Tensor of updates to apply to `ref`.
ImplicitContainer<T> use_locking
Boolean indicating whether to lock the writing operation.
object name
Optional scope name string.
Returns
object
Ref to `variable` after it has been modified.

Tensor batch_to_space(IGraphNodeBase input, IEnumerable<object> crops, int block_size, string name, object block_shape)

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the `batch` dimension are moved in spatial blocks to the `height` and `width` dimensions, followed by cropping along the `height` and `width` dimensions.
Parameters
IGraphNodeBase input
A `Tensor`. 4-D tensor with shape `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]`. Note that the batch size of the input tensor must be divisible by `block_size * block_size`.
IEnumerable<object> crops
A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows:

crops = [[crop_top, crop_bottom], [crop_left, crop_right]]
int block_size
An `int` that is `>= 2`.
string name
A name for the operation (optional).
object block_shape
Returns
Tensor
A `Tensor`. Has the same type as `input`.
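
A minimal Python sketch of the legacy 4-D form (TF 1.x graph mode assumed): a batch of four 1x1 images is rearranged into a single 2x2 image.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])   # shape [4, 1, 1, 1]
y = tf.batch_to_space(x, crops=[[0, 0], [0, 0]], block_size=2)

with tf.Session() as sess:
    print(sess.run(y))   # shape [1, 2, 2, 1]: [[[[1], [2]], [[3], [4]]]]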

object batch_to_space_dyn(object input, object crops, object block_size, object name, object block_shape)

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the `batch` dimension are moved in spatial blocks to the `height` and `width` dimensions, followed by cropping along the `height` and `width` dimensions.
Parameters
object input
A `Tensor`. 4-D tensor with shape `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]`. Note that the batch size of the input tensor must be divisible by `block_size * block_size`.
object crops
A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows:

crops = [[crop_top, crop_bottom], [crop_left, crop_right]]
object block_size
An `int` that is `>= 2`.
object name
A name for the operation (optional).
object block_shape
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor batch_to_space_nd(IGraphNodeBase input, IGraphNodeBase block_shape, IGraphNodeBase crops, string name)

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape `block_shape + [batch]`, interleaves these blocks back into the grid defined by the spatial dimensions `[1,..., M]`, to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to `crops` to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.
Parameters
IGraphNodeBase input
A `Tensor`. N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, where spatial_shape has M dimensions.
IGraphNodeBase block_shape
A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D with shape `[M]`, all values must be >= 1.
IGraphNodeBase crops
A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D with shape `[M, 2]`, all values must be >= 0. `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input dimension `i + 1`, which corresponds to spatial dimension `i`. It is required that `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.

This operation is equivalent to the following steps:

1. Reshape `input` to `reshaped` of shape: `[block_shape[0],..., block_shape[M-1], batch / prod(block_shape), input_shape[1],..., input_shape[N-1]]`

2. Permute dimensions of `reshaped` to produce `permuted` of shape: `[batch / prod(block_shape), input_shape[1], block_shape[0],..., input_shape[M], block_shape[M-1], input_shape[M+1],..., input_shape[N-1]]`

3. Reshape `permuted` to produce `reshaped_permuted` of shape: `[batch / prod(block_shape), input_shape[1] * block_shape[0],..., input_shape[M] * block_shape[M-1], input_shape[M+1],..., input_shape[N-1]]`

4. Crop the start and end of dimensions `[1,..., M]` of `reshaped_permuted` according to `crops` to produce the output of shape: `[batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],..., input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1],..., input_shape[N-1]]`

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

``` [[[[1]]], [[[2]]], [[[3]]], [[[4]]]] ```

The output tensor has shape `[1, 2, 2, 1]` and value:

``` x = [[[[1], [2]], [[3], [4]]]] ```

(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

``` [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] ```

The output tensor has shape `[1, 2, 2, 3]` and value:

``` x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]] ```

(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

``` x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]] ```

The output tensor has shape `[1, 4, 4, 1]` and value:

``` x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]] ```

(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [2, 0]]`:

``` x = [[[[0], [1], [3]]], [[[0], [9], [11]]], [[[0], [2], [4]]], [[[0], [10], [12]]], [[[0], [5], [7]]], [[[0], [13], [15]]], [[[0], [6], [8]]], [[[0], [14], [16]]]] ```

The output tensor has shape `[2, 2, 4, 1]` and value:

``` x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]] ```
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
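
Example (1) above can be reproduced with a short Python sketch (TF 1.x graph mode assumed):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])   # shape [4, 1, 1, 1]
y = tf.batch_to_space_nd(x, block_shape=[2, 2], crops=[[0, 0], [0, 0]])

with tf.Session() as sess:
    print(sess.run(y))   # shape [1, 2, 2, 1]: [[[[1], [2]], [[3], [4]]]]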

object batch_to_space_nd_dyn(object input, object block_shape, object crops, object name)

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape `block_shape + [batch]`, interleaves these blocks back into the grid defined by the spatial dimensions `[1,..., M]`, to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to `crops` to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.
Parameters
object input
A `Tensor`. N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, where spatial_shape has M dimensions.
object block_shape
A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D with shape `[M]`, all values must be >= 1.
object crops
A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D with shape `[M, 2]`, all values must be >= 0. `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input dimension `i + 1`, which corresponds to spatial dimension `i`. It is required that `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.

This operation is equivalent to the following steps:

1. Reshape `input` to `reshaped` of shape: `[block_shape[0],..., block_shape[M-1], batch / prod(block_shape), input_shape[1],..., input_shape[N-1]]`

2. Permute dimensions of `reshaped` to produce `permuted` of shape: `[batch / prod(block_shape), input_shape[1], block_shape[0],..., input_shape[M], block_shape[M-1], input_shape[M+1],..., input_shape[N-1]]`

3. Reshape `permuted` to produce `reshaped_permuted` of shape: `[batch / prod(block_shape), input_shape[1] * block_shape[0],..., input_shape[M] * block_shape[M-1], input_shape[M+1],..., input_shape[N-1]]`

4. Crop the start and end of dimensions `[1,..., M]` of `reshaped_permuted` according to `crops` to produce the output of shape: `[batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],..., input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1],..., input_shape[N-1]]`

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

``` [[[[1]]], [[[2]]], [[[3]]], [[[4]]]] ```

The output tensor has shape `[1, 2, 2, 1]` and value:

``` x = [[[[1], [2]], [[3], [4]]]] ```

(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

``` [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] ```

The output tensor has shape `[1, 2, 2, 3]` and value:

``` x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]] ```

(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

``` x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]] ```

The output tensor has shape `[1, 4, 4, 1]` and value:

``` x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]] ```

(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [2, 0]]`:

``` x = [[[[0], [1], [3]]], [[[0], [9], [11]]], [[[0], [2], [4]]], [[[0], [10], [12]]], [[[0], [5], [7]]], [[[0], [13], [15]]], [[[0], [6], [8]]], [[[0], [14], [16]]]] ```

The output tensor has shape `[2, 2, 4, 1]` and value:

``` x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]] ```
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor betainc(IGraphNodeBase a, IGraphNodeBase b, IGraphNodeBase x, string name)

Compute the regularized incomplete beta integral \\(I_x(a, b)\\).

The regularized incomplete beta integral is defined as:

\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\)

where

\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\\)

is the incomplete beta function and \\(B(a, b)\\) is the *complete* beta function.
Parameters
IGraphNodeBase a
A `Tensor`. Must be one of the following types: `float32`, `float64`.
IGraphNodeBase b
A `Tensor`. Must have the same type as `a`.
IGraphNodeBase x
A `Tensor`. Must have the same type as `a`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `a`.
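
A small numeric check of the definition above (a Python sketch, TF 1.x graph mode assumed; the values are easy to verify by hand):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant([2.0])
b = tf.constant([3.0])
x = tf.constant([0.5])

with tf.Session() as sess:
    # I_0.5(2, 3) = B(0.5; 2, 3) / B(2, 3) = 0.0572917 / (1/12) = 0.6875
    print(sess.run(tf.betainc(a, b, x)))   # [0.6875]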

object betainc_dyn(object a, object b, object x, object name)

Compute the regularized incomplete beta integral \\(I_x(a, b)\\).

The regularized incomplete beta integral is defined as:

\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\)

where

\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\\)

is the incomplete beta function and \\(B(a, b)\\) is the *complete* beta function.
Parameters
object a
A `Tensor`. Must be one of the following types: `float32`, `float64`.
object b
A `Tensor`. Must have the same type as `a`.
object x
A `Tensor`. Must have the same type as `a`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `a`.

Tensor binary(IGraphNodeBase a, IGraphNodeBase b, string name)

object binary_dyn(object a, object b, object name)

Tensor bincount(object arr, object weights, object minlength, object maxlength, ImplicitContainer<T> dtype)

Counts the number of occurrences of each value in an integer array.

If `minlength` and `maxlength` are not given, returns a vector with length `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise. If `weights` is non-None, then index `i` of the output stores the sum of the values in `weights` at each index where the corresponding value in `arr` is `i`.
Parameters
object arr
An int32 tensor of non-negative values.
object weights
If non-None, must be the same shape as arr. For each value in `arr`, the bin will be incremented by the corresponding weight instead of 1.
object minlength
If given, ensures the output has length at least `minlength`, padding with zeros at the end if necessary.
object maxlength
If given, skips values in `arr` that are equal to or greater than `maxlength`, ensuring that the output has length at most `maxlength`.
ImplicitContainer<T> dtype
If `weights` is None, determines the type of the output bins.
Returns
Tensor
A vector with the same dtype as `weights` or the given `dtype`. The bin values.
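
A brief Python sketch of the counting and weighting behaviour (TF 1.x graph mode assumed):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

arr = tf.constant([1, 1, 2, 3, 3, 3])
weights = tf.constant([1.0, 1.0, 0.5, 2.0, 2.0, 2.0])

with tf.Session() as sess:
    print(sess.run(tf.bincount(arr)))                    # [0 2 1 3]
    print(sess.run(tf.bincount(arr, minlength=6)))       # [0 2 1 3 0 0]
    print(sess.run(tf.bincount(arr, weights=weights)))   # [0.  2.  0.5 6. ]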

object bincount_dyn(object arr, object weights, object minlength, object maxlength, ImplicitContainer<T> dtype)

Counts the number of occurrences of each value in an integer array.

If `minlength` and `maxlength` are not given, returns a vector with length `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise. If `weights` is non-None, then index `i` of the output stores the sum of the values in `weights` at each index where the corresponding value in `arr` is `i`.
Parameters
object arr
An int32 tensor of non-negative values.
object weights
If non-None, must be the same shape as arr. For each value in `arr`, the bin will be incremented by the corresponding weight instead of 1.
object minlength
If given, ensures the output has length at least `minlength`, padding with zeros at the end if necessary.
object maxlength
If given, skips values in `arr` that are equal to or greater than `maxlength`, ensuring that the output has length at most `maxlength`.
ImplicitContainer<T> dtype
If `weights` is None, determines the type of the output bins.
Returns
object
A vector with the same dtype as `weights` or the given `dtype`. The bin values.

object bipartite_match(IGraphNodeBase distance_mat, IGraphNodeBase num_valid_rows, int top_k, string name)

object bipartite_match_dyn(object distance_mat, object num_valid_rows, ImplicitContainer<T> top_k, object name)

Tensor bitcast(IGraphNodeBase input, DType type, string name)

Bitcasts a tensor from one type to another without copying data.

Given a tensor `input`, this operation returns a tensor that has the same buffer data as `input` with datatype `type`.

If the input datatype `T` is larger than the output datatype `type` then the shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].

If `T` is smaller than `type`, the operator requires that the rightmost dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from [..., sizeof(`type`)/sizeof(`T`)] to [...].

tf.bitcast() and tf.cast() work differently when a real dtype is cast to a complex dtype (e.g. tf.complex64 or tf.complex128): tf.cast() sets the imaginary part to 0, while tf.bitcast() raises an error, as shown in the example below.

*NOTE*: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `complex64`, `complex128`, `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
DType type
A tf.DType from: `tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `type`.
Show Example
>>> a = [1., 2., 3.]
            >>> equality_bitcast = tf.bitcast(a,tf.complex128)
            tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot bitcast from float to complex128: shape [3] [Op:Bitcast]
            >>> equality_cast = tf.cast(a,tf.complex128)
            >>> print(equality_cast)
            tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128) 
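
The shape rule described above (the output picks up a trailing `sizeof(T)/sizeof(type)` dimension when casting to a narrower dtype) can be sketched as follows; the byte values depend on endianness, so only the shape is shown:

import tensorflow as tf

x = tf.constant([1, 2], dtype=tf.int32)
y = tf.bitcast(x, tf.uint8)   # int32 (4 bytes) -> uint8 (1 byte)
print(y.shape)                # (2, 4): shape [2] becomes [2, 4]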

object bitcast_dyn(object input, object type, object name)

Bitcasts a tensor from one type to another without copying data.

Given a tensor `input`, this operation returns a tensor that has the same buffer data as `input` with datatype `type`.

If the input datatype `T` is larger than the output datatype `type` then the shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].

If `T` is smaller than `type`, the operator requires that the rightmost dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from [..., sizeof(`type`)/sizeof(`T`)] to [...].

tf.bitcast() and tf.cast() work differently when a real dtype is cast to a complex dtype (e.g. tf.complex64 or tf.complex128): tf.cast() sets the imaginary part to 0, while tf.bitcast() raises an error, as shown in the example below.

*NOTE*: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
Parameters
object input
A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `complex64`, `complex128`, `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
object type
A tf.DType from: `tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of type `type`.
Show Example
>>> a = [1., 2., 3.]
            >>> equality_bitcast = tf.bitcast(a,tf.complex128)
            tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot bitcast from float to complex128: shape [3] [Op:Bitcast]
            >>> equality_cast = tf.cast(a,tf.complex128)
            >>> print(equality_cast)
            tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128) 

object boolean_mask(object tensor, object mask, string name, Nullable<int> axis)

Apply boolean mask to tensor.

Numpy equivalent is `tensor[mask]`. In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match the first K dimensions of `tensor`'s shape. We then have: `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]` where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order). The `axis` could be used with `mask` to indicate the axis to mask from. In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match the first `axis + dim(mask)` dimensions of `tensor`'s shape.

See also: tf.ragged.boolean_mask, which can be applied to both dense and ragged tensors, and can be used if you need to preserve the masked dimensions of `tensor` (rather than flattening them, as tf.boolean_mask does).
Parameters
object tensor
N-D tensor.
object mask
K-D boolean tensor, K <= N and K must be known statically.
string name
A name for this operation (optional).
Nullable<int> axis
A 0-D int Tensor representing the axis in `tensor` to mask from. By default, axis is 0 which will mask from the first dimension. Otherwise K + axis <= N.
Returns
object
(N-K+1)-dimensional tensor populated by entries in `tensor` corresponding to `True` values in `mask`.
Show Example
# 1-D example
            tensor = [0, 1, 2, 3]
            mask = np.array([True, False, True, False])
            boolean_mask(tensor, mask)  # [0, 2] 
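
A 2-D sketch, masking along the leading axis (assumes `numpy` imported as `np`):

import numpy as np
import tensorflow as tf

# 2-D example: the mask selects rows 0 and 2 of the leading axis
tensor = [[1, 2], [3, 4], [5, 6]]
mask = np.array([True, False, True])
tf.boolean_mask(tensor, mask)  # [[1, 2], [5, 6]]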

object boolean_mask(IEnumerable<IGraphNodeBase> tensor, object mask, string name, Nullable<int> axis)

Apply boolean mask to tensor.

Numpy equivalent is `tensor[mask]`. In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match the first K dimensions of `tensor`'s shape. We then have: `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]` where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order). The `axis` could be used with `mask` to indicate the axis to mask from. In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match the first `axis + dim(mask)` dimensions of `tensor`'s shape.

See also: tf.ragged.boolean_mask, which can be applied to both dense and ragged tensors, and can be used if you need to preserve the masked dimensions of `tensor` (rather than flattening them, as tf.boolean_mask does).
Parameters
IEnumerable<IGraphNodeBase> tensor
N-D tensor.
object mask
K-D boolean tensor, K <= N and K must be known statically.
string name
A name for this operation (optional).
Nullable<int> axis
A 0-D int Tensor representing the axis in `tensor` to mask from. By default, axis is 0 which will mask from the first dimension. Otherwise K + axis <= N.
Returns
object
(N-K+1)-dimensional tensor populated by entries in `tensor` corresponding to `True` values in `mask`.
Show Example
# 1-D example
            tensor = [0, 1, 2, 3]
            mask = np.array([True, False, True, False])
            boolean_mask(tensor, mask)  # [0, 2] 

object boolean_mask_dyn(object tensor, object mask, ImplicitContainer<T> name, object axis)

Apply boolean mask to tensor.

Numpy equivalent is `tensor[mask]`. In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match the first K dimensions of `tensor`'s shape. We then have: `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]` where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order). The `axis` could be used with `mask` to indicate the axis to mask from. In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match the first `axis + dim(mask)` dimensions of `tensor`'s shape.

See also: tf.ragged.boolean_mask, which can be applied to both dense and ragged tensors, and can be used if you need to preserve the masked dimensions of `tensor` (rather than flattening them, as tf.boolean_mask does).
Parameters
object tensor
N-D tensor.
object mask
K-D boolean tensor, K <= N and K must be known statically.
ImplicitContainer<T> name
A name for this operation (optional).
object axis
A 0-D int Tensor representing the axis in `tensor` to mask from. By default, axis is 0 which will mask from the first dimension. Otherwise K + axis <= N.
Returns
object
(N-K+1)-dimensional tensor populated by entries in `tensor` corresponding to `True` values in `mask`.
Show Example
# 1-D example
            tensor = [0, 1, 2, 3]
            mask = np.array([True, False, True, False])
            boolean_mask(tensor, mask)  # [0, 2] 

Tensor broadcast_dynamic_shape(IGraphNodeBase shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
IGraphNodeBase shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IGraphNodeBase shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.
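
A quick Python sketch with symbolic shapes (TF 1.x graph mode assumed), reproducing the `[1, 2, 3]` / `[5, 1, 3]` example above:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

shape_x = tf.shape(tf.ones([1, 2, 3]))   # symbolic shape tensor
shape_y = tf.shape(tf.ones([5, 1, 3]))

with tf.Session() as sess:
    print(sess.run(tf.broadcast_dynamic_shape(shape_x, shape_y)))   # [5 2 3]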

Tensor broadcast_dynamic_shape(IGraphNodeBase shape_x, TensorShape shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
IGraphNodeBase shape_x
A rank 1 integer `Tensor`, representing the shape of x.
TensorShape shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(int shape_x, IEnumerable<int> shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
int shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IEnumerable<int> shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(IGraphNodeBase shape_x, IEnumerable<int> shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
IGraphNodeBase shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IEnumerable<int> shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(int shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
int shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IGraphNodeBase shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(int shape_x, TensorShape shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
int shape_x
A rank 1 integer `Tensor`, representing the shape of x.
TensorShape shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(TensorShape shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
TensorShape shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IGraphNodeBase shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(TensorShape shape_x, IEnumerable<int> shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
TensorShape shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IEnumerable<int> shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(Dimension shape_x, TensorShape shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
Dimension shape_x
A rank 1 integer `Tensor`, representing the shape of x.
TensorShape shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(Dimension shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
Dimension shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IGraphNodeBase shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(TensorShape shape_x, TensorShape shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
TensorShape shape_x
A rank 1 integer `Tensor`, representing the shape of x.
TensorShape shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

Tensor broadcast_dynamic_shape(Dimension shape_x, IEnumerable<int> shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
Dimension shape_x
A rank 1 integer `Tensor`, representing the shape of x.
IEnumerable<int> shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
Tensor
A rank 1 integer `Tensor` representing the broadcasted shape.

object broadcast_dynamic_shape_dyn(object shape_x, object shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.
Parameters
object shape_x
A rank 1 integer `Tensor`, representing the shape of x.
object shape_y
A rank 1 integer `Tensor`, representing the shape of y.
Returns
object
A rank 1 integer `Tensor` representing the broadcasted shape.

TensorShape broadcast_static_shape(Dimension shape_x, int shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
Dimension shape_x
A `TensorShape`
int shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.
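
The static counterpart works directly on `TensorShape` values; a minimal Python sketch:

import tensorflow as tf

shape_x = tf.TensorShape([1, 2, 3])
shape_y = tf.TensorShape([5, 1, 3])
print(tf.broadcast_static_shape(shape_x, shape_y))   # (5, 2, 3)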

TensorShape broadcast_static_shape(Dimension shape_x, TensorShape shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
Dimension shape_x
A `TensorShape`
TensorShape shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(TensorShape shape_x, int shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
TensorShape shape_x
A `TensorShape`
int shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(Dimension shape_x, Dimension shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
Dimension shape_x
A `TensorShape`
Dimension shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(int shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
int shape_x
A `TensorShape`
IGraphNodeBase shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(IGraphNodeBase shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
IGraphNodeBase shape_x
A `TensorShape`
IGraphNodeBase shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(TensorShape shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
TensorShape shape_x
A `TensorShape`
IGraphNodeBase shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(IGraphNodeBase shape_x, TensorShape shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
IGraphNodeBase shape_x
A `TensorShape`
TensorShape shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(IGraphNodeBase shape_x, Dimension shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
IGraphNodeBase shape_x
A `TensorShape`
Dimension shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(Dimension shape_x, IGraphNodeBase shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
Dimension shape_x
A `TensorShape`
IGraphNodeBase shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(IGraphNodeBase shape_x, int shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
IGraphNodeBase shape_x
A `TensorShape`
int shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(TensorShape shape_x, TensorShape shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
TensorShape shape_x
A `TensorShape`
TensorShape shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(int shape_x, Dimension shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
int shape_x
A `TensorShape`
Dimension shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(int shape_x, int shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
int shape_x
A `TensorShape`
int shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(int shape_x, TensorShape shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
int shape_x
A `TensorShape`
TensorShape shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

TensorShape broadcast_static_shape(TensorShape shape_x, Dimension shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
TensorShape shape_x
A `TensorShape`
Dimension shape_y
A `TensorShape`
Returns
TensorShape
A `TensorShape` representing the broadcasted shape.

object broadcast_static_shape_dyn(object shape_x, object shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.
Parameters
object shape_x
A `TensorShape`
object shape_y
A `TensorShape`
Returns
object
A `TensorShape` representing the broadcasted shape.

Tensor broadcast_to(IGraphNodeBase input, IGraphNodeBase shape, string name)

Broadcast an array to a compatible shape.

Broadcasting is the process of making arrays have compatible shapes for arithmetic operations. Two shapes are compatible if, for each dimension pair, they are either equal or one of them is one. When broadcasting a Tensor to a shape, it starts with the trailing dimensions and works its way forward.

In the example shown below, the input Tensor with shape `[3]` is broadcast to an output Tensor with shape `[3, 3]`.
Parameters
IGraphNodeBase input
A `Tensor`. A Tensor to broadcast.
IGraphNodeBase shape
A `Tensor`. Must be one of the following types: `int32`, `int64`. An 1-D `int` Tensor. The shape of the desired output.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
Show Example
>>> x = tf.constant([1, 2, 3])
            >>> y = tf.broadcast_to(x, [3, 3])
            >>> sess = tf.compat.v1.Session()
            >>> sess.run(y)
            array([[1, 2, 3],
                   [1, 2, 3],
                   [1, 2, 3]], dtype=int32) 

object broadcast_to_dyn(object input, object shape, object name)

Broadcast an array to a compatible shape.

Broadcasting is the process of making arrays have compatible shapes for arithmetic operations. Two shapes are compatible if, for each dimension pair, they are either equal or one of them is one. When broadcasting a Tensor to a shape, it starts with the trailing dimensions and works its way forward.

In the example shown below, the input Tensor with shape `[3]` is broadcast to an output Tensor with shape `[3, 3]`.
Parameters
object input
A `Tensor`. A Tensor to broadcast.
object shape
A `Tensor`. Must be one of the following types: `int32`, `int64`. An 1-D `int` Tensor. The shape of the desired output.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `input`.
Show Example
>>> x = tf.constant([1, 2, 3])
            >>> y = tf.broadcast_to(x, [3, 3])
            >>> sess = tf.compat.v1.Session()
            >>> sess.run(y)
            array([[1, 2, 3],
                   [1, 2, 3],
                   [1, 2, 3]], dtype=int32) 

Tensor bucketize_with_input_boundaries(IGraphNodeBase input, IGraphNodeBase boundaries, string name)

object bucketize_with_input_boundaries_dyn(object input, object boundaries, object name)

object build_categorical_equality_splits(IGraphNodeBase num_minibatches, IGraphNodeBase partition_ids, IGraphNodeBase feature_ids, IGraphNodeBase gradients, IGraphNodeBase hessians, IGraphNodeBase class_id, IGraphNodeBase feature_column_group_id, IGraphNodeBase bias_feature_id, IGraphNodeBase l1_regularization, IGraphNodeBase l2_regularization, IGraphNodeBase tree_complexity_regularization, IGraphNodeBase min_node_weight, IGraphNodeBase multiclass_strategy, IGraphNodeBase weak_learner_type, string name)

object build_categorical_equality_splits_dyn(object num_minibatches, object partition_ids, object feature_ids, object gradients, object hessians, object class_id, object feature_column_group_id, object bias_feature_id, object l1_regularization, object l2_regularization, object tree_complexity_regularization, object min_node_weight, object multiclass_strategy, object weak_learner_type, object name)

object build_dense_inequality_splits(IGraphNodeBase num_minibatches, IGraphNodeBase partition_ids, IGraphNodeBase bucket_ids, IGraphNodeBase gradients, IGraphNodeBase hessians, IGraphNodeBase bucket_boundaries, IGraphNodeBase class_id, IGraphNodeBase feature_column_group_id, IGraphNodeBase l1_regularization, IGraphNodeBase l2_regularization, IGraphNodeBase tree_complexity_regularization, IGraphNodeBase min_node_weight, IGraphNodeBase multiclass_strategy, IGraphNodeBase weak_learner_type, string name)

object build_dense_inequality_splits_dyn(object num_minibatches, object partition_ids, object bucket_ids, object gradients, object hessians, object bucket_boundaries, object class_id, object feature_column_group_id, object l1_regularization, object l2_regularization, object tree_complexity_regularization, object min_node_weight, object multiclass_strategy, object weak_learner_type, object name)