LostTech.TensorFlow : API Documentation

Type tf.nn

Namespace tensorflow

Methods

Properties

Public static methods

object all_candidate_sampler(IGraphNodeBase true_classes, int num_true, int num_sampled, bool unique, object seed, string name)

Generate the set of all classes.

Deterministically generates and returns the set of all possible classes. Intended for testing purposes; there is no need to use this in practice, since you might as well use full softmax or full logistic regression.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
int num_true
An `int`. The number of target classes per training example.
int num_sampled
An `int`. The number of possible classes.
bool unique
A `bool`. Ignored.
object seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object
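Show Example
A minimal usage sketch follows (hypothetical: it assumes graph mode, that `tf.constant` in this binding accepts a 2-D `long` array, and that trailing optional arguments may be passed explicitly; in the underlying operation the `object` returned is a tuple of `sampled_candidates`, `true_expected_count`, and `sampled_expected_count`).
```
// Hedged sketch: enumerate all 5 classes as the "sampled" candidates.
var trueClasses = tf.constant(new long[,] { { 0 }, { 2 } }); // [batch_size=2, num_true=1]
object sampled = tf.nn.all_candidate_sampler(
    true_classes: trueClasses,
    num_true: 1,
    num_sampled: 5,    // the total number of possible classes
    unique: true,      // ignored by this sampler
    seed: null,
    name: "all_sampler");
```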

object all_candidate_sampler_dyn(object true_classes, object num_true, object num_sampled, object unique, object seed, object name)

Generate the set of all classes.

Deterministically generates and returns the set of all possible classes. Intended for testing purposes; there is no need to use this in practice, since you might as well use full softmax or full logistic regression.
Parameters
object true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object num_true
An `int`. The number of target classes per training example.
object num_sampled
An `int`. The number of possible classes.
object unique
A `bool`. Ignored.
object seed
An `int`. An operation-specific seed. Default is 0.
object name
A name for the operation (optional).
Returns
object

Tensor atrous_conv2d(IGraphNodeBase value, IGraphNodeBase filters, int rate, string padding, string name)

Atrous convolution (a.k.a. convolution with holes or dilated convolution).

This function is a simpler wrapper around the more general tf.nn.convolution, and exists only for backwards compatibility. You can use tf.nn.convolution to perform 1-D, 2-D, or 3-D atrous convolution.

Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D `value` and `filters` tensors. If the `rate` parameter is equal to one, it performs regular 2-D convolution. If the `rate` parameter is greater than one, it performs convolution with holes, sampling the input values every `rate` pixels in the `height` and `width` dimensions. This is equivalent to convolving the input with a set of upsampled filters, produced by inserting `rate - 1` zeros between two consecutive values of the filters along the `height` and `width` dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).

More specifically:

```
output[batch, height, width, out_channel] =
    sum_{dheight, dwidth, in_channel} (
        filters[dheight, dwidth, in_channel, out_channel] *
        value[batch, height + rate*dheight, width + rate*dwidth, in_channel]
    )
```

Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to `conv2d_transpose` in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.

For a description of atrous convolution and how it can be used for dense feature extraction, please see: [Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs](http://arxiv.org/abs/1412.7062). The same operation is investigated further in [Multi-Scale Context Aggregation by Dilated Convolutions](http://arxiv.org/abs/1511.07122). Previous works that effectively use atrous convolution in different ways are, among others, [OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks](http://arxiv.org/abs/1312.6229) and [Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks](http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related to the so-called noble identities in multi-rate signal processing.

There are many different ways to implement atrous convolution (see the refs above). The implementation here reduces to three operations: a `space_to_batch`, a regular `conv2d` with unit strides, and a `batch_to_space`. Advanced usage: a sequence of `atrous_conv2d` operations with identical `rate` parameters, `'SAME'` `padding`, and filters with odd heights/widths can equivalently be performed more cheaply, in both computation and memory, by wrapping the whole sequence in a single `space_to_batch`/`batch_to_space` pair, because a pair of consecutive `space_to_batch` and `batch_to_space` ops with the same `block_size` cancel out when their respective `paddings` and `crops` inputs are identical.
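This reduction can be sketched as follows (a hypothetical sketch, not the literal library internals: it assumes `paddings` and `crops` tensors precomputed so that the spatial dimensions become multiples of `rate`, and that this binding exposes `tf.space_to_batch`, `tf.nn.conv2d`, and `tf.batch_to_space` with Python-like arguments):
```
// 1) Rearrange the input so a dense VALID conv2d over the result equals an
//    atrous convolution over the original input.
var net = tf.space_to_batch(value, paddings, block_size: rate);
// 2) Filter with a regular conv2d using unit strides.
net = tf.nn.conv2d(net, filters, new[] { 1, 1, 1, 1 }, "VALID");
// 3) Undo the rearrangement.
net = tf.batch_to_space(net, crops, block_size: rate);
```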
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC" format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, in_channels, out_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`. Output shape with `'VALID'` padding is:

[batch, height - rate * (filter_height - 1), width - rate * (filter_width - 1), out_channels].

Output shape with `'SAME'` padding is:

[batch, height, width, out_channels].
Show Example
atrous_conv2d(value, filters, rate, padding=padding) 
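With `rate` equal to one, the operation reduces to a plain 2-D convolution. The following hedged sketch assumes float NHWC `value` and HWIO `filters` tensors created elsewhere, the `conv2d` overload taking an `int` array of strides, and that trailing optional arguments may be omitted:
```
// With rate == 1, atrous_conv2d degenerates to a standard 2-D convolution.
Tensor atrous = tf.nn.atrous_conv2d(value, filters, rate: 1, padding: "SAME");
Tensor plain = tf.nn.conv2d(value, filters, new[] { 1, 1, 1, 1 }, "SAME");
// `atrous` and `plain` should be numerically identical.
```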

object atrous_conv2d_dyn(object value, object filters, object rate, object padding, object name)

Atrous convolution (a.k.a. convolution with holes or dilated convolution).

This function is a simpler wrapper around the more general tf.nn.convolution, and exists only for backwards compatibility. You can use tf.nn.convolution to perform 1-D, 2-D, or 3-D atrous convolution.

Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D `value` and `filters` tensors. If the `rate` parameter is equal to one, it performs regular 2-D convolution. If the `rate` parameter is greater than one, it performs convolution with holes, sampling the input values every `rate` pixels in the `height` and `width` dimensions. This is equivalent to convolving the input with a set of upsampled filters, produced by inserting `rate - 1` zeros between two consecutive values of the filters along the `height` and `width` dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).

More specifically:

```
output[batch, height, width, out_channel] =
    sum_{dheight, dwidth, in_channel} (
        filters[dheight, dwidth, in_channel, out_channel] *
        value[batch, height + rate*dheight, width + rate*dwidth, in_channel]
    )
```

Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to `conv2d_transpose` in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.

For a description of atrous convolution and how it can be used for dense feature extraction, please see: [Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs](http://arxiv.org/abs/1412.7062). The same operation is investigated further in [Multi-Scale Context Aggregation by Dilated Convolutions](http://arxiv.org/abs/1511.07122). Previous works that effectively use atrous convolution in different ways are, among others, [OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks](http://arxiv.org/abs/1312.6229) and [Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks](http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related to the so-called noble identities in multi-rate signal processing.

There are many different ways to implement atrous convolution (see the refs above). The implementation here reduces to three operations: a `space_to_batch`, a regular `conv2d` with unit strides, and a `batch_to_space`. Advanced usage: a sequence of `atrous_conv2d` operations with identical `rate` parameters, `'SAME'` `padding`, and filters with odd heights/widths can equivalently be performed more cheaply, in both computation and memory, by wrapping the whole sequence in a single `space_to_batch`/`batch_to_space` pair, because a pair of consecutive `space_to_batch` and `batch_to_space` ops with the same `block_size` cancel out when their respective `paddings` and `crops` inputs are identical.
Parameters
object value
A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC" format. Its shape is `[batch, in_height, in_width, in_channels]`.
object filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, in_channels, out_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
object rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
object name
Optional name for the returned tensor.
Returns
object
A `Tensor` with the same type as `value`. Output shape with `'VALID'` padding is:

[batch, height - rate * (filter_height - 1), width - rate * (filter_width - 1), out_channels].

Output shape with `'SAME'` padding is:

[batch, height, width, out_channels].
Show Example
atrous_conv2d(value, filters, rate, padding=padding) 

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, IEnumerable<int> output_shape, int rate, ValueTuple<IEnumerable<object>, object> padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.
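Show Example
A hypothetical call using the overload that takes an `IEnumerable<int>` `output_shape` (tensor names and shapes are illustrative only; `features` and `kernel` are assumed to be created elsewhere with the shapes documented above):
```
// Sketch: upsample features back to the input resolution.
Tensor restored = tf.nn.atrous_conv2d_transpose(
    value: features,                          // [batch, in_h, in_w, in_channels]
    filters: kernel,                          // [fh, fw, out_channels, in_channels]
    output_shape: new[] { 16, 64, 64, 32 },   // [batch, out_h, out_w, out_channels]
    rate: 2,
    padding: "SAME",
    name: "atrous_deconv");
```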

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, IEnumerable<int> output_shape, int rate, string padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, int rate, ValueTuple<IEnumerable<object>, object> padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, IGraphNodeBase output_shape, int rate, ValueTuple<IEnumerable<object>, object> padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, int rate, string padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, object output_shape, int rate, ValueTuple<IEnumerable<object>, object> padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, object output_shape, int rate, string padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor atrous_conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filters, IGraphNodeBase output_shape, int rate, string padding, string name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `value`.

object atrous_conv2d_transpose_dyn(object value, object filters, object output_shape, object rate, object padding, object name)

The transpose of `atrous_conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
Parameters
object value
A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`.
object filters
A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object rate
A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm.
object name
Optional name for the returned tensor.
Returns
object
A `Tensor` with the same type as `value`.

Tensor avg_pool(IEnumerable<object> value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.
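Show Example
A minimal sketch of 2x2 average pooling with stride 2, using the overload that takes an `IEnumerable<int>` `ksize` (assumes an NHWC `float32` tensor `images` created elsewhere; the trailing `input` argument is just an alias for `value`, so `null` is passed):
```
// Halve the spatial resolution by averaging non-overlapping 2x2 windows.
Tensor pooled = tf.nn.avg_pool(
    value: images,                  // [batch, height, width, channels]
    ksize: new[] { 1, 2, 2, 1 },    // pooling window per dimension
    strides: new[] { 1, 2, 2, 1 },  // window step per dimension
    padding: "VALID",
    data_format: "NHWC",
    name: "avg_pool_2x2",
    input: null);
```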

Tensor avg_pool(object value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
object value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(object value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
object value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(object value, ValueTuple<int, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
object value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(ndarray value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
ndarray value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(object value, ValueTuple<int, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
object value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IEnumerable<object> value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IGraphNodeBase value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IGraphNodeBase value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IGraphNodeBase value, ValueTuple<int, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IGraphNodeBase value, ValueTuple<int, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IGraphNodeBase value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(PythonClassContainer value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
PythonClassContainer value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(PythonClassContainer value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
PythonClassContainer value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(PythonClassContainer value, ValueTuple<int, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
PythonClassContainer value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(PythonClassContainer value, ValueTuple<int, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
PythonClassContainer value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(PythonClassContainer value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
PythonClassContainer value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(PythonClassContainer value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
PythonClassContainer value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IGraphNodeBase value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IEnumerable<object> value, ValueTuple<int, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(object value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
object value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(object value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
object value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IEnumerable<object> value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IEnumerable<object> value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(ndarray value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
ndarray value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(IEnumerable<object> value, ValueTuple<int, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(ndarray value, ValueTuple<int, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
ndarray value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(ndarray value, ValueTuple<int, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
ndarray value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(ndarray value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
ndarray value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

Tensor avg_pool(ndarray value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
ndarray value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` with the same type as `value`. The average pooled output tensor.

object avg_pool_dyn(object value, object ksize, object strides, object padding, ImplicitContainer<T> data_format, object name, object input)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Parameters
object value
A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
object ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
A string. 'NHWC' and 'NCHW' are supported.
object name
Optional name for the operation.
object input
Alias for value.
Returns
object
A `Tensor` with the same type as `value`. The average pooled output tensor.
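
For illustration, a minimal sketch of a typical call, assuming a 4-D `float32` input named `images` is already in scope (the variable name and tensor construction are placeholders, not part of the API):

```
// 2x2 average pooling with stride 2 over an NHWC batch of images.
// `images` is assumed to be a 4-D float32 tensor [batch, height, width, channels].
var pooled = tf.nn.avg_pool(
    images,
    ksize: new[] { 1, 2, 2, 1 },    // pool over height and width only
    strides: new[] { 1, 2, 2, 1 },  // move the window 2 pixels at a time
    padding: "SAME",
    data_format: "NHWC");
// With SAME padding, the spatial dimensions are halved (rounding up).
```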

Tensor avg_pool_v2(ndarray input, int ksize, int strides, string padding, object data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
int ksize
An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The average pooled output tensor.

Tensor avg_pool_v2(IGraphNodeBase input, int ksize, int strides, string padding, object data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
int ksize
An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The average pooled output tensor.

object avg_pool_v2_dyn(object input, object ksize, object strides, object padding, object data_format, object name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
object input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
object ksize
An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
object name
Optional name for the operation.
Returns
object
A `Tensor` of format specified by `data_format`. The average pooled output tensor.
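
A sketch of the v2 interface, which takes `input` rather than `value` and accepts a scalar window size and stride that apply to every spatial dimension (again, `images` stands for any suitable 4-D input):

```
// Scalar ksize/strides are shorthand for the same value in each spatial dimension.
var pooled = tf.nn.avg_pool_v2(
    images, ksize: 2, strides: 2,
    padding: "VALID", data_format: "NHWC");
```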

Tensor avg_pool1d(IGraphNodeBase input, int ksize, int strides, string padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.

Note: internally this op reshapes the input and uses the underlying 2-D operation.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NWC", "NCW". Defaults to "NWC".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The average pooled output tensor.

Tensor avg_pool1d(ndarray input, int ksize, int strides, string padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.

Note: internally this op reshapes the input and uses the underlying 2-D operation.
Parameters
ndarray input
A 3-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NWC", "NCW". Defaults to "NWC".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The average pooled output tensor.

object avg_pool1d_dyn(object input, object ksize, object strides, object padding, ImplicitContainer<T> data_format, object name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.

Note: internally this op reshapes the input and uses the underlying 2-D operation.
Parameters
object input
A 3-D `Tensor` of the format specified by `data_format`.
object ksize
An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
An optional string from: "NWC", "NCW". Defaults to "NWC".
object name
A name for the operation (optional).
Returns
object
A `Tensor` of format specified by `data_format`. The average pooled output tensor.
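
A sketch for the 1-D case, e.g. smoothing a batch of signals laid out as `[batch, width, channels]` (`signals` is a placeholder for any 3-D input):

```
// Average over windows of 4 samples, advancing 2 samples per step.
var smoothed = tf.nn.avg_pool1d(
    signals, ksize: 4, strides: 2,
    padding: "SAME", data_format: "NWC");
```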

Tensor avg_pool3d(ndarray input, int ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, int ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, int ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, int ksize, int strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, ValueTuple<int, object, object> ksize, int strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, ValueTuple<int, object, object> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, IEnumerable<int> ksize, int strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, IEnumerable<int> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, IEnumerable<int> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, int ksize, int strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, IEnumerable<int> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, IEnumerable<int> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, IEnumerable<int> ksize, int strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, ValueTuple<int, object, object> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, int ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, ValueTuple<int, object, object> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(IGraphNodeBase input, ValueTuple<int, object, object> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.

Tensor avg_pool3d(ndarray input, ValueTuple<int, object, object> ksize, int strides, object padding, string data_format, string name)

Performs the average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Parameters
ndarray input
A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NDHWC' and 'NCDHW' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` with the same type as `input`. The average pooled output tensor.
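
A sketch for the 3-D case, e.g. downsampling a batch of video volumes laid out as `[batch, depth, height, width, channels]` (`volumes` is a placeholder for any 5-D input):

```
// 2x2x2 average pooling over depth, height and width.
var pooled = tf.nn.avg_pool3d(
    volumes,
    ksize: new[] { 1, 2, 2, 2, 1 },
    strides: new[] { 1, 2, 2, 2, 1 },
    padding: "VALID",
    data_format: "NDHWC");
```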

object batch_norm_with_global_normalization(object t, object m, object v, object beta, object gamma, Nullable<double> variance_epsilon, Nullable<bool> scale_after_normalization, string name, object input, object mean, object variance)

Batch normalization.

This op is deprecated. See tf.nn.batch_normalization.
Parameters
object t
A 4D input Tensor.
object m
A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
object v
A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
object beta
A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor.
object gamma
A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied by the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
Nullable<bool> scale_after_normalization
A bool indicating whether the resulting tensor needs to be multiplied by gamma.
string name
A name for this operation (optional).
object input
Alias for t.
object mean
Alias for m.
object variance
Alias for v.
Returns
object
A batch-normalized `t`.

object batch_norm_with_global_normalization_dyn(object t, object m, object v, object beta, object gamma, object variance_epsilon, object scale_after_normalization, object name, object input, object mean, object variance)

Batch normalization.

This op is deprecated. See tf.nn.batch_normalization.
Parameters
object t
A 4D input Tensor.
object m
A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
object v
A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
object beta
A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor.
object gamma
A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied by the normalized tensor.
object variance_epsilon
A small float number to avoid dividing by 0.
object scale_after_normalization
A bool indicating whether the resulting tensor needs to be multiplied by gamma.
object name
A name for this operation (optional).
object input
Alias for t.
object mean
Alias for m.
object variance
Alias for v.
Returns
object
A batch-normalized `t`.

object batch_normalization(IEnumerable<IGraphNodeBase> x, object mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IEnumerable<IGraphNodeBase> x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.
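
A minimal sketch of the common call, normalizing the last ('depth') dimension of a `[batch, depth]` activation. The `tf.nn.moments` call, its assumed tuple result, and the variable names are illustrative assumptions, not verbatim library API; any matching 1-D `mean`/`variance`/`offset`/`scale` tensors work:

```
// x: [batch, depth] activations; gamma/beta: 1-D tensors of size depth.
// Per-channel statistics over the batch dimension (axis 0), as described above.
var moments = tf.nn.moments(x, new[] { 0 });   // assumed to yield a (mean, variance) pair
var y = tf.nn.batch_normalization(
    x,
    mean: moments.Item1, variance: moments.Item2,
    offset: beta, scale: gamma,
    variance_epsilon: 1e-3);
```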

object batch_normalization(ValueTuple<PythonClassContainer, PythonClassContainer> x, object mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IndexedSlices x, IEnumerable<object> mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IndexedSlices x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IndexedSlices x, IEnumerable<object> mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IndexedSlices x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IndexedSlices x, object mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IndexedSlices x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IndexedSlices x, object mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IndexedSlices x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IGraphNodeBase x, IEnumerable<object> mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IGraphNodeBase x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IGraphNodeBase x, IEnumerable<object> mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IGraphNodeBase x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IGraphNodeBase x, object mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IGraphNodeBase x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IGraphNodeBase x, object mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IGraphNodeBase x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IEnumerable<IGraphNodeBase> x, IEnumerable<object> mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IEnumerable<IGraphNodeBase> x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \(\beta\) in equations, or `None`. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
The normalized, scaled, offset tensor.

object batch_normalization(IEnumerable<IGraphNodeBase> x, IEnumerable<object> mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case, for example, for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IEnumerable<IGraphNodeBase> x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \\(\gamma\\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
the normalized, scaled, offset tensor.

object batch_normalization(IEnumerable<IGraphNodeBase> x, object mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case, for example, for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IEnumerable<IGraphNodeBase> x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \\(\gamma\\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
the normalized, scaled, offset tensor.

object batch_normalization(ValueTuple<PythonClassContainer, PythonClassContainer> x, object mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case, for example, for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \\(\gamma\\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
the normalized, scaled, offset tensor.

object batch_normalization(ValueTuple<PythonClassContainer, PythonClassContainer> x, IEnumerable<object> mean, IEnumerable<object> variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case, for example, for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
IEnumerable<object> variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \\(\gamma\\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
the normalized, scaled, offset tensor.

object batch_normalization(ValueTuple<PythonClassContainer, PythonClassContainer> x, IEnumerable<object> mean, object variance, object offset, object scale, Nullable<double> variance_epsilon, string name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case, for example, for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> x
Input `Tensor` of arbitrary dimensionality.
IEnumerable<object> mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \\(\gamma\\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
Nullable<double> variance_epsilon
A small float number to avoid dividing by 0.
string name
A name for this operation (optional).
Returns
object
the normalized, scaled, offset tensor.

object batch_normalization_dyn(object x, object mean, object variance, object offset, object scale, object variance_epsilon, object name)

Batch normalization.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case, for example, for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
object x
Input `Tensor` of arbitrary dimensionality.
object mean
A mean `Tensor`.
object variance
A variance `Tensor`.
object offset
An offset `Tensor`, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
object scale
A scale `Tensor`, often denoted \\(\gamma\\) in equations, or `None`. If present, the scale is applied to the normalized tensor.
object variance_epsilon
A small float number to avoid dividing by 0.
object name
A name for this operation (optional).
Returns
object
the normalized, scaled, offset tensor.

Tensor bias_add(IEnumerable<int> value, IGraphNodeBase bias, string data_format, string name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IGraphNodeBase bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.
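
A minimal Python sketch of the underlying op with illustrative shapes; the key constraint is that `bias` is 1-D and matches the channel dimension of `value`:

```
import tensorflow as tf

value = tf.ones([2, 3, 3, 4])             # NHWC: channel dimension is 4
bias = tf.constant([0.1, 0.2, 0.3, 0.4])  # one entry per channel
out = tf.nn.bias_add(value, bias)         # bias is broadcast over N, H, W
```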

Tensor bias_add(IGraphNodeBase value, IndexedSlices bias, string data_format, PythonFunctionContainer name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IGraphNodeBase value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IndexedSlices bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IGraphNodeBase value, IndexedSlices bias, string data_format, string name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IGraphNodeBase value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IndexedSlices bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IGraphNodeBase value, ValueTuple<PythonClassContainer, PythonClassContainer> bias, string data_format, string name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IGraphNodeBase value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
ValueTuple<PythonClassContainer, PythonClassContainer> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IGraphNodeBase value, ValueTuple<PythonClassContainer, PythonClassContainer> bias, string data_format, PythonFunctionContainer name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IGraphNodeBase value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
ValueTuple<PythonClassContainer, PythonClassContainer> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IGraphNodeBase value, IEnumerable<int> bias, string data_format, string name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IGraphNodeBase value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IEnumerable<int> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IGraphNodeBase value, IEnumerable<int> bias, string data_format, PythonFunctionContainer name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IGraphNodeBase value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IEnumerable<int> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IEnumerable<int> value, IEnumerable<int> bias, string data_format, string name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IEnumerable<int> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IEnumerable<int> value, IEnumerable<int> bias, string data_format, PythonFunctionContainer name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IEnumerable<int> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IEnumerable<int> value, ValueTuple<PythonClassContainer, PythonClassContainer> bias, string data_format, PythonFunctionContainer name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
ValueTuple<PythonClassContainer, PythonClassContainer> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IEnumerable<int> value, ValueTuple<PythonClassContainer, PythonClassContainer> bias, string data_format, string name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
ValueTuple<PythonClassContainer, PythonClassContainer> bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IEnumerable<int> value, IndexedSlices bias, string data_format, PythonFunctionContainer name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IndexedSlices bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IEnumerable<int> value, IndexedSlices bias, string data_format, string name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IndexedSlices bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor bias_add(IEnumerable<int> value, IGraphNodeBase bias, string data_format, PythonFunctionContainer name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
IEnumerable<int> value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
IGraphNodeBase bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
string data_format
A string. 'N...C' and 'NC...' are supported.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `value`.

object bias_add_dyn(object value, object bias, object data_format, object name)

Adds `bias` to `value`.

This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike tf.add, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
Parameters
object value
A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
object bias
A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
object data_format
A string. 'N...C' and 'NC...' are supported.
object name
A name for the operation (optional).
Returns
object
A `Tensor` with the same type as `value`.

ValueTuple<object, object> bidirectional_dynamic_rnn(LSTMCell cell_fw, LSTMCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase sequence_length, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, object parallel_iterations, bool swap_memory, Nullable<bool> time_major, object scope)

Creates a dynamic version of bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API.

Takes input and builds independent forward and backward RNNs. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LSTMCell cell_fw
An instance of RNNCell, to be used for forward direction.
LSTMCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
The RNN inputs. If time_major == False (default), this must be a tensor of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If time_major == True, this must be a tensor of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences in the batch. If not provided, all batch entries are assumed to be full sequences; and time reversal is applied from time `0` to `max_time` for each sequence.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial states and expected output. Required if initial_states are not provided or RNN states have a heterogeneous dtype.
object parallel_iterations
(Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
object scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn".
Returns
ValueTuple<object, object>
A tuple `(outputs, output_states)` where `outputs` is a tuple `(output_fw, output_bw)` containing the forward and backward RNN output `Tensor`s, and `output_states` is a tuple `(output_state_fw, output_state_bw)` with the final states of the forward and backward RNNs.
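
A hedged TensorFlow 1.x sketch (cell sizes and input shapes are illustrative) wiring two `LSTMCell`s through this op and concatenating the directional outputs:

```
import tensorflow as tf  # TensorFlow 1.x

cell_fw = tf.nn.rnn_cell.LSTMCell(64)
cell_bw = tf.nn.rnn_cell.LSTMCell(64)
inputs = tf.placeholder(tf.float32, [None, 20, 8])  # [batch, max_time, depth]
seq_len = tf.placeholder(tf.int32, [None])

(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, inputs, sequence_length=seq_len, dtype=tf.float32)
outputs = tf.concat([out_fw, out_bw], axis=-1)      # [batch, max_time, 128]
```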

ValueTuple<Tensor, object> collapse_repeated(IGraphNodeBase labels, IEnumerable<int> seq_length, string name)

Merge repeated labels into single labels.
Parameters
IGraphNodeBase labels
Tensor of shape [batch, max value in seq_length]
IEnumerable<int> seq_length
Tensor of shape [batch], sequence length of each batch element.
string name
A name for this `Op`. Defaults to "collapse_repeated_labels".
Returns
ValueTuple<Tensor, object>
A tuple `(collapsed_labels, new_seq_length)` where `collapsed_labels` holds the labels with adjacent repeats merged (padded to the new maximum length), and `new_seq_length` holds the collapsed sequence length of each batch element.
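
For instance, a short Python sketch with made-up label values:

```
import tensorflow as tf

labels = tf.constant([[1, 1, 2, 2, 1],
                      [1, 2, 3, 4, 4]])
seq_length = tf.constant([5, 5])
collapsed, new_len = tf.nn.collapse_repeated(labels, seq_length)
# Adjacent repeats are merged: the rows become [1, 2, 1] and [1, 2, 3, 4]
# (zero-padded to a common width), and new_len evaluates to [3, 4].
```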

ValueTuple<Tensor, object> collapse_repeated(IEnumerable<object> labels, IEnumerable<int> seq_length, string name)

Merge repeated labels into single labels.
Parameters
IEnumerable<object> labels
Tensor of shape [batch, max value in seq_length]
IEnumerable<int> seq_length
Tensor of shape [batch], sequence length of each batch element.
string name
A name for this `Op`. Defaults to "collapse_repeated_labels".
Returns
ValueTuple<Tensor, object>
A tuple `(collapsed_labels, new_seq_length)` where `collapsed_labels` holds the labels with adjacent repeats merged (padded to the new maximum length), and `new_seq_length` holds the collapsed sequence length of each batch element.

ValueTuple<Tensor, object> collapse_repeated(IGraphNodeBase labels, IGraphNodeBase seq_length, string name)

Merge repeated labels into single labels.
Parameters
IGraphNodeBase labels
Tensor of shape [batch, max value in seq_length]
IGraphNodeBase seq_length
Tensor of shape [batch], sequence length of each batch element.
string name
A name for this `Op`. Defaults to "collapse_repeated_labels".
Returns
ValueTuple<Tensor, object>
A tuple `(collapsed_labels, new_seq_length)` where `collapsed_labels` holds the labels with adjacent repeats merged (padded to the new maximum length), and `new_seq_length` holds the collapsed sequence length of each batch element.

ValueTuple<Tensor, object> collapse_repeated(IEnumerable<object> labels, IGraphNodeBase seq_length, string name)

Merge repeated labels into single labels.
Parameters
IEnumerable<object> labels
Tensor of shape [batch, max value in seq_length]
IGraphNodeBase seq_length
Tensor of shape [batch], sequence length of each batch element.
string name
A name for this `Op`. Defaults to "collapse_repeated_labels".
Returns
ValueTuple<Tensor, object>
A tuple `(collapsed_labels, new_seq_length)` where `collapsed_labels` holds the labels with adjacent repeats merged (padded to the new maximum length), and `new_seq_length` holds the collapsed sequence length of each batch element.

object collapse_repeated_dyn(object labels, object seq_length, object name)

Merge repeated labels into single labels.
Parameters
object labels
Tensor of shape [batch, max value in seq_length]
object seq_length
Tensor of shape [batch], sequence length of each batch element.
object name
A name for this `Op`. Defaults to "collapse_repeated_labels".
Returns
object
A tuple `(collapsed_labels, new_seq_length)` where `collapsed_labels` holds the labels with adjacent repeats merged (padded to the new maximum length), and `new_seq_length` holds the collapsed sequence length of each batch element.

object compute_accidental_hits(IGraphNodeBase true_classes, IndexedSlices sampled_candidates, int num_true, object seed, string name)

Compute the position ids in `sampled_candidates` matching `true_classes`.

In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.

See our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).

We presuppose that the `sampled_candidates` are unique.

We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples `(index, id, weight)`, where `index` represents the row number in `true_classes`, `id` represents the position in `sampled_candidates`, and weight is `-FLOAT_MAX`.

The result of this op should be passed through a `sparse_to_dense` operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
IndexedSlices sampled_candidates
A tensor of type `int64` and shape `[num_sampled]`. The sampled_candidates output of CandidateSampler.
int num_true
An `int`. The number of target classes per training example.
object seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object
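
As a sketch under illustrative values, one would typically draw candidates from a sampler and then mask the accidental hits; the class range and sample counts below are arbitrary:

```
import tensorflow as tf  # TensorFlow 1.x

true_classes = tf.constant([[2], [5]], dtype=tf.int64)  # [batch_size, num_true]
sampled, _, _ = tf.nn.uniform_candidate_sampler(
    true_classes, num_true=1, num_sampled=4, unique=True, range_max=10)
indices, ids, weights = tf.nn.compute_accidental_hits(
    true_classes, sampled, num_true=1)
# Each accidental hit yields a (row index, sampled position, -FLOAT_MAX)
# triple; scattering the weights into the sampled logits masks those classes.
```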

object compute_accidental_hits(IGraphNodeBase true_classes, IGraphNodeBase sampled_candidates, int num_true, object seed, string name)

Compute the position ids in `sampled_candidates` matching `true_classes`.

In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.

See our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).

We presuppose that the `sampled_candidates` are unique.

We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples `(index, id, weight)`, where `index` represents the row number in `true_classes`, `id` represents the position in `sampled_candidates`, and weight is `-FLOAT_MAX`.

The result of this op should be passed through a `sparse_to_dense` operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
IGraphNodeBase sampled_candidates
A tensor of type `int64` and shape `[num_sampled]`. The sampled_candidates output of CandidateSampler.
int num_true
An `int`. The number of target classes per training example.
object seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object

object compute_accidental_hits(IGraphNodeBase true_classes, ValueTuple<PythonClassContainer, PythonClassContainer> sampled_candidates, int num_true, object seed, string name)

Compute the position ids in `sampled_candidates` matching `true_classes`.

In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.

See our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).

We presuppose that the `sampled_candidates` are unique.

We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples `(index, id, weight)`, where `index` represents the row number in `true_classes`, `id` represents the position in `sampled_candidates`, and weight is `-FLOAT_MAX`.

The result of this op should be passed through a `sparse_to_dense` operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
ValueTuple<PythonClassContainer, PythonClassContainer> sampled_candidates
A tensor of type `int64` and shape `[num_sampled]`. The sampled_candidates output of CandidateSampler.
int num_true
An `int`. The number of target classes per training example.
object seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object

object compute_accidental_hits_dyn(object true_classes, object sampled_candidates, object num_true, object seed, object name)

Compute the position ids in `sampled_candidates` matching `true_classes`.

In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.

See our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).

We presuppose that the `sampled_candidates` are unique.

We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples `(index, id, weight)`, where `index` represents the row number in `true_classes`, `id` represents the position in `sampled_candidates`, and weight is `-FLOAT_MAX`.

The result of this op should be passed through a `sparse_to_dense` operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.
Parameters
object true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object sampled_candidates
A tensor of type `int64` and shape `[num_sampled]`. The sampled_candidates output of CandidateSampler.
object num_true
An `int`. The number of target classes per training example.
object seed
An `int`. An operation-specific seed. Default is 0.
object name
A name for the operation (optional).
Returns
object

object compute_average_loss(IGraphNodeBase per_example_loss, IEnumerable<double> sample_weight, Nullable<int> global_batch_size)

Scales per-example losses with sample_weights and computes their average.

Usage with distribution strategy and custom training loop:
Parameters
IGraphNodeBase per_example_loss
Per-example loss.
IEnumerable<double> sample_weight
Optional weighting for each example.
Nullable<int> global_batch_size
Optional global batch size value. Defaults to (size of first dimension of `losses`) * (number of replicas).
Returns
object
Scalar loss value.
Show Example
```
with strategy.scope():
    def compute_loss(labels, predictions, sample_weight=None):
        # If you are using a `Loss` class instead, set reduction to `NONE` so
        # that we can do the reduction afterwards and divide by global batch
        # size.
        per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
            labels, predictions)

        # Compute loss that is scaled by sample_weight and by global batch
        # size.
        return tf.nn.compute_average_loss(
            per_example_loss,
            sample_weight=sample_weight,
            global_batch_size=GLOBAL_BATCH_SIZE)
```

object compute_average_loss(IEnumerable<int> per_example_loss, IEnumerable<double> sample_weight, Nullable<int> global_batch_size)

Scales per-example losses with sample_weights and computes their average.

Usage with distribution strategy and custom training loop:
Parameters
IEnumerable<int> per_example_loss
Per-example loss.
IEnumerable<double> sample_weight
Optional weighting for each example.
Nullable<int> global_batch_size
Optional global batch size value. Defaults to (size of first dimension of `losses`) * (number of replicas).
Returns
object
Scalar loss value.
Show Example
```
with strategy.scope():
    def compute_loss(labels, predictions, sample_weight=None):
        # If you are using a `Loss` class instead, set reduction to `NONE` so
        # that we can do the reduction afterwards and divide by global batch
        # size.
        per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
            labels, predictions)

        # Compute loss that is scaled by sample_weight and by global batch
        # size.
        return tf.nn.compute_average_loss(
            per_example_loss,
            sample_weight=sample_weight,
            global_batch_size=GLOBAL_BATCH_SIZE)
```

object compute_average_loss_dyn(object per_example_loss, object sample_weight, object global_batch_size)

Scales per-example losses with sample_weights and computes their average.

Usage with distribution strategy and custom training loop:
Parameters
object per_example_loss
Per-example loss.
object sample_weight
Optional weighting for each example.
object global_batch_size
Optional global batch size value. Defaults to (size of first dimension of `losses`) * (number of replicas).
Returns
object
Scalar loss value.
Show Example
```
with strategy.scope():
    def compute_loss(labels, predictions, sample_weight=None):
        # If you are using a `Loss` class instead, set reduction to `NONE` so
        # that we can do the reduction afterwards and divide by global batch
        # size.
        per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
            labels, predictions)

        # Compute loss that is scaled by sample_weight and by global batch
        # size.
        return tf.nn.compute_average_loss(
            per_example_loss,
            sample_weight=sample_weight,
            global_batch_size=GLOBAL_BATCH_SIZE)
```

Tensor conv_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, string strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 0. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `value`.
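
A minimal Python sketch of a 2-D transposed convolution through this generic entry point (shapes chosen only for illustration):

```
import tensorflow as tf

x = tf.ones([1, 8, 8, 3])         # NHWC input
filters = tf.ones([3, 3, 16, 3])  # [height, width, out_channels, in_channels]
y = tf.nn.conv_transpose(
    x, filters, output_shape=[1, 16, 16, 16],
    strides=2, padding='SAME')    # upsamples 8x8 -> 16x16
```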

Tensor conv_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, int strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
int filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 0. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, string strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
int filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 0. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, int strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
int filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 0. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, int strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 0. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, string strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
int filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 0. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified, "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, int strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [out_channels, in_channels]`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified, "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, string strides, string padding, object data_format, object dilations, string name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [out_channels, in_channels]`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
string name
A name for the operation (optional). If not specified, "conv_transpose" is used.
Returns
Tensor
A `Tensor` with the same type as `input`.

object conv_transpose_dyn(object input, object filters, object output_shape, object strides, ImplicitContainer<T> padding, object data_format, object dilations, object name)

The transpose of `convolution`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `convolution` rather than an actual deconvolution.
Parameters
object input
An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
object filters
An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [out_channels, in_channels]`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ImplicitContainer<T> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details.
object name
A name for the operation (optional). If not specified, "conv_transpose" is used.
Returns
object
A `Tensor` with the same type as `input`.

Tensor conv1d(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filters, IEnumerable<int> stride, string padding, Nullable<bool> use_cudnn_on_gpu, string data_format, string name, IGraphNodeBase input, object dilations)

Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NCHW')`. They will be removed in a future version. Instructions for updating: `NCHW` for data_format is deprecated, use `NCW` instead

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NHWC')`. They will be removed in a future version. Instructions for updating: `NHWC` for data_format is deprecated, use `NWC` instead

Given an input tensor of shape [batch, in_width, in_channels] if data_format is "NWC", or [batch, in_channels, in_width] if data_format is "NCW", and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if `data_format` does not start with "NC", a tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding, as in conv2d) and returned to the caller.
Parameters
IEnumerable<IGraphNodeBase> value
A 3D `Tensor`. Must be of type `float16`, `float32`, or `float64`.
IGraphNodeBase filters
A 3D `Tensor`. Must have the same type as `value`.
IEnumerable<int> stride
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
'SAME' or 'VALID'
Nullable<bool> use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, where data is stored in the order `[batch, in_width, in_channels]`. The `"NCW"` format stores data as `[batch, in_channels, in_width]`.
string name
A name for the operation (optional).
IGraphNodeBase input
Alias for value.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor`. Has the same type as input.
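
A minimal usage sketch (TensorFlow Python API, which these overloads wrap; shapes and values are illustrative assumptions):

```python
import tensorflow as tf

x = tf.ones([4, 32, 3])   # NWC: batch=4, in_width=32, 3 input channels
f = tf.ones([5, 3, 8])    # [filter_width, in_channels, out_channels]
y = tf.nn.conv1d(x, f, stride=1, padding="SAME")
print(y.shape)            # (4, 32, 8)
# With 'SAME' padding, out_width = ceil(in_width / stride); with 'VALID' and
# dilation 1, out_width = ceil((in_width - filter_width + 1) / stride).
```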

Tensor conv1d(IGraphNodeBase value, IGraphNodeBase filters, IEnumerable<int> stride, string padding, Nullable<bool> use_cudnn_on_gpu, string data_format, string name, IGraphNodeBase input, object dilations)

Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NCHW')`. They will be removed in a future version. Instructions for updating: `NCHW` for data_format is deprecated, use `NCW` instead

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NHWC')`. They will be removed in a future version. Instructions for updating: `NHWC` for data_format is deprecated, use `NWC` instead

Given an input tensor of shape [batch, in_width, in_channels] if data_format is "NWC", or [batch, in_channels, in_width] if data_format is "NCW", and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if `data_format` does not start with "NC", a tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding, as in conv2d) and returned to the caller.
Parameters
IGraphNodeBase value
A 3D `Tensor`. Must be of type `float16`, `float32`, or `float64`.
IGraphNodeBase filters
A 3D `Tensor`. Must have the same type as `value`.
IEnumerable<int> stride
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
'SAME' or 'VALID'
Nullable<bool> use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, where data is stored in the order `[batch, in_width, in_channels]`. The `"NCW"` format stores data as `[batch, in_channels, in_width]`.
string name
A name for the operation (optional).
IGraphNodeBase input
Alias for value.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor`. Has the same type as input.
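
The reshaping described above can be checked directly. The following sketch (Python API, illustrative) reproduces `conv1d` with an explicit expand/`conv2d`/squeeze round trip:

```python
import tensorflow as tf

x = tf.random.normal([4, 32, 3])   # [batch, in_width, in_channels]
f = tf.random.normal([5, 3, 8])    # [filter_width, in_channels, out_channels]

y1 = tf.nn.conv1d(x, f, stride=1, padding="SAME")

# The equivalent conv2d call after the reshaping described above:
x2 = tf.expand_dims(x, axis=1)     # [batch, 1, in_width, in_channels]
f2 = tf.expand_dims(f, axis=0)     # [1, filter_width, in_channels, out_channels]
y2 = tf.squeeze(tf.nn.conv2d(x2, f2, strides=1, padding="SAME"), axis=1)

print(float(tf.reduce_max(tf.abs(y1 - y2))))   # ~0.0: the two paths agree
```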

Tensor conv1d(IGraphNodeBase value, IGraphNodeBase filters, int stride, string padding, Nullable<bool> use_cudnn_on_gpu, string data_format, string name, IGraphNodeBase input, object dilations)

Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NCHW')`. They will be removed in a future version. Instructions for updating: `NCHW` for data_format is deprecated, use `NCW` instead

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NHWC')`. They will be removed in a future version. Instructions for updating: `NHWC` for data_format is deprecated, use `NWC` instead

Given an input tensor of shape [batch, in_width, in_channels] if data_format is "NWC", or [batch, in_channels, in_width] if data_format is "NCW", and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if `data_format` does not start with "NC", a tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding, as in conv2d) and returned to the caller.
Parameters
IGraphNodeBase value
A 3D `Tensor`. Must be of type `float16`, `float32`, or `float64`.
IGraphNodeBase filters
A 3D `Tensor`. Must have the same type as `value`.
int stride
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
'SAME' or 'VALID'
Nullable<bool> use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, where data is stored in the order `[batch, in_width, in_channels]`. The `"NCW"` format stores data as `[batch, in_channels, in_width]`.
string name
A name for the operation (optional).
IGraphNodeBase input
Alias for value.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor`. Has the same type as input.

Tensor conv1d(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filters, int stride, string padding, Nullable<bool> use_cudnn_on_gpu, string data_format, string name, IGraphNodeBase input, object dilations)

Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NCHW')`. They will be removed in a future version. Instructions for updating: `NCHW` for data_format is deprecated, use `NCW` instead

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NHWC')`. They will be removed in a future version. Instructions for updating: `NHWC` for data_format is deprecated, use `NWC` instead

Given an input tensor of shape [batch, in_width, in_channels] if data_format is "NWC", or [batch, in_channels, in_width] if data_format is "NCW", and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if `data_format` does not start with "NC", a tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding, as in conv2d) and returned to the caller.
Parameters
IEnumerable<IGraphNodeBase> value
A 3D `Tensor`. Must be of type `float16`, `float32`, or `float64`.
IGraphNodeBase filters
A 3D `Tensor`. Must have the same type as `value`.
int stride
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
'SAME' or 'VALID'
Nullable<bool> use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, where data is stored in the order `[batch, in_width, in_channels]`. The `"NCW"` format stores data as `[batch, in_channels, in_width]`.
string name
A name for the operation (optional).
IGraphNodeBase input
Alias for value.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor`. Has the same type as input.

object conv1d_dyn(object value, object filters, object stride, object padding, object use_cudnn_on_gpu, object data_format, object name, object input, object dilations)

Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NCHW')`. They will be removed in a future version. Instructions for updating: `NCHW` for data_format is deprecated, use `NCW` instead

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NHWC')`. They will be removed in a future version. Instructions for updating: `NHWC` for data_format is deprecated, use `NWC` instead

Given an input tensor of shape [batch, in_width, in_channels] if data_format is "NWC", or [batch, in_channels, in_width] if data_format is "NCW", and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if `data_format` does not start with "NC", a tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding, as in conv2d) and returned to the caller.
Parameters
object value
A 3D `Tensor`. Must be of type `float16`, `float32`, or `float64`.
object filters
A 3D `Tensor`. Must have the same type as `value`.
object stride
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
object padding
'SAME' or 'VALID'
object use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
object data_format
An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, where data is stored in the order `[batch, in_width, in_channels]`. The `"NCW"` format stores data as `[batch, in_channels, in_width]`.
object name
A name for the operation (optional).
object input
Alias for value.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
Returns
object
A `Tensor`. Has the same type as input.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, string strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.
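
A shape sketch (Python API, illustrative). Note the `[filter_width, output_channels, in_channels]` layout: the same `[5, 3, 8]` filter that a forward `conv1d` would use to map 3 channels to 8 is here read as mapping 8 channels back to 3, which is exactly the transpose (gradient) relationship described above.

```python
import tensorflow as tf

f = tf.ones([5, 3, 8])    # [filter_width, output_channels=3, in_channels=8]
x = tf.ones([4, 16, 8])   # NWC: batch=4, in_width=16, 8 channels
y = tf.nn.conv1d_transpose(x, f,
                           output_shape=[4, 32, 3],
                           strides=2,
                           padding="SAME")
print(y.shape)            # (4, 32, 3): width doubled, channels 8 -> 3
```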

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, string strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, int strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.
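
For `'VALID'` padding, the requested `output_shape` has to invert the forward `conv1d` arithmetic; with dilation 1, the standard choice is `out_width = (in_width - 1) * strides + filter_width`. A quick worked check (Python API, illustrative):

```python
import tensorflow as tf

f = tf.ones([5, 3, 8])             # filter_width = 5
x = tf.ones([4, 16, 8])            # in_width = 16
out_width = (16 - 1) * 2 + 5       # (in_width - 1) * strides + filter_width = 35
y = tf.nn.conv1d_transpose(x, f, output_shape=[4, out_width, 3],
                           strides=2, padding="VALID")
print(y.shape)                      # (4, 35, 3)
```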

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, string strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, string strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, int strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, int strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, int strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, int strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, string strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, string strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IEnumerable<int> output_shape, int strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, int strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, IGraphNodeBase filters, IGraphNodeBase output_shape, string strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
IGraphNodeBase filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, int strides, string padding, string data_format, object dilations, string name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
int strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3`, defaulting to 1. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
string name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IEnumerable<int> output_shape, IEnumerable<int> strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IEnumerable<int> output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3`, defaulting to 1. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

Tensor conv1d_transpose(IGraphNodeBase input, int filters, IGraphNodeBase output_shape, string strides, string padding, string data_format, object dilations, PythonFunctionContainer name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
int filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
IGraphNodeBase output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
string strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3`, defaulting to 1. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
PythonFunctionContainer name
Optional name for the returned tensor.
Returns
Tensor
A `Tensor` with the same type as `input`.

object conv1d_transpose_dyn(object input, object filters, object output_shape, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object dilations, object name)

The transpose of `conv1d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv1d` rather than an actual deconvolution.
Parameters
object input
A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format.
object filters
A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. The `in_channels` dimension of `filters` must match that of `input`.
object output_shape
A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step.
ImplicitContainer<T> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
A string. `'NWC'` and `'NCW'` are supported.
object dilations
An int or list of `ints` that has length `1` or `3`, defaulting to 1. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
object name
Optional name for the returned tensor.
Returns
object
A `Tensor` with the same type as `input`.
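
The relationship between `strides`, `padding`, and `output_shape` mirrors the forward `conv1d`: with `'SAME'` padding, `out_width = in_width * stride`. Below is a minimal sketch using the `_dyn` overload above, whose loosely typed parameters accept plain arrays; the tensors `x` and `w`, and the assumption that the trailing `ImplicitContainer` parameters default to `'SAME'` padding and `'NWC'` layout, are illustrative and not part of the documented API.

```
// Hypothetical inputs (construction not shown): x is a float tensor of
// shape [4, 16, 8] in NWC layout; w has shape [3, 32, 8], i.e.
// [filter_width, output_channels, in_channels].
// With stride 2 and 'SAME' padding, out_width = in_width * stride = 32,
// so the requested output shape is [batch, out_width, output_channels].
object y = tf.nn.conv1d_transpose_dyn(
    input: x,
    filters: w,
    output_shape: new[] { 4, 32, 32 },
    strides: 2);
```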

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.
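
As a concrete instance of the shape arithmetic above: an NHWC input of shape `[1, 5, 5, 3]` convolved with a `[3, 3, 3, 8]` filter at stride 1 yields `[1, 5, 5, 8]` under `"SAME"` padding and `[1, 3, 3, 8]` under `"VALID"` padding (out = in - filter + 1 per spatial dimension). A minimal sketch against the string-padding overload documented further below; the tensors `x` and `w` and defaults for the trailing optional parameters are assumptions, not documented behavior.

```
// Assumed inputs: x has shape [1, 5, 5, 3] (NHWC); w has shape
// [3, 3, 3, 8], i.e. [filter_height, filter_width, in_channels, out_channels].
Tensor same = tf.nn.conv2d(input: x, filter: w, strides: 1, padding: "SAME");
// same: [1, 5, 5, 8] -- spatial size preserved
Tensor valid = tf.nn.conv2d(input: x, filter: w, strides: 1, padding: "VALID");
// valid: [1, 3, 3, 8] -- (5 - 3) + 1 = 3 in each spatial dimension
```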

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.
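
The list form of `padding` accepted by this overload allows asymmetric, per-dimension padding. A hedged sketch, assuming the `IEnumerable<object>` parameter accepts nested integer arrays in the `[[before, after], ...]` layout described above for `"NHWC"`:

```
// Pad height by 1 on each side and width by 2 on the left only;
// batch and channel dimensions must stay unpadded ([0, 0]).
var explicitPadding = new object[] {
    new[] { 0, 0 },  // batch
    new[] { 1, 1 },  // height: pad_top, pad_bottom
    new[] { 2, 0 },  // width:  pad_left, pad_right
    new[] { 0, 0 },  // channels
};
Tensor y = tf.nn.conv2d(input: x, filter: w, strides: 1, padding: explicitPadding);
```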

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, string padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, int padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
int padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.
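
Setting `dilations` enlarges the effective receptive field without adding parameters: with k-1 skipped cells, a filter of spatial extent f covers f + (f - 1)(k - 1) input cells. A sketch assuming a length-4 dilation list is accepted by the `ImplicitContainer<T>` parameter, reusing the same hypothetical `x` and `w` as in the earlier examples:

```
// dilations = [1, 2, 2, 1]: N and C stay 1, H and W are dilated by 2,
// so a 3x3 filter samples a 5x5 neighbourhood (3 + 2 * 1 = 5).
Tensor dilated = tf.nn.conv2d(
    input: x,
    filter: w,
    strides: 1,
    padding: "SAME",
    use_cudnn_on_gpu: true,
    data_format: "NHWC",
    dilations: new[] { 1, 2, 2, 1 });
```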

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, double padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
double padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, string padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, string padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, double padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
double padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, double padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
double padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, int padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
int padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, string padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.
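
For illustration, a minimal sketch of calling the overload above. The `tf.zeros` and `TensorShape` helpers are assumptions borrowed from TensorFlow's own surface and may differ in this binding; only the `conv2d` call itself is documented here.

```
// Hedged sketch: tf.zeros and TensorShape are assumed to mirror TensorFlow's API.
var input  = tf.zeros(new TensorShape(1, 28, 28, 3));  // NHWC: one 28x28 RGB image
var kernel = tf.zeros(new TensorShape(3, 3, 3, 16));   // 3x3 filters, 3 -> 16 channels
Tensor output = tf.nn.conv2d(
    input, kernel,
    strides: 1,        // a single value is replicated to the H and W dimensions
    padding: "SAME");  // preserves the 28x28 spatial size
// output shape: [1, 28, 28, 16]
```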

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object, object, object> strides, double padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
ValueTuple<int, object, object, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
double padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, int padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
int padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, int padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
int padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, int strides, int padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
int strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
int padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, double padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
double padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, string padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.
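
To make the dilation semantics concrete, a hedged sketch reusing the `input` and `kernel` tensors from the earlier sketch: a dilation factor of 2 in `H` and `W` skips one cell between filter taps, so a 3x3 kernel covers a 5x5 window without adding parameters.

```
// Hedged sketch: with dilations = [1, 2, 2, 1], one input cell is skipped
// between consecutive filter taps in the H and W dimensions.
Tensor dilated = tf.nn.conv2d(
    input, kernel,
    strides: new[] { 1, 1, 1, 1 },
    padding: "SAME",
    dilations: new[] { 1, 2, 2, 1 });  // N and C dilation factors must be 1
```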

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, double padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
double padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.
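
The explicit-padding form can be illustrated with a hedged sketch; whether the binding expects the inner pairs as `int[]` elements of an `object[]` is an assumption here. For NHWC, this pads one row top and bottom and two columns left and right, leaving batch and channels unpadded.

```
// Hedged sketch of explicit NHWC padding: [[0,0], [1,1], [2,2], [0,0]].
var explicitPadding = new object[] {
    new[] { 0, 0 },  // batch
    new[] { 1, 1 },  // height: [pad_top, pad_bottom]
    new[] { 2, 2 },  // width:  [pad_left, pad_right]
    new[] { 0, 0 },  // channels
};
Tensor padded = tf.nn.conv2d(
    input, kernel,
    strides: new[] { 1, 1, 1, 1 },
    padding: explicitPadding);
```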

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, string padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If given with length 4, dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, int padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`. 2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`. 3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertices strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
int padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ValueTuple<int, object> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of`input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions if a 4-d tensor must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d_backprop_filter(IGraphNodeBase input, IGraphNodeBase filter_sizes, IGraphNodeBase out_backprop, IEnumerable<int> strides, string padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name)

Computes the gradients of convolution with respect to the filter.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `input`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format.
string padding
Either the `string `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
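A minimal C# sketch of calling this overload; the tensor-construction helpers (`tf.ones`, `tf.constant`, `TensorShape`) and named-argument style are assumed conveniences of the binding, so treat the exact forms as hypothetical.

```
using tensorflow;

// Forward pass shapes: input [1, 8, 8, 3] convolved with a [3, 3, 3, 16] filter,
// "SAME" padding, stride 1  =>  output [1, 8, 8, 16].
var input = tf.ones(new TensorShape(1, 8, 8, 3));
var filterSizes = tf.constant(new[] { 3, 3, 3, 16 });     // shape of the filter being differentiated
var outBackprop = tf.ones(new TensorShape(1, 8, 8, 16));  // upstream gradients w.r.t. the output

// Gradient of the convolution w.r.t. the filter; the result has shape [3, 3, 3, 16].
var filterGrad = tf.nn.conv2d_backprop_filter(
    input, filterSizes, outBackprop,
    strides: new[] { 1, 1, 1, 1 },
    padding: "SAME");
```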

Tensor conv2d_backprop_filter(IGraphNodeBase input, IGraphNodeBase filter_sizes, IGraphNodeBase out_backprop, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name)

Computes the gradients of convolution with respect to the filter.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `input`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d_backprop_filter(IGraphNodeBase input, IGraphNodeBase filter_sizes, IGraphNodeBase out_backprop, IEnumerable<int> strides, object padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name)

Computes the gradients of convolution with respect to the filter.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `input`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
object padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor conv2d_backprop_filter(IGraphNodeBase input, IGraphNodeBase filter_sizes, IGraphNodeBase out_backprop, IEnumerable<int> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name)

Computes the gradients of convolution with respect to the filter.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[batch, in_height, in_width, in_channels]`.
IGraphNodeBase filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `input`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

object conv2d_backprop_filter_dyn(object input, object filter_sizes, object out_backprop, object strides, object padding, ImplicitContainer<T> use_cudnn_on_gpu, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name)

Computes the gradients of convolution with respect to the filter.
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[batch, in_height, in_width, in_channels]`.
object filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.
object out_backprop
A `Tensor`. Must have the same type as `input`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
object strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
object padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
ImplicitContainer<T> use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
ImplicitContainer<T> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ValueTuple<int, object> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, string padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ValueTuple<int, object> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.
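A minimal C# sketch of the string-padding overload above, with tensor-construction helpers assumed as in the earlier sketches. This gradient op is also the building block of a manually implemented transposed convolution.

```
using tensorflow;

// Recover dL/d(input) for a forward convolution that mapped input [1, 8, 8, 3]
// through a [3, 3, 3, 16] filter ("SAME" padding, stride 1) to output [1, 8, 8, 16].
var inputSizes = tf.constant(new[] { 1, 8, 8, 3 });       // shape of the input being differentiated
var filter = tf.ones(new TensorShape(3, 3, 3, 16));
var outBackprop = tf.ones(new TensorShape(1, 8, 8, 16));  // upstream gradients w.r.t. the output

// Gradient of the convolution w.r.t. the input; the result has shape [1, 8, 8, 3].
var inputGrad = tf.nn.conv2d_backprop_input(
    inputSizes, filter, outBackprop,
    strides: new[] { 1, 1, 1, 1 },
    padding: "SAME");
```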

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, string padding, bool use_cudnn_on_gpu, string data_format, IEnumerable<object> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
IEnumerable<object> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, IEnumerable<object> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
IEnumerable<object> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ValueTuple<int, object> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ValueTuple<int, object> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, IEnumerable<object> padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
IEnumerable<object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, bool use_cudnn_on_gpu, string data_format, IEnumerable<object> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
ValueTuple<IEnumerable<object>, object> padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
IEnumerable<object> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

Tensor conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, string padding, bool use_cudnn_on_gpu, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
string padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
bool use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `filter`.

object conv2d_backprop_input_dyn(object input_sizes, object filter, object out_backprop, object strides, object padding, ImplicitContainer<T> use_cudnn_on_gpu, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name, object filters)

Computes the gradients of convolution with respect to the input.
Parameters
object input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
object filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
object out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
object strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimensions specified by `data_format`.
object padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
ImplicitContainer<T> use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
ImplicitContainer<T> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
object filters
Alias for filter.
Returns
object
A `Tensor`. Has the same type as `filter`.

object conv2d_dyn(object input, object filter, object strides, object padding, ImplicitContainer<T> use_cudnn_on_gpu, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name, object filters)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.
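For example, with an NHWC input of shape `[1, 5, 5, 3]`, a `[3, 3, 3, 8]` filter, stride 1 and `"SAME"` padding: step 1 reshapes the filter to a `[27, 8]` matrix (3 * 3 * 3 = 27), step 2 extracts one 27-element patch per output position into a virtual `[1, 5, 5, 27]` tensor, and step 3 multiplies each patch by the `[27, 8]` matrix, yielding an output of shape `[1, 5, 5, 8]`.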

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A 4-D tensor. The dimension order is interpreted according to the value of `data_format`, see below for details.
object filter
A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]`
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
object padding
Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.
ImplicitContainer<T> use_cudnn_on_gpu
An optional `bool`. Defaults to `True`.
ImplicitContainer<T> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
object filters
Alias for filter.
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, object output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.
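A minimal C# sketch of a stride-2 upsampling call, with tensor-construction helpers assumed as in the sketches above; note the `[height, width, output_channels, in_channels]` filter layout.

```
using tensorflow;

// Upsample a [1, 4, 4, 16] feature map to [1, 8, 8, 3] with a stride-2
// transposed convolution. The filter layout is
// [height, width, output_channels, in_channels] = [3, 3, 3, 16].
var value = tf.ones(new TensorShape(1, 4, 4, 16));
var filter = tf.ones(new TensorShape(3, 3, 3, 16));
var outputShape = tf.constant(new[] { 1, 8, 8, 3 });  // desired output shape

// With "SAME" padding and stride 2, each spatial dimension doubles: 4 -> 8.
var upsampled = tf.nn.conv2d_transpose(
    value, filter, outputShape,
    strides: new[] { 1, 2, 2, 1 },
    padding: "SAME");
```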

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, ValueTuple<int, object> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, ValueTuple<int, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, object output_shape, ValueTuple<int, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, object output_shape, object strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, object output_shape, object strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, object strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, object strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-D tensor is given, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.
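
For the `'SAME'` padding algorithm, each spatial output dimension of the transpose is the corresponding input dimension multiplied by its stride: an 8x8 input with stride 2 deconvolves to 16x16. The following is a minimal C# sketch of the `IEnumerable<int>` overload documented below; `tf.zeros` and `TensorShape` are assumed to behave as in the Python API, and real code would use trained weights rather than zero tensors.

```
// Hypothetical usage sketch (helper constructors assumed, not part of this page).
var value  = tf.zeros(new TensorShape(1, 8, 8, 16));   // NHWC: [batch, h, w, in_ch]
var filter = tf.zeros(new TensorShape(3, 3, 32, 16));  // [h, w, out_ch, in_ch]
// 'SAME' padding: out = in * stride, so 8 * 2 = 16 per spatial dimension.
Tensor upsampled = tf.nn.conv2d_transpose(
    value, filter,
    output_shape: new [] { 1, 16, 16, 32 },
    strides:      new [] { 1, 2, 2, 1 },
    padding:      "SAME",
    data_format:  "NHWC");
```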

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, object output_shape, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, object output_shape, ValueTuple<int, object> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, ValueTuple<int, object> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, ValueTuple<int, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, object strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, object strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, ValueTuple<int, object> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, object strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, ValueTuple<int, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv2d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape, object strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
IGraphNodeBase filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<IEnumerable<object>, PythonClassContainer> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ValueTuple<IEnumerable<object>, object> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC' and 'NCHW' are supported.
string name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

object conv2d_transpose_dyn(object value, object filter, object output_shape, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name, object input, object filters, object dilations)

The transpose of `conv2d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Parameters
object value
A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format.
object filter
A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`; see below for details.
ImplicitContainer<T> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
A string. 'NHWC' and 'NCHW' are supported.
object name
Optional name for the returned tensor.
object input
Alias for value.
object filters
Alias for filter.
object dilations
An int or list of `ints` that has length `1`, `2` or `4`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 4-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
object
A `Tensor` with the same type as `value`.

Tensor conv3d(object input, object filter, object strides, object padding, string data_format, ImplicitContainer<T> dilations, string name, object filters)

Computes a 3-D convolution given 5-D `input` and `filter` tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Shape `[batch, in_depth, in_height, in_width, in_channels]`.
object filter
A `Tensor`. Must have the same type as `input`. Shape `[filter_depth, filter_height, filter_width, in_channels, out_channels]`. `in_channels` must match between `input` and `filter`.
object strides
A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. 1-D tensor of length 5. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
object filters
Alias for filter.
Returns
Tensor
A `Tensor`. Has the same type as `input`.
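
A minimal sketch of a `conv3d` call follows, under the same assumptions about tensor constructors as the earlier sketch; note that `strides` must be a length-5 vector with `strides[0] = strides[4] = 1`.

```
// Hypothetical sketch: SAME-padded 3-D convolution over an NDHWC volume.
var input  = tf.zeros(new TensorShape(1, 16, 64, 64, 4)); // [batch, d, h, w, in_ch]
var filter = tf.zeros(new TensorShape(3, 3, 3, 4, 8));    // [fd, fh, fw, in_ch, out_ch]
Tensor features = tf.nn.conv3d(
    input, filter,
    strides: new [] { 1, 1, 1, 1, 1 },  // batch and channel strides must be 1
    padding: "SAME");
```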

Tensor conv3d_backprop_filter(IGraphNodeBase input, IGraphNodeBase filter_sizes, IGraphNodeBase out_backprop, IEnumerable<int> strides, string padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes the gradients of 3-D convolution with respect to the filter.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Shape `[batch, depth, rows, cols, in_channels]`.
IGraphNodeBase filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 5-D `[filter_depth, filter_height, filter_width, in_channels, out_channels]` tensor.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `input`. Backprop signal of shape `[batch, out_depth, out_rows, out_cols, out_channels]`.
IEnumerable<int> strides
A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. 1-D tensor of length 5. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
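
This op is the filter half of `conv3d`'s gradient pair: given the forward input and the gradient arriving at the forward output (`out_backprop`), it returns the gradient of the loss with respect to the filter, accumulated over the batch. A sketch under the same constructor assumptions as above; `tf.constant` is assumed to produce the required `int32` shape vector.

```
// Hypothetical sketch: filter gradient of a stride-1, SAME-padded conv3d.
var input       = tf.zeros(new TensorShape(1, 16, 64, 64, 4));
var filterSizes = tf.constant(new [] { 3, 3, 3, 4, 8 });       // shape of `filter`
var outBackprop = tf.zeros(new TensorShape(1, 16, 64, 64, 8)); // same type as input
Tensor filterGrad = tf.nn.conv3d_backprop_filter(
    input, filterSizes, outBackprop,
    strides: new [] { 1, 1, 1, 1, 1 },
    padding: "SAME");
```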

object conv3d_backprop_filter_dyn(object input, object filter_sizes, object out_backprop, object strides, object padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name)

Computes the gradients of 3-D convolution with respect to the filter.
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Shape `[batch, depth, rows, cols, in_channels]`.
object filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 5-D `[filter_depth, filter_height, filter_width, in_channels, out_channels]` tensor.
object out_backprop
A `Tensor`. Must have the same type as `input`. Backprop signal of shape `[batch, out_depth, out_rows, out_cols, out_channels]`.
object strides
A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
ImplicitContainer<T> data_format
An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. 1-D tensor of length 5. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `input`.

object conv3d_dyn(object input, object filter, object strides, object padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name, object filters)

Computes a 3-D convolution given 5-D `input` and `filter` tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Shape `[batch, in_depth, in_height, in_width, in_channels]`.
object filter
A `Tensor`. Must have the same type as `input`. Shape `[filter_depth, filter_height, filter_width, in_channels, out_channels]`. `in_channels` must match between `input` and `filter`.
object strides
A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
ImplicitContainer<T> data_format
An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. 1-D tensor of length 5. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
object filters
Alias for filter.
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, IEnumerable<int> output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` that has length `1`, `3` or `5`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 5-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.
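
As in the 2-D case, `'SAME'` padding with stride s multiplies each spatial dimension by s, so an `[1, 8, 16, 16, 4]` input with stride 2 in every spatial dimension produces `[1, 16, 32, 32, out_channels]`. A sketch against the `IGraphNodeBase value, IEnumerable<int> strides` overload documented further below, under the same constructor assumptions as the earlier sketches:

```
// Hypothetical sketch: 3-D transpose convolution upsampling D, H, W by 2.
var value  = tf.zeros(new TensorShape(1, 8, 16, 16, 4)); // NDHWC input
var filter = tf.zeros(new TensorShape(3, 3, 3, 8, 4));   // [d, h, w, out_ch, in_ch]
var outputShape = tf.constant(new [] { 1, 16, 32, 32, 8 });
Tensor upsampled = tf.nn.conv3d_transpose(
    value, filter, outputShape,
    strides: new [] { 1, 2, 2, 2, 1 },
    padding: "SAME");
```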

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` that has length `1`, `3` or `5`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 5-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, IEnumerable<int> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` that has length `1`, `3` or `5`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 5-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, ValueTuple<object, IEnumerable<object>> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object, IEnumerable<object>> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` that has length `1`, `3` or `5`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 5-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, ValueTuple<object, IEnumerable<object>> output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object, IEnumerable<object>> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` that has length `1`, `3` or `5`, defaulting to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H` and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`; see above for details. If a 5-D tensor is given, the dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.
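
For illustration, here is a minimal, hypothetical usage sketch of this operation: it doubles each spatial dimension of a 5-D volume. The shapes are invented, and `tf.zeros`, `tf.constant`, and the optional-parameter defaults are assumed to mirror the Python API in this binding.

```
// Hypothetical sketch: 2x spatial upsampling of a [batch, depth, height, width, channels] volume.
// Assumes tf.zeros / tf.constant mirror their Python counterparts in this binding.
var value = tf.zeros(new TensorShape(4, 8, 8, 8, 16));       // [batch, depth, height, width, in_channels]
var filter = tf.zeros(new TensorShape(3, 3, 3, 32, 16));     // [depth, height, width, output_channels, in_channels]
var outputShape = tf.constant(new[] { 4, 16, 16, 16, 32 });  // 1-D tensor: the desired output shape
Tensor upsampled = tf.nn.conv3d_transpose(
    value, filter, outputShape,
    strides: new[] { 1, 2, 2, 2, 1 },                        // stride 2 along depth, height, and width
    padding: "SAME");                                        // "SAME": output spatial dims = input dims * stride
```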

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, ValueTuple<object> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IGraphNodeBase output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<object> output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<object> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<object, IEnumerable<object>> output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object, IEnumerable<object>> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, ValueTuple<object, IEnumerable<object>> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object, IEnumerable<object>> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, IGraphNodeBase output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, IGraphNodeBase output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IGraphNodeBase output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IEnumerable<IGraphNodeBase> value, IGraphNodeBase filter, ValueTuple<object> output_shape, ValueTuple<int, object, object> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IEnumerable<IGraphNodeBase> value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
ValueTuple<object> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
ValueTuple<int, object, object> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

Tensor conv3d_transpose(IGraphNodeBase value, IGraphNodeBase filter, IEnumerable<int> output_shape, IEnumerable<int> strides, string padding, string data_format, string name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
IGraphNodeBase value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
IGraphNodeBase filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
IEnumerable<int> output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
IEnumerable<int> strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
string name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
Tensor
A `Tensor` with the same type as `value`.

object conv3d_transpose_dyn(object value, object filter, object output_shape, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name, object input, object filters, object dilations)

The transpose of `conv3d`.

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
Parameters
object value
A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]`.
object filter
A 5-D `Tensor` with the same type as `value` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
object output_shape
A 1-D `Tensor` representing the output shape of the deconvolution op.
object strides
A list of ints. The stride of the sliding window for each dimension of the input tensor.
ImplicitContainer<T> padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
A string, either `'NDHWC'` or `'NCDHW'`, specifying the layout of the input and output tensors. Defaults to `'NDHWC'`.
object name
Optional name for the returned tensor.
object input
Alias of value.
object filters
Alias of filter.
object dilations
An int or list of `ints` of length `1`, `3`, or `5`; defaults to 1. The dilation factor for each dimension of `input`. If a single value is given, it is replicated in the `D`, `H`, and `W` dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element in that dimension. The dimension order is determined by the value of `data_format`; see above for details. For a 5-D tensor, dilations in the batch and depth dimensions must be 1.
Returns
object
A `Tensor` with the same type as `value`.

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, int strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, int strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, int strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, int strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, int strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, int strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, int strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, IGraphNodeBase filter, string padding, IEnumerable<int> strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, int strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, int strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, int strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, IEnumerable<int> strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, IEnumerable<int> strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + list(range(2, N+2)) + [1]), **kwargs), [0, N+1] + list(range(1, N+1))) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (dilation_rate - 1)`, obtained by inserting `dilation_rate[i] - 1` zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, IEnumerable<int> strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, IEnumerable<int> strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, int strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, ndarray filter, string padding, int strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, int strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, IEnumerable<int> strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, int strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, IEnumerable<int> strides, ValueTuple<int, object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<int, object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, int strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IEnumerable<IGraphNodeBase> input, IGraphNodeBase filter, string padding, int strides, int dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IEnumerable<IGraphNodeBase> input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
IGraphNodeBase filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
int strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
int dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, IEnumerable<int> strides, IEnumerable<int> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
IEnumerable<int> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution(IGraphNodeBase input, ndarray filter, string padding, IEnumerable<int> strides, ValueTuple<object> dilation_rate, string name, string data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
IGraphNodeBase input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
ndarray filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
string padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
ValueTuple<object> dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
string name
Optional name for the returned tensor.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

object convolution_dyn(object input, object filter, object padding, object strides, object dilation_rate, object name, object data_format, object filters, object dilations)

Computes sums of N-D convolutions (actually cross-correlation).

This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilation_rate` parameter. Currently, however, output striding is not supported for atrous convolutions.

Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) `filter` Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output position (x[0],..., x[N-1]):

``` output[b, x[0],..., x[N-1], k] = sum_{z[0],..., z[N-1], q} filter[z[0],..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] ``` where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides` as described in the [comment here](https://tensorflow.org/api_guides/python/nn#Convolution).

In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filter`) are simply transposed as follows:

``` convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```

It is required that 1 <= N <= 3.
Parameters
object input
An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data_format starts with "NC".
object filter
An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`.
object padding
A string, either `"VALID"` or `"SAME"`. The padding algorithm.
object strides
Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
object dilation_rate
Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
object name
Optional name for the returned tensor.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object filters
Alias of filter.
object dilations
Alias of dilation_rate.
Returns
object
A `Tensor` with the same type as `input` of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

Tensor crelu(ndarray features, string name, int axis)

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the *negative* part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: [Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al.](https://arxiv.org/abs/1603.05201)
Parameters
ndarray features
A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, `int16`, or `int8`.
string name
A name for the operation (optional).
int axis
The axis that the output values are concatenated along. Default is -1.
Returns
Tensor
A `Tensor` with the same type as `features`.
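For intuition: `crelu` concatenates `relu(features)` with `relu(-features)` along `axis`, so a 2-element input yields a 4-element output. A minimal hypothetical sketch, assuming `tf.constant` is exposed with an array overload:

```
using tensorflow;

var features = tf.constant(new float[] { -1f, 2f });
// relu(x)  = [0, 2]
// relu(-x) = [1, 0]
// crelu(x) = [0, 2, 1, 0]  (depth doubled from 2 to 4)
var activated = tf.nn.crelu(features, name: "crelu_example", axis: -1);
```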

Tensor crelu(IGraphNodeBase features, string name, int axis)

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the *negative* part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: [Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al.](https://arxiv.org/abs/1603.05201)
Parameters
IGraphNodeBase features
A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, `int16`, or `int8`.
string name
A name for the operation (optional).
int axis
The axis that the output values are concatenated along. Default is -1.
Returns
Tensor
A `Tensor` with the same type as `features`.

object crelu_dyn(object features, object name, ImplicitContainer<T> axis)

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the *negative* part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: [Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al.](https://arxiv.org/abs/1603.05201)
Parameters
object features
A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, `int16`, or `int8`.
object name
A name for the operation (optional).
ImplicitContainer<T> axis
The axis that the output values are concatenated along. Default is -1.
Returns
object
A `Tensor` with the same type as `features`.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IGraphNodeBase inputs, IGraphNodeBase sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.
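A minimal usage sketch (hypothetical; it assumes `tf.placeholder`, `tf.float32`, `tf.int32`, and `TensorShape` mirror the Python API, and that the returned `ValueTuple` can be deconstructed):

```
using tensorflow;

// Time-major logits: [max_time, batch_size, num_classes].
var logits = tf.placeholder(tf.float32, new TensorShape(50, 8, 28));
var seqLen = tf.placeholder(tf.int32, new TensorShape(8));

// Keep the 3 best paths out of a beam of width 100; repeated classes in the
// output beams are merged because merge_repeated is true.
var (decoded, logProbs) = tf.nn.ctc_beam_search_decoder(
    logits, seqLen, beam_width: 100, top_paths: 3, merge_repeated: true);
```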

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IGraphNodeBase inputs, IndexedSlices sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IGraphNodeBase inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IGraphNodeBase inputs, ndarray sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IEnumerable<ndarray> inputs, IGraphNodeBase sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IEnumerable<ndarray> inputs, IndexedSlices sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IEnumerable<ndarray> inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder(IEnumerable<ndarray> inputs, ndarray sequence_length, int beam_width, int top_paths, bool merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

object ctc_beam_search_decoder_dyn(object inputs, object sequence_length, ImplicitContainer<T> beam_width, ImplicitContainer<T> top_paths, ImplicitContainer<T> merge_repeated)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).

If `merge_repeated` is `True`, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is `A B B * B * B` (where '*' is the blank label), the return value is:

* `A B` if `merge_repeated = True`.
* `A B B B` if `merge_repeated = False`.
Parameters
object inputs
3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. The logits.
object sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
ImplicitContainer<T> beam_width
An int scalar >= 0 (beam search beam width).
ImplicitContainer<T> top_paths
An int scalar >= 0, <= beam_width (controls output size).
ImplicitContainer<T> merge_repeated
Boolean. Default: True.
Returns
object
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_beam_search_decoder_v2(object inputs, object sequence_length, int beam_width, int top_paths)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).
Parameters
object inputs
3-D `float` `Tensor`, size `[max_time, batch_size, num_classes]`. The logits.
object sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
int beam_width
An int scalar >= 0 (beam search beam width).
int top_paths
An int scalar >= 0, <= beam_width (controls output size).
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.
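Usage matches `ctc_beam_search_decoder` minus the `merge_repeated` flag; a hypothetical sketch, reusing the `logits` and `seqLen` placeholders from the `ctc_beam_search_decoder` sketch above:

```
var (decodedV2, logProbsV2) = tf.nn.ctc_beam_search_decoder_v2(
    logits, seqLen, beam_width: 100, top_paths: 3);
```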

object ctc_beam_search_decoder_v2_dyn(object inputs, object sequence_length, ImplicitContainer<T> beam_width, ImplicitContainer<T> top_paths)

Performs beam search decoding on the logits given in input.

**Note** The `ctc_greedy_decoder` is a special case of the `ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but that decoder is faster for this special case).
Parameters
object inputs
3-D `float` `Tensor`, size `[max_time, batch_size, num_classes]`. The logits.
object sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
ImplicitContainer<T> beam_width
An int scalar >= 0 (beam search beam width).
ImplicitContainer<T> top_paths
An int scalar >= 0, <= beam_width (controls output size).
Returns
object
A tuple `(decoded, log_probabilities)` where `decoded` is a list of `top_paths` `SparseTensor`s containing the decoded output sequences, and `log_probabilities` is a `float` matrix of shape `[batch_size, top_paths]` containing sequence log-probabilities.

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IGraphNodeBase inputs, IGraphNodeBase sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.
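A minimal hypothetical sketch (placeholder creation is assumed to mirror the Python API):

```
using tensorflow;

// Time-major logits: [max_time, batch_size, num_classes].
var logits = tf.placeholder(tf.float32, new TensorShape(50, 8, 28));
var seqLen = tf.placeholder(tf.int32, new TensorShape(8));

// Greedy (best-path) decoding: equivalent to beam search with
// beam_width = 1 and top_paths = 1, but faster.
var (decoded, negSumLogits) = tf.nn.ctc_greedy_decoder(
    logits, seqLen, merge_repeated: true);
```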

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IGraphNodeBase inputs, IndexedSlices sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IGraphNodeBase inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IGraphNodeBase inputs, ndarray sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IGraphNodeBase inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IEnumerable<ndarray> inputs, IGraphNodeBase sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IEnumerable<ndarray> inputs, IndexedSlices sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IEnumerable<ndarray> inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

ValueTuple<IList<SparseTensor>, object> ctc_greedy_decoder(IEnumerable<ndarray> inputs, ndarray sequence_length, bool merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
IEnumerable<ndarray> inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
bool merge_repeated
Boolean. Default: True.
Returns
ValueTuple<IList<SparseTensor>, object>
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

object ctc_greedy_decoder_dyn(object inputs, object sequence_length, ImplicitContainer<T> merge_repeated)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index `(num_classes - 1)`, no new element is emitted.

If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '*' is the blank label) becomes

* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
Parameters
object inputs
3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits.
object sequence_length
1-D `int32` vector containing sequence lengths, having size `[batch_size]`.
ImplicitContainer<T> merge_repeated
Boolean. Default: True.
Returns
object
A tuple `(decoded, neg_sum_logits)` where `decoded` is a single-element list whose entry is a `SparseTensor` containing the decoded output sequence, and `neg_sum_logits` is a `float` matrix of shape `[batch_size, 1]` holding, for each sequence found, the negative of the sum of the greatest logit at each timeframe.

object ctc_loss(IndexedSlices labels, IGraphNodeBase inputs, IEnumerable<int> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If true, the CTC loss will simply return zero gradient for those items; otherwise an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IEnumerable<int> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
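A minimal hypothetical training sketch; it assumes `tf.placeholder` and `tf.reduce_mean` are exposed as in the Python API, and elides construction of the sparse `labels`, which must come from your input pipeline:

```
using tensorflow;

// Time-major logits: [max_time, batch_size, num_classes]; index
// num_classes - 1 is reserved for the blank label.
var logits = tf.placeholder(tf.float32, new TensorShape(50, 8, 28));

// Placeholder only: supply an int32 SparseTensor-style value with
// entries in [0, num_labels) from your data pipeline.
IndexedSlices labels = null;

var loss = tf.nn.ctc_loss(labels, logits,
    sequence_length: new[] { 50, 50, 50, 50, 50, 50, 50, 50 },
    preprocess_collapse_repeated: false, ctc_merge_repeated: true,
    ignore_longer_outputs_than_inputs: false, time_major: true,
    logits: null);

// One negative log-probability per batch item; average into a scalar loss.
var cost = tf.reduce_mean(loss);
```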

object ctc_loss(IndexedSlices labels, IGraphNodeBase inputs, ndarray sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If true, the CTC loss will simply return zero gradient for those items; otherwise an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IndexedSlices labels, ndarray inputs, IGraphNodeBase sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If true, the CTC loss will simply return zero gradient for those items; otherwise an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IndexedSlices labels, ndarray inputs, IndexedSlices sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If true, the CTC loss will simply return zero gradient for those items; otherwise an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IndexedSlices labels, ndarray inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If true, the CTC loss will simply return zero gradient for those items; otherwise an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IndexedSlices labels, ndarray inputs, IEnumerable<int> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If true, the CTC loss will simply return zero gradient for those items; otherwise an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IEnumerable<int> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IndexedSlices labels, ndarray inputs, ndarray sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If true, the CTC loss will simply return zero gradient for those items; otherwise an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
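
As a usage illustration, here is a minimal end-to-end call sketched against the underlying Python TensorFlow 1.x API that these bindings wrap; the shapes and random logits are illustrative only:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

num_classes, max_time, batch_size = 4, 5, 2

# int32 SparseTensor of labels: "ab" for item 0, "c" for item 1.
labels = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
                         values=[0, 1, 2], dense_shape=[batch_size, 2])

# Time-major, unscaled logits: [max_time, batch_size, num_classes];
# ctc_loss applies the softmax internally.
inputs = tf.constant(np.random.randn(max_time, batch_size, num_classes),
                     dtype=tf.float32)
sequence_length = tf.constant([max_time, max_time], dtype=tf.int32)

loss = tf.nn.ctc_loss(labels, inputs, sequence_length)  # shape [batch_size]
with tf.Session() as sess:
    print(sess.run(loss))  # one negative log probability per batch item
```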

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase inputs, IGraphNodeBase sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
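
To see `ignore_longer_outputs_than_inputs` in action, consider this sketch (again in the underlying Python TensorFlow 1.x API; the two-frame example is contrived) where a label sequence cannot fit into the available time steps:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Item 0 has 3 labels but only 2 time steps: output longer than input.
labels = tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 2]],
                         values=[0, 1, 0], dense_shape=[1, 3])
inputs = tf.constant(np.random.randn(2, 1, 3), dtype=tf.float32)
sequence_length = tf.constant([2], dtype=tf.int32)

# With the flag False (default) this raises InvalidArgumentError at run time;
# with True the offending item contributes zero loss and zero gradient.
loss = tf.nn.ctc_loss(labels, inputs, sequence_length,
                      ignore_longer_outputs_than_inputs=True)
with tf.Session() as sess:
    print(sess.run(loss))
```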

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase inputs, ndarray sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
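
Since most pipelines produce batch-major tensors, the same loss can be computed without a manual transpose by passing `time_major=False`. A short sketch against the underlying Python TensorFlow 1.x API (shapes illustrative):

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.SparseTensor(indices=[[0, 0]], values=[0], dense_shape=[1, 1])

# Batch-major logits: [batch_size, max_time, num_classes].
inputs = tf.constant(np.random.randn(1, 4, 3), dtype=tf.float32)

# ctc_loss transposes internally when time_major=False.
loss = tf.nn.ctc_loss(labels, inputs, tf.constant([4], dtype=tf.int32),
                      time_major=False)
```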

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
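
The effect of `preprocess_collapse_repeated` can be checked directly: with repeated labels such as "aab", enabling the flag collapses them to "ab" before the loss is computed, so the two losses generally differ. A sketch in the underlying Python TensorFlow 1.x API (values illustrative):

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Labels "aab" (values [0, 0, 1]) over 6 time steps.
labels = tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 2]],
                         values=[0, 0, 1], dense_shape=[1, 3])
inputs = tf.constant(np.random.randn(6, 1, 3), dtype=tf.float32)
sequence_length = tf.constant([6], dtype=tf.int32)

raw = tf.nn.ctc_loss(labels, inputs, sequence_length)
collapsed = tf.nn.ctc_loss(labels, inputs, sequence_length,
                           preprocess_collapse_repeated=True)  # trains on "ab"
with tf.Session() as sess:
    print(sess.run([raw, collapsed]))
```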

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase inputs, IEnumerable<int> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IEnumerable<int> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
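
In training, the per-item losses are typically averaged and minimized. A minimal graph-mode sketch (underlying Python TensorFlow 1.x API; the feature width 128 and class count 29 are arbitrary placeholders):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

num_classes = 29  # e.g. 28 true labels + 1 blank (illustrative)
features = tf.placeholder(tf.float32, [None, None, 128])  # [time, batch, feat]
labels = tf.sparse_placeholder(tf.int32)
sequence_length = tf.placeholder(tf.int32, [None])

# Linear projection to unscaled logits; ctc_loss applies the softmax itself.
logits = tf.layers.dense(features, num_classes)

loss = tf.reduce_mean(tf.nn.ctc_loss(labels, logits, sequence_length))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```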

object ctc_loss(IndexedSlices labels, IGraphNodeBase inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
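
After training, the logits are usually turned back into label sequences with a CTC decoder, which merges repeats and strips blanks in line with the classical-CTC convention above. A sketch using the underlying Python TensorFlow 1.x API (random logits for illustration):

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

inputs = tf.constant(np.random.randn(10, 1, 4), dtype=tf.float32)
sequence_length = tf.constant([10], dtype=tf.int32)

# Greedy decoding; tf.nn.ctc_beam_search_decoder is the higher-quality option.
decoded, neg_sum_logits = tf.nn.ctc_greedy_decoder(inputs, sequence_length)
with tf.Session() as sess:
    print(sess.run(tf.sparse.to_dense(decoded[0])))  # decoded label ids
```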

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ndarray inputs, IGraphNodeBase sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ndarray inputs, IndexedSlices sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ndarray inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ndarray inputs, IEnumerable<int> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IEnumerable<int> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ndarray inputs, ndarray sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase inputs, IndexedSlices sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IndexedSlices labels, IGraphNodeBase inputs, IndexedSlices sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option specifies the behavior of the CTC loss when dealing with sequences whose outputs are longer than their inputs. If True, the CTC loss simply returns a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, ndarray inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so `inputs` should be unscaled logits, e.g. linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation repeated non-blank labels are not merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a summary of the (roughly) expected first-order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
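
As a usage illustration, here is a minimal sketch of the equivalent call in TensorFlow's Python API, which these bindings mirror. The shapes (`batch_size = 1`, `max_time = 5`, `num_classes = 4`), label ids, and random logits are illustrative assumptions only, not part of the signature above.

``` import numpy as np
import tensorflow.compat.v1 as tf  # TF 1.x-style API, which these bindings mirror

batch_size, max_time, num_classes = 1, 5, 4  # 3 true labels + 1 blank

# Labels as a SparseTensor: labels.indices[i, :] == [b, t] holds the id for (batch b, time t).
labels = tf.SparseTensor(indices=[[0, 0], [0, 1]],          # label positions in batch 0
                         values=[0, 2],                     # ids in [0, num_labels)
                         dense_shape=[batch_size, 2])

# Time-major logits (the default): [max_time, batch_size, num_classes].
inputs = tf.constant(np.random.randn(max_time, batch_size, num_classes), dtype=tf.float32)
sequence_length = tf.constant([max_time], dtype=tf.int32)   # [batch_size]

loss = tf.nn.ctc_loss(labels, inputs, sequence_length)      # 1-D, size [batch_size] ```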

object ctc_loss(IGraphNodeBase labels, ndarray inputs, ndarray sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, IGraphNodeBase inputs, IGraphNodeBase sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, IGraphNodeBase inputs, IndexedSlices sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, IGraphNodeBase inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IndexedSlices labels, IGraphNodeBase inputs, IGraphNodeBase sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IndexedSlices labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, IGraphNodeBase inputs, IEnumerable<int> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IEnumerable<int> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, ndarray inputs, IGraphNodeBase sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IGraphNodeBase sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, ndarray inputs, IEnumerable<int> sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IEnumerable<int> sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, IGraphNodeBase inputs, ndarray sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
IGraphNodeBase inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
ndarray sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss(IGraphNodeBase labels, ndarray inputs, IndexedSlices sequence_length, bool preprocess_collapse_repeated, bool ctc_merge_repeated, bool ignore_longer_outputs_than_inputs, bool time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
IGraphNodeBase labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
ndarray inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
IndexedSlices sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
bool preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
bool ctc_merge_repeated
Boolean. Default: True.
bool ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
bool time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss_dyn(object labels, object inputs, object sequence_length, ImplicitContainer<T> preprocess_collapse_repeated, ImplicitContainer<T> ctc_merge_repeated, ImplicitContainer<T> ignore_longer_outputs_than_inputs, ImplicitContainer<T> time_major, object logits)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Input requirements:

``` sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. ```

Notes:

This op performs the softmax operation for you, so inputs should be, e.g., linear projections of LSTM outputs.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where num_labels is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.

Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set to False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

Never learns to output repeated classes, as they are collapsed in the input labels before training.

* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

Untested. Very likely will not learn to output repeated classes.

The `ignore_longer_outputs_than_inputs` option allows you to specify the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items; otherwise, an InvalidArgument error is returned, stopping training.
Parameters
object labels
An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
object inputs
3-D `float` `Tensor`. If time_major == False, this will be a `Tensor` shaped: `[batch_size, max_time, num_classes]`. If time_major == True (default), this will be a `Tensor` shaped: `[max_time, batch_size, num_classes]`. The logits.
object sequence_length
1-D `int32` vector, size `[batch_size]`. The sequence lengths.
ImplicitContainer<T> preprocess_collapse_repeated
Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
ImplicitContainer<T> ctc_merge_repeated
Boolean. Default: True.
ImplicitContainer<T> ignore_longer_outputs_than_inputs
Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
ImplicitContainer<T> time_major
The shape format of the `inputs` Tensors. If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
object logits
Alias for inputs.
Returns
object
A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.

object ctc_loss_v2(IGraphNodeBase labels, IGraphNodeBase logits, IGraphNodeBase label_length, IEnumerable<int> logit_length, bool logits_time_major, object unique, Nullable<int> blank_index, string name)

Computes CTC (Connectionist Temporal Classification) loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Notes:

* Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss with the setting preprocess_collapse_repeated=False, ctc_merge_repeated=True.
* Labels may be supplied as either a dense, zero-padded tensor with a vector of label sequence lengths OR as a SparseTensor.
* On TPU and GPU: Only dense padded labels are supported.
* On CPU: Caller may use SparseTensor or dense padded labels, but calling with a SparseTensor will be significantly faster.
* Default blank label is 0 rather than num_classes - 1, unless overridden by blank_index.
Parameters
IGraphNodeBase labels
tensor of shape [batch_size, max_label_seq_length] or SparseTensor
IGraphNodeBase logits
tensor of shape [frames, batch_size, num_labels], if logits_time_major == False, shape is [batch_size, frames, num_labels].
IGraphNodeBase label_length
tensor of shape [batch_size] (None if labels is a SparseTensor). Length of the reference label sequence in labels.
IEnumerable<int> logit_length
tensor of shape [batch_size]. Length of the input sequence in logits.
bool logits_time_major
(optional) If True (default), logits is shaped [time, batch, logits]. If False, shape is [batch, time, logits].
object unique
(optional) Unique label indices as computed by ctc_unique_labels(labels). If supplied, enables a faster, more memory-efficient implementation on TPU.
Nullable<int> blank_index
(optional) Set the class index to use for the blank label. Negative values will start from num_classes, i.e., -1 will reproduce the ctc_loss behavior of using num_classes - 1 for the blank symbol. There is some memory/performance overhead to switching from the default of 0, as an additional shifted copy of the logits may be created.
string name
A name for this `Op`. Defaults to "ctc_loss_dense".
Returns
object
A 1-D tensor of shape `[batch_size]`, containing the negative log probabilities.
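
As a usage illustration of the dense-label path, here is a minimal sketch in the equivalent TensorFlow Python API, which these bindings mirror; the shapes, label values, and random logits are illustrative assumptions only.

``` import numpy as np
import tensorflow.compat.v1 as tf

batch_size, frames, num_labels = 2, 6, 5        # blank defaults to class index 0

labels = tf.constant([[1, 2, 0, 0],             # dense, zero-padded labels:
                      [3, 3, 4, 0]], tf.int32)  # [batch_size, max_label_seq_length]
label_length = tf.constant([2, 3], tf.int32)    # true label lengths within `labels`
logits = tf.constant(np.random.randn(frames, batch_size, num_labels), tf.float32)
logit_length = tf.constant([frames, frames], tf.int32)

loss = tf.nn.ctc_loss_v2(labels, logits, label_length, logit_length,
                         logits_time_major=True, blank_index=0)  # shape [batch_size] ```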

object ctc_loss_v2(IGraphNodeBase labels, IGraphNodeBase logits, IGraphNodeBase label_length, ndarray logit_length, bool logits_time_major, object unique, Nullable<int> blank_index, string name)

Computes CTC (Connectionist Temporal Classification) loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Notes:

* Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss with the setting preprocess_collapse_repeated=False, ctc_merge_repeated=True.
* Labels may be supplied as either a dense, zero-padded tensor with a vector of label sequence lengths OR as a SparseTensor.
* On TPU and GPU: Only dense padded labels are supported.
* On CPU: Caller may use SparseTensor or dense padded labels, but calling with a SparseTensor will be significantly faster.
* Default blank label is 0 rather than num_classes - 1, unless overridden by blank_index.
Parameters
IGraphNodeBase labels
tensor of shape [batch_size, max_label_seq_length] or SparseTensor
IGraphNodeBase logits
tensor of shape [frames, batch_size, num_labels], if logits_time_major == False, shape is [batch_size, frames, num_labels].
IGraphNodeBase label_length
tensor of shape [batch_size] (None if labels is a SparseTensor). Length of the reference label sequence in labels.
ndarray logit_length
tensor of shape [batch_size]. Length of the input sequence in logits.
bool logits_time_major
(optional) If True (default), logits is shaped [time, batch, logits]. If False, shape is [batch, time, logits].
object unique
(optional) Unique label indices as computed by ctc_unique_labels(labels). If supplied, enables a faster, more memory-efficient implementation on TPU.
Nullable<int> blank_index
(optional) Set the class index to use for the blank label. Negative values will start from num_classes, i.e., -1 will reproduce the ctc_loss behavior of using num_classes - 1 for the blank symbol. There is some memory/performance overhead to switching from the default of 0, as an additional shifted copy of the logits may be created.
string name
A name for this `Op`. Defaults to "ctc_loss_dense".
Returns
object
A 1-D tensor of shape `[batch_size]`, containing the negative log probabilities.

object ctc_loss_v2_dyn(object labels, object logits, object label_length, object logit_length, ImplicitContainer<T> logits_time_major, object unique, object blank_index, object name)

Computes CTC (Connectionist Temporal Classification) loss.

This op implements the CTC loss as presented in the article:

[A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.](http://www.cs.toronto.edu/~graves/icml_2006.pdf)

Notes:

* Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss with the setting preprocess_collapse_repeated=False, ctc_merge_repeated=True.
* Labels may be supplied as either a dense, zero-padded tensor with a vector of label sequence lengths OR as a SparseTensor.
* On TPU and GPU: Only dense padded labels are supported.
* On CPU: Caller may use SparseTensor or dense padded labels, but calling with a SparseTensor will be significantly faster.
* Default blank label is 0 rather than num_classes - 1, unless overridden by blank_index.
Parameters
object labels
tensor of shape [batch_size, max_label_seq_length] or SparseTensor
object logits
tensor of shape [frames, batch_size, num_labels]; if logits_time_major == False, shape is [batch_size, frames, num_labels].
object label_length
tensor of shape [batch_size] (None if labels is a SparseTensor). Length of the reference label sequence in labels.
object logit_length
tensor of shape [batch_size]. Length of the input sequence in logits.
ImplicitContainer<T> logits_time_major
(optional) If True (default), logits is shaped [time, batch, logits]. If False, shape is [batch, time, logits].
object unique
(optional) Unique label indices as computed by ctc_unique_labels(labels). If supplied, enables a faster, memory-efficient implementation on TPU.
object blank_index
(optional) Set the class index to use for the blank label. Negative values will start from num_classes, i.e., -1 will reproduce the ctc_loss behavior of using num_classes - 1 for the blank symbol. There is some memory/performance overhead to switching from the default of 0, as an additional shifted copy of the logits may be created.
object name
A name for this `Op`. Defaults to "ctc_loss_dense".
Returns
object

object ctc_unique_labels(IGraphNodeBase labels, string name)

Get unique labels and indices for batched labels for tf.nn.ctc_loss.

For use with tf.nn.ctc_loss optional argument `unique`: this op can be used to preprocess labels in the input pipeline for better speed/memory use when computing the CTC loss on TPU.

Example: ctc_unique_labels([[3, 4, 4, 3]]) ->
  unique labels padded with 0: [[3, 4, 0, 0]]
  indices of original labels in unique: [0, 1, 1, 0]
Parameters
IGraphNodeBase labels
tensor of shape [batch_size, max_label_length] padded with 0.
string name
A name for this `Op`. Defaults to "ctc_unique_labels".
Returns
object
tuple of:
- unique labels, tensor of shape `[batch_size, max_label_length]`
- indices into unique labels, shape `[batch_size, max_label_length]`
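
A hedged sketch of the intended pipeline use, feeding the result into the `unique` argument of tf.nn.ctc_loss_v2; the tensor helper `tf.constant` and the optional-parameter defaults are assumptions about the binding:

```
// Precompute unique labels once in the input pipeline...
var labels = tf.constant(new[,] { { 3, 4, 4, 3 } });   // padded with 0
object unique = tf.nn.ctc_unique_labels(labels);       // (unique labels, indices), per the Returns above

// ...then pass the tuple through to enable the faster, memory-efficient TPU path:
// object loss = tf.nn.ctc_loss_v2(..., unique: unique, ...);
```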

object ctc_unique_labels(IEnumerable<object> labels, string name)

Get unique labels and indices for batched labels for tf.nn.ctc_loss.

For use with tf.nn.ctc_loss optional argument `unique`: this op can be used to preprocess labels in the input pipeline for better speed/memory use when computing the CTC loss on TPU.

Example: ctc_unique_labels([[3, 4, 4, 3]]) ->
  unique labels padded with 0: [[3, 4, 0, 0]]
  indices of original labels in unique: [0, 1, 1, 0]
Parameters
IEnumerable<object> labels
tensor of shape [batch_size, max_label_length] padded with 0.
string name
A name for this `Op`. Defaults to "ctc_unique_labels".
Returns
object
tuple of:
- unique labels, tensor of shape `[batch_size, max_label_length]`
- indices into unique labels, shape `[batch_size, max_label_length]`

object ctc_unique_labels_dyn(object labels, object name)

Get unique labels and indices for batched labels for tf.nn.ctc_loss.

For use with tf.nn.ctc_loss optional argument `unique`: this op can be used to preprocess labels in the input pipeline for better speed/memory use when computing the CTC loss on TPU.

Example: ctc_unique_labels([[3, 4, 4, 3]]) ->
  unique labels padded with 0: [[3, 4, 0, 0]]
  indices of original labels in unique: [0, 1, 1, 0]
Parameters
object labels
tensor of shape [batch_size, max_label_length] padded with 0.
object name
A name for this `Op`. Defaults to "ctc_unique_labels".
Returns
object
tuple of:
- unique labels, tensor of shape `[batch_size, max_label_length]`
- indices into unique labels, shape `[batch_size, max_label_length]`

Tensor depthwise_conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.
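
A hedged C# sketch with NHWC input, `in_channels = 3`, and `channel_multiplier = 2`, so the output carries 3 * 2 = 6 channels; the tensor helpers and the sequence-typed `strides` overload are assumptions about the binding:

```
var input = tf.zeros(new TensorShape(1, 32, 32, 3));   // [batch, height, width, in_channels]
var filter = tf.zeros(new TensorShape(5, 5, 3, 2));    // [filter_height, filter_width, in_channels, channel_multiplier]

Tensor output = tf.nn.depthwise_conv2d(
    input, filter,
    strides: new object[] { 1, 1, 1, 1 },  // strides[0] == strides[3] == 1, as required
    padding: "SAME",
    rate: new[] { 1, 1 },                  // values > 1 give atrous depthwise convolution (strides must then all be 1)
    name: "depthwise",
    data_format: "NHWC",
    dilations: null);                      // alias of rate; supply only one of the two

// With "SAME" padding the result shape is [1, 32, 32, 3 * 2] = [1, 32, 32, 6].
```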

Tensor depthwise_conv2d(IGraphNodeBase input, ndarray filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, object filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, ndarray filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, ndarray filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, object filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, IGraphNodeBase filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, ndarray filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, object filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, string filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, string filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, object filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, string filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, ndarray filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, string filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, object filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, object filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, IGraphNodeBase filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, string filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, IGraphNodeBase filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, string filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, IGraphNodeBase filter, object strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, string filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, IGraphNodeBase filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
IGraphNodeBase filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, ndarray filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, string filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
string filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, object filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, ndarray filter, IEnumerable<object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
IEnumerable<object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(IGraphNodeBase input, ndarray filter, ValueTuple<int, object> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D with shape according to `data_format`.
ndarray filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int, object> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d(ValueTuple<PythonClassContainer, PythonClassContainer> input, object filter, ValueTuple<int> strides, object padding, object rate, string name, string data_format, IEnumerable<int> dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
ValueTuple<int> strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
IEnumerable<int> dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.
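
To illustrate the `rate` constraint above, here is a hedged sketch of the underlying TensorFlow 1.x Python op performing atrous depthwise convolution (shapes are illustrative; any `rate` value above 1 requires every stride to be 1):

```
import tensorflow as tf  # TensorFlow 1.x assumed

x = tf.ones([1, 8, 8, 3], dtype=tf.float32)
w = tf.ones([3, 3, 3, 1], dtype=tf.float32)

# rate > 1 selects atrous depthwise convolution; all strides must be 1
y = tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME',
                           rate=[2, 2])
print(y.shape)  # (1, 8, 8, 3): channel_multiplier = 1 keeps the channel count
```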

Tensor depthwise_conv2d_backprop_filter(IGraphNodeBase input, IGraphNodeBase filter_sizes, IGraphNodeBase out_backprop, IEnumerable<int> strides, object padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes the gradients of depthwise convolution with respect to the filter.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height, in_width, in_channels]` tensor.
IGraphNodeBase filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `input`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
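
As a shape sanity check, a minimal sketch against the same-named TensorFlow 1.x Python op (assumed available as `tf.nn.depthwise_conv2d_backprop_filter`; shapes are illustrative):

```
import tensorflow as tf  # TensorFlow 1.x assumed

x  = tf.ones([1, 8, 8, 3], dtype=tf.float32)  # forward-pass input
dy = tf.ones([1, 8, 8, 6], dtype=tf.float32)  # gradients w.r.t. the output

# filter_sizes is the shape of the filter, not the filter values
dw = tf.nn.depthwise_conv2d_backprop_filter(
    x, filter_sizes=[3, 3, 3, 2], out_backprop=dy,
    strides=[1, 1, 1, 1], padding='SAME')
print(dw.shape)  # (3, 3, 3, 2): same type as `input`, shaped like the filter
```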

object depthwise_conv2d_backprop_filter_dyn(object input, object filter_sizes, object out_backprop, object strides, object padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name)

Computes the gradients of depthwise convolution with respect to the filter.
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height, in_width, in_channels]` tensor.
object filter_sizes
A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor.
object out_backprop
A `Tensor`. Must have the same type as `input`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
object strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
ImplicitContainer<T> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_backprop_input(IGraphNodeBase input_sizes, IGraphNodeBase filter, IGraphNodeBase out_backprop, IEnumerable<int> strides, object padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes the gradients of depthwise convolution with respect to the input.
Parameters
IGraphNodeBase input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, based on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor.
IGraphNodeBase filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, depthwise_multiplier]`.
IGraphNodeBase out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
IEnumerable<int> strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `filter`.
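
Analogously to the filter gradient, a minimal sketch against the same-named TensorFlow 1.x Python op (assumed available as `tf.nn.depthwise_conv2d_backprop_input`; shapes are illustrative):

```
import tensorflow as tf  # TensorFlow 1.x assumed

w  = tf.ones([3, 3, 3, 2], dtype=tf.float32)
dy = tf.ones([1, 8, 8, 6], dtype=tf.float32)  # gradients w.r.t. the output

# input_sizes is the shape of the forward-pass input
dx = tf.nn.depthwise_conv2d_backprop_input(
    input_sizes=[1, 8, 8, 3], filter=w, out_backprop=dy,
    strides=[1, 1, 1, 1], padding='SAME')
print(dx.shape)  # (1, 8, 8, 3): same type as `filter`, shaped like the input
```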

object depthwise_conv2d_backprop_input_dyn(object input_sizes, object filter, object out_backprop, object strides, object padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name)

Computes the gradients of depthwise convolution with respect to the input.
Parameters
object input_sizes
A `Tensor` of type `int32`. An integer vector representing the shape of `input`, based on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor.
object filter
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, depthwise_multiplier]`.
object out_backprop
A `Tensor`. Must have the same type as `filter`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
object strides
A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
ImplicitContainer<T> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `filter`.

object depthwise_conv2d_dyn(object input, object filter, object strides, object padding, object rate, object name, object data_format, object dilations)

Depthwise 2-D convolution.

Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.

In detail, with the default NHWC format,

``` output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] ```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
object input
4-D with shape according to `data_format`.
object filter
4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`.
object strides
1-D of size 4. The stride of the sliding window for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
object name
A name for this operation (optional).
object data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
object
A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier]`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, object strides, string padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
object strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
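
Unlike tf.nn.depthwise_conv2d, the native op takes no `rate` argument: strides and dilations are passed as length-4 lists directly. A minimal sketch against the underlying TensorFlow 1.x Python op (shapes are illustrative):

```
import tensorflow as tf  # TensorFlow 1.x assumed

x = tf.ones([1, 8, 8, 3], dtype=tf.float32)
w = tf.ones([3, 3, 3, 2], dtype=tf.float32)

# VALID padding with stride 2: out_height = out_width = ceil((8 - 3 + 1) / 2) = 3
y = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 2, 2, 1],
                                  padding='VALID')
print(y.shape)  # (1, 3, 3, 6): channels expand to in_channels * channel_multiplier
```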

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, object strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
object strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
IEnumerable<int> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, IEnumerable<int> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
IEnumerable<int> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
IEnumerable<int> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, string padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
IEnumerable<int> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, IEnumerable<int> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
IEnumerable<int> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, object strides, IEnumerable<int> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
object strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, object strides, string padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
object strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object> strides, IEnumerable<int> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
ValueTuple<int, object> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, object strides, ValueTuple<IEnumerable<object>, object> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
object strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, int strides, string padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
int strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object> strides, string padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
ValueTuple<int, object> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
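
To make the indexing above concrete, here is a minimal NumPy sketch of the documented formula. It is an illustrative reference only (hypothetical `depthwise_conv2d_native_ref` helper; NHWC layout, `VALID` padding, and unit dilations assumed), not the binding's implementation:

```python
import numpy as np

def depthwise_conv2d_native_ref(x, w, stride):
    # NHWC input x: [batch, in_h, in_w, in_c]; filter w: [f_h, f_w, in_c, cm].
    # Implements, with VALID padding and unit dilations:
    #   out[b, i, j, k*cm + q] =
    #       sum_{di, dj} x[b, stride*i + di, stride*j + dj, k] * w[di, dj, k, q]
    batch, in_h, in_w, in_c = x.shape
    f_h, f_w, _, cm = w.shape
    out_h = (in_h - f_h) // stride + 1
    out_w = (in_w - f_w) // stride + 1
    out = np.zeros((batch, out_h, out_w, in_c * cm), dtype=x.dtype)
    for b in range(batch):
        for i in range(out_h):
            for j in range(out_w):
                patch = x[b, i*stride:i*stride + f_h, j*stride:j*stride + f_w, :]
                for k in range(in_c):      # each input channel gets its own
                    for q in range(cm):    # cm depth-1 filters
                        out[b, i, j, k*cm + q] = np.sum(patch[:, :, k] * w[:, :, k, q])
    return out
```

Note that the output channel count is `in_c * cm`, matching the `in_channels * channel_multiplier` result described above.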

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, int strides, IEnumerable<int> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
int strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, object strides, IEnumerable<int> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
object strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, int strides, IEnumerable<int> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
int strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object> strides, string padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
ValueTuple<int, object> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object> strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
ValueTuple<int, object> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object> strides, ValueTuple<IEnumerable<object>, object> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
ValueTuple<int, object> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, ValueTuple<int, object> strides, IEnumerable<int> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
ValueTuple<int, object> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
IEnumerable<int> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, int strides, ValueTuple<IEnumerable<object>, object> padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
int strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, int strides, string padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
int strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, IEnumerable<int> strides, string padding, string data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
IEnumerable<int> strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor depthwise_conv2d_native(IGraphNodeBase input, IGraphNodeBase filter, int strides, ValueTuple<IEnumerable<object>, object> padding, IEnumerable<int> data_format, ImplicitContainer<T> dilations, string name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
IGraphNodeBase filter
A `Tensor`. Must have the same type as `input`.
int strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
ValueTuple<IEnumerable<object>, object> padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
IEnumerable<int> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

object depthwise_conv2d_native_dyn(object input, object filter, object strides, object padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilations, object name)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k]
                   * filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
object filter
A `Tensor`. Must have the same type as `input`.
object strides
A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
ImplicitContainer<T> data_format
An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
ImplicitContainer<T> dilations
An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor dilation2d(object input, object filter, object strides, object rates, object padding, string name, object filters, object dilations)

Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.

The `input` tensor has shape `[batch, in_height, in_width, depth]` and the `filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.

In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with `conv2d`, we use unmirrored filters):

```
output[b, y, x, c] =
    max_{dy, dx} input[b, strides[1] * y + rates[1] * dy,
                          strides[2] * x + rates[2] * dx, c]
                 + filter[dy, dx, c]
```

Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.

Note on duality: The dilation of `input` by the `filter` is equal to the negation of the erosion of `-input` by the reflected `filter`.
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 4-D with shape `[batch, in_height, in_width, depth]`.
object filter
A `Tensor`. Must have the same type as `input`. 3-D with shape `[filter_height, filter_width, depth]`.
object strides
A list of `ints` that has length `>= 4`. The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
object rates
A list of `ints` that has length `>= 4`. The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string name
A name for the operation (optional).
object filters
object dilations
Returns
Tensor
A `Tensor`. Has the same type as `input`.
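
As a rough illustration of the max-sum correlation, the following NumPy sketch evaluates the formula above directly (hypothetical `dilation2d_ref` helper; NHWC layout and `VALID` padding assumed, with the effective dilated filter extent `(f_h - 1) * rate + 1` used for the output size):

```python
import numpy as np

def dilation2d_ref(x, f, stride, rate):
    # x: [batch, in_h, in_w, depth]; f: [f_h, f_w, depth] (one structuring
    # function per channel). VALID padding, same stride/rate on both axes.
    batch, in_h, in_w, depth = x.shape
    f_h, f_w, _ = f.shape
    out_h = (in_h - ((f_h - 1) * rate + 1)) // stride + 1
    out_w = (in_w - ((f_w - 1) * rate + 1)) // stride + 1
    out = np.empty((batch, out_h, out_w, depth), dtype=x.dtype)
    for b in range(batch):
        for y in range(out_h):
            for col in range(out_w):
                for c in range(depth):
                    # max-sum correlation: max over filter taps of input + filter
                    out[b, y, col, c] = max(
                        x[b, y*stride + rate*dy, col*stride + rate*dx, c] + f[dy, dx, c]
                        for dy in range(f_h) for dx in range(f_w))
    return out
```

With an all-zero `f`, the added filter term vanishes and the result reduces to max pooling, as noted above.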

object dilation2d_dyn(object input, object filter, object strides, object rates, object padding, object name, object filters, object dilations)

Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.

The `input` tensor has shape `[batch, in_height, in_width, depth]` and the `filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.

In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with `conv2d`, we use unmirrored filters):

```
output[b, y, x, c] =
    max_{dy, dx} input[b, strides[1] * y + rates[1] * dy,
                          strides[2] * x + rates[2] * dx, c]
                 + filter[dy, dx, c]
```

Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.

Note on duality: The dilation of `input` by the `filter` is equal to the negation of the erosion of `-input` by the reflected `filter`.
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 4-D with shape `[batch, in_height, in_width, depth]`.
object filter
A `Tensor`. Must have the same type as `input`. 3-D with shape `[filter_height, filter_width, depth]`.
object strides
A list of `ints` that has length `>= 4`. The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
object rates
A list of `ints` that has length `>= 4`. The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
object name
A name for the operation (optional).
object filters
object dilations
Returns
object
A `Tensor`. Has the same type as `input`.

object dropout(PythonClassContainer x, double keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
PythonClassContainer x
A floating point tensor.
double keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.
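
The keep/drop-and-rescale behavior, including `noise_shape` broadcasting, can be sketched in a few lines of NumPy (hypothetical `dropout_ref` helper shown only to illustrate the documented semantics; it does not mirror the binding's random-number generation):

```python
import numpy as np

def dropout_ref(x, rate, noise_shape=None, seed=None):
    # Zero each element with probability `rate`; scale survivors by
    # 1 / (1 - rate) so the expected sum of the output equals that of x.
    rng = np.random.default_rng(seed)
    shape = x.shape if noise_shape is None else noise_shape
    keep = rng.random(shape) >= rate            # broadcasts against x below
    return np.where(keep, x / (1.0 - rate), 0.0)

# noise_shape = [k, 1, 1, n] against shape(x) = [k, l, m, n]: every
# (batch, channel) pair keeps or drops its whole spatial block together.
x = np.ones((2, 3, 3, 4), dtype=np.float32)
y = dropout_ref(x, rate=0.5, noise_shape=(2, 1, 1, 4), seed=0)
```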

object dropout(IGraphNodeBase x, IGraphNodeBase keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IGraphNodeBase x
A floating point tensor.
IGraphNodeBase keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(PythonClassContainer x, IEnumerable<double> keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
PythonClassContainer x
A floating point tensor.
IEnumerable<double> keep_prob
A deprecated alias for `(1 - rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(PythonClassContainer x, double keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
PythonClassContainer x
A floating point tensor.
double keep_prob
A deprecated alias for `(1 - rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(PythonClassContainer x, IEnumerable<double> keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
PythonClassContainer x
A floating point tensor.
IEnumerable<double> keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IGraphNodeBase x, IGraphNodeBase keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IGraphNodeBase x
A floating point tensor.
IGraphNodeBase keep_prob
A deprecated alias for `(1 - rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(PythonClassContainer x, IGraphNodeBase keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
PythonClassContainer x
A floating point tensor.
IGraphNodeBase keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IGraphNodeBase x, IEnumerable<double> keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IGraphNodeBase x
A floating point tensor.
IEnumerable<double> keep_prob
A deprecated alias for `(1 - rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IEnumerable<IGraphNodeBase> x, IEnumerable<double> keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IEnumerable<IGraphNodeBase> x
A floating point tensor.
IEnumerable<double> keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IEnumerable<IGraphNodeBase> x, IEnumerable<double> keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IEnumerable<IGraphNodeBase> x
A floating point tensor.
IEnumerable<double> keep_prob
A deprecated alias for `(1 - rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IGraphNodeBase x, double keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IGraphNodeBase x
A floating point tensor.
double keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IGraphNodeBase x, double keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IGraphNodeBase x
A floating point tensor.
double keep_prob
A deprecated alias for `(1 - rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IEnumerable<IGraphNodeBase> x, double keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IEnumerable<IGraphNodeBase> x
A floating point tensor.
double keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IEnumerable<IGraphNodeBase> x, IGraphNodeBase keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IEnumerable<IGraphNodeBase> x
A floating point tensor.
IGraphNodeBase keep_prob
A deprecated alias for `(1 - rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IEnumerable<IGraphNodeBase> x, IGraphNodeBase keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

For each element of `x`, with probability `rate`, outputs `0`, and otherwise scales up the input by `1 / (1-rate)`. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IEnumerable<IGraphNodeBase> x
A floating point tensor.
IGraphNodeBase keep_prob
(deprecated) A deprecated alias for `(1-rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IEnumerable<IGraphNodeBase> x, double keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

With probability `rate`, each element of `x` is set to `0`; otherwise it is scaled up by `1 / (1 - rate)`. The scaling is chosen so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IEnumerable<IGraphNodeBase> x
A floating point tensor.
double keep_prob
(deprecated) A deprecated alias for `(1-rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(PythonClassContainer x, IGraphNodeBase keep_prob, IEnumerable<int> noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

With probability `rate`, each element of `x` is set to `0`; otherwise it is scaled up by `1 / (1 - rate)`. The scaling is chosen so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
PythonClassContainer x
A floating point tensor.
IGraphNodeBase keep_prob
(deprecated) A deprecated alias for `(1-rate)`.
IEnumerable<int> noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout(IGraphNodeBase x, IEnumerable<double> keep_prob, IGraphNodeBase noise_shape, Nullable<int> seed, string name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

With probability `rate`, each element of `x` is set to `0`; otherwise it is scaled up by `1 / (1 - rate)`. The scaling is chosen so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
IGraphNodeBase x
A floating point tensor.
IEnumerable<double> keep_prob
(deprecated) A deprecated alias for `(1-rate)`.
IGraphNodeBase noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
Nullable<int> seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
string name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.

object dropout_dyn(object x, object keep_prob, object noise_shape, object seed, object name, object rate)

Computes dropout. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

With probability `rate`, each element of `x` is set to `0`; otherwise it is scaled up by `1 / (1 - rate)`. The scaling is chosen so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Parameters
object x
A floating point tensor.
object keep_prob
(deprecated) A deprecated alias for `(1-rate)`.
object noise_shape
A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.
object seed
A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
object name
A name for this operation (optional).
object rate
A scalar `Tensor` with the same type as `x`. The probability that each element of `x` is discarded.
Returns
object
A Tensor of the same shape as `x`.
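
The overloads above differ only in the accepted argument types. As a minimal sketch of the `rate` and `noise_shape` semantics, the following uses the underlying TensorFlow Python API in `tf.compat.v1` graph mode; the input shape `[2, 3, 4]` and the drop probability `0.4` are assumptions chosen for illustration.

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # graph mode, matching the tf.compat.v1 API

x = tf.ones([2, 3, 4])  # illustrative [batch, time, channels] input

# Independent keep/drop decision per element; survivors scale by 1 / (1 - 0.4).
y = tf.nn.dropout(x, rate=0.4, seed=1)

# noise_shape = [2, 1, 4]: the time dimension is broadcast, so for each
# (batch, channel) pair all 3 time steps are kept or dropped together.
y_rows = tf.nn.dropout(x, rate=0.4, noise_shape=[2, 1, 4], seed=1)

with tf.Session() as sess:
    print(sess.run(y))       # zeros mixed with values of 1 / 0.6
    print(sess.run(y_rows))  # each [1, 3, 1] slice is all-zero or all-scaled
```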

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IndexedSlices sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
IndexedSlices sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)
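
To see the effect of `sequence_length`, here is a minimal Python sketch in the same `tf.compat.v1` style; the batch of two sequences, the lengths `[4, 2]`, and all shapes are assumptions for illustration.

```
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Two sequences padded to max_time = 4; the second is only 2 steps long.
input_data = tf.constant(np.random.rand(2, 4, 3), dtype=tf.float32)
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=5)

outputs, state = tf.nn.dynamic_rnn(
    rnn_cell, input_data,
    sequence_length=[4, 2],  # outputs past a sequence's length are zeroed
    dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out, st = sess.run([outputs, state])
    # out[1, 2:] is all zeros; st[1] equals out[1, 1], the state copied
    # through from the last valid step of the shorter sequence.
```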

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IGraphNodeBase sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IndexedSlices sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
IndexedSlices sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IEnumerable<int> sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IGraphNodeBase sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)

ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IEnumerable<int> sequence_length, object initial_state, DType dtype, Nullable<int> parallel_iterations, bool swap_memory, Nullable<bool> time_major, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API.

Performs fully dynamic unrolling of `inputs`.

Example:

Parameters
object cell
An instance of RNNCell.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size,...]`.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make the computation take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
Nullable<bool> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object>
A pair `(outputs, state)`, where `outputs` is the RNN output `Tensor` and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)

Tensor elu(IGraphNodeBase features, string name)

Computes the exponential linear unit: `exp(features) - 1` if `features < 0`, `features` otherwise.

See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) ](http://arxiv.org/abs/1511.07289)
Parameters
IGraphNodeBase features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `features`.
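
For concreteness, a small Python sketch of the same op; the input values are chosen arbitrarily.

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

features = tf.constant([-2.0, -0.5, 0.0, 1.5])
y = tf.nn.elu(features)

with tf.Session() as sess:
    print(sess.run(y))
    # [exp(-2) - 1, exp(-0.5) - 1, 0.0, 1.5]
    # ~= [-0.8647, -0.3935, 0.0, 1.5]
```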

object elu_dyn(object features, object name)

Computes the exponential linear unit: `exp(features) - 1` if `features < 0`, `features` otherwise.

See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) ](http://arxiv.org/abs/1511.07289)
Parameters
object features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `features`.

Tensor embedding_lookup(IEnumerable<IGraphNodeBase> params, ValueTuple<PythonClassContainer, PythonClassContainer> ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
IEnumerable<IGraphNodeBase> params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
ValueTuple<PythonClassContainer, PythonClassContainer> ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.
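
As a minimal sketch of the `"mod"` partition strategy described above, the shard shapes below reproduce the 13-ids-across-5-partitions split; the embedding width `8` and the looked-up ids are assumptions for illustration.

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A 13-row embedding table split across 5 shards; under "mod", shard p
# holds rows with id % 5 == p, i.e. [[0,5,10],[1,6,11],[2,7,12],[3,8],[4,9]].
params = [tf.random.uniform([3, 8]),  # ids 0, 5, 10
          tf.random.uniform([3, 8]),  # ids 1, 6, 11
          tf.random.uniform([3, 8]),  # ids 2, 7, 12
          tf.random.uniform([2, 8]),  # ids 3, 8
          tf.random.uniform([2, 8])]  # ids 4, 9

ids = tf.constant([0, 7, 12])
emb = tf.nn.embedding_lookup(params, ids, partition_strategy="mod")

with tf.Session() as sess:
    print(sess.run(emb).shape)  # (3, 8) == shape(ids) + shape(params)[1:]
```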

Tensor embedding_lookup(IEnumerable<IGraphNodeBase> params, IndexedSlices ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
IEnumerable<IGraphNodeBase> params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IndexedSlices ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(object params, IndexedSlices ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IndexedSlices ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(IEnumerable<IGraphNodeBase> params, IGraphNodeBase ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
IEnumerable<IGraphNodeBase> params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(object params, IGraphNodeBase ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(object params, object ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
object ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(object params, IEnumerable<int> ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IEnumerable<int> ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(IEnumerable<IGraphNodeBase> params, object ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
IEnumerable<IGraphNodeBase> params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
object ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(object params, ValueTuple<PythonClassContainer, PythonClassContainer> ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
ValueTuple<PythonClassContainer, PythonClassContainer> ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

Tensor embedding_lookup(IEnumerable<IGraphNodeBase> params, IEnumerable<int> ids, string partition_strategy, string name, bool validate_indices, Nullable<double> max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
IEnumerable<IGraphNodeBase> params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IEnumerable<int> ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
string name
A name for the operation (optional).
bool validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
Tensor
A `Tensor` with the same type as the tensors in `params`.

object embedding_lookup_dyn(object params, object ids, ImplicitContainer<T> partition_strategy, object name, ImplicitContainer<T> validate_indices, object max_norm)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of tf.gather, where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`. In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`. For instance, 13 ids are split across 5 partitions as: `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
object ids
A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.
ImplicitContainer<T> partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.
object name
A name for the operation (optional).
ImplicitContainer<T> validate_indices
DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value.
Returns
object
A `Tensor` with the same type as the tensors in `params`.

object embedding_lookup_sparse(object params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, string name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```
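As a sanity check, the combiner semantics can be reproduced with NumPy (a sketch of the math above, not of the op itself; the variable names are illustrative):

```python
import numpy as np

params = np.random.rand(10, 20).astype(np.float32)   # the 10x20 embedding matrix
rows = {0: [(1, 2.0), (3, 0.5)],                     # (id, weight) pairs per row
        1: [(0, 1.0)],
        2: [(1, 3.0)]}

output = np.zeros((3, 20), dtype=np.float32)
for r, entries in rows.items():
    ids = np.array([i for i, _ in entries])
    wts = np.array([w for _, w in entries], dtype=np.float32)
    # combiner="mean": weighted sum of embeddings divided by the total weight
    output[r] = (params[ids] * wts[:, None]).sum(axis=0) / wts.sum()
```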

object embedding_lookup_sparse(object params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, PythonFunctionContainer name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
PythonFunctionContainer name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

object embedding_lookup_sparse(PartitionedVariable params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, string name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
PartitionedVariable params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

object embedding_lookup_sparse(PartitionedVariable params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, PythonFunctionContainer name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
PartitionedVariable params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
PythonFunctionContainer name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

object embedding_lookup_sparse(Variable params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, PythonFunctionContainer name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
Variable params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
PythonFunctionContainer name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

object embedding_lookup_sparse(IEnumerable<object> params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, string name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
IEnumerable<object> params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

object embedding_lookup_sparse(IEnumerable<object> params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, PythonFunctionContainer name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
IEnumerable<object> params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
PythonFunctionContainer name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

object embedding_lookup_sparse(Variable params, IGraphNodeBase sp_ids, IGraphNodeBase sp_weights, string partition_strategy, string name, string combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
Variable params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
IGraphNodeBase sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
IGraphNodeBase sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
Optional name for the op.
string combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

object embedding_lookup_sparse_dyn(object params, object sp_ids, object sp_weights, ImplicitContainer<T> partition_strategy, object name, object combiner, object max_norm)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Parameters
object params
A single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
object sp_ids
N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary.
object sp_weights
either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`.
ImplicitContainer<T> partition_strategy
A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
object name
Optional name for the op.
object combiner
A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
object max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
Returns
object
A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

`shape(combined params) = [p0, p1,..., pm]`

and

`shape(sp_ids) = shape(sp_weights) = [d0, d1,..., dn]`

then

`shape(output) = [d0, d1,..., dn-1, p1,..., pm]`.

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```

with `combiner`="mean", then the output will be a 3x20 matrix where

```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```

Tensor erosion2d(IGraphNodeBase value, IGraphNodeBase kernel, IEnumerable<int> strides, IEnumerable<int> rates, string padding, string name)

Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors.

The `value` tensor has shape `[batch, in_height, in_width, depth]` and the `kernel` tensor has shape `[kernel_height, kernel_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.

In detail, the grayscale morphological 2-D erosion is given by:

output[b, y, x, c] = min_{dy, dx} value[b, strides[1] * y - rates[1] * dy, strides[2] * x - rates[2] * dx, c] - kernel[dy, dx, c]

Duality: The erosion of `value` by the `kernel` is equal to the negation of the dilation of `-value` by the reflected `kernel`.
Parameters
IGraphNodeBase value
A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`.
IGraphNodeBase kernel
A `Tensor`. Must have the same type as `value`. 3-D with shape `[kernel_height, kernel_width, depth]`.
IEnumerable<int> strides
A list of `ints` that has length `>= 4`. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
IEnumerable<int> rates
A list of `ints` that has length `>= 4`. 1-D of length 4. The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`.
string padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string name
A name for the operation (optional). If not specified "erosion2d" is used.
Returns
Tensor
A `Tensor`. Has the same type as `value`. 4-D with shape `[batch, out_height, out_width, depth]`.
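A direct NumPy transcription of the formula above, for a single channel with unit strides and rates and VALID padding (an illustrative sketch, not the op itself):

```python
import numpy as np

def erosion2d_single_channel(value, kernel):
    # value: [H, W], kernel: [kh, kw]; strides = rates = 1, VALID padding.
    kh, kw = kernel.shape
    H, W = value.shape
    out = np.empty((H - kh + 1, W - kw + 1), dtype=value.dtype)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Per the index convention above, the kernel is reflected
            # relative to the window: min over (value - flipped kernel).
            out[y, x] = (value[y:y + kh, x:x + kw] - kernel[::-1, ::-1]).min()
    return out
```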

object erosion2d_dyn(object value, object kernel, object strides, object rates, object padding, object name)

Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors.

The `value` tensor has shape `[batch, in_height, in_width, depth]` and the `kernel` tensor has shape `[kernel_height, kernel_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.

In detail, the grayscale morphological 2-D erosion is given by:

output[b, y, x, c] = min_{dy, dx} value[b, strides[1] * y - rates[1] * dy, strides[2] * x - rates[2] * dx, c] - kernel[dy, dx, c]

Duality: The erosion of `value` by the `kernel` is equal to the negation of the dilation of `-value` by the reflected `kernel`.
Parameters
object value
A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`.
object kernel
A `Tensor`. Must have the same type as `value`. 3-D with shape `[kernel_height, kernel_width, depth]`.
object strides
A list of `ints` that has length `>= 4`. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
object rates
A list of `ints` that has length `>= 4`. 1-D of length 4. The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
object name
A name for the operation (optional). If not specified "erosion2d" is used.
Returns
object
A `Tensor`. Has the same type as `value`. 4-D with shape `[batch, out_height, out_width, depth]`.

object fixed_unigram_candidate_sampler(IGraphNodeBase true_classes, object num_true, object num_sampled, object unique, object range_max, string vocab_file, double distortion, int num_reserved_ids, int num_shards, int shard, ValueTuple<object> unigrams, object seed, string name)

Samples a set of classes using the provided (fixed) base distribution.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution is read from a file or passed in as an in-memory array. There is also an option to skew the distribution by applying a distortion power to the weights.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object num_true
An `int`. The number of target classes per training example.
object num_sampled
An `int`. The number of classes to randomly sample.
object unique
A `bool`. Determines whether all sampled classes in a batch are unique.
object range_max
An `int`. The number of possible classes.
string vocab_file
Each valid line in this file (which should have a CSV-like format) corresponds to a valid word ID. IDs are in sequential order, starting from num_reserved_ids. The last entry in each line is expected to be a value corresponding to the count or relative probability. Exactly one of `vocab_file` and `unigrams` needs to be passed to this operation.
double distortion
The distortion is used to skew the unigram probability distribution. Each weight is first raised to the distortion's power before adding to the internal unigram distribution. As a result, `distortion = 1.0` gives regular unigram sampling (as defined by the vocab file), and `distortion = 0.0` gives a uniform distribution.
int num_reserved_ids
Optionally some reserved IDs can be added in the range `[0, num_reserved_ids)` by the users. One use case is that a special unknown word token is used as ID 0. These IDs will have a sampling probability of 0.
int num_shards
A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with `shard`) indicates the number of partitions that are being used in the overall computation.
int shard
A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with `num_shards`) indicates the particular partition number of the operation, when partitioning is being used.
ValueTuple<object> unigrams
A list of unigram counts or probabilities, one per ID in sequential order. Exactly one of `vocab_file` and `unigrams` should be passed to this operation.
object seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object
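The effect of `distortion` on the base distribution can be sketched directly (hypothetical counts; the op applies this transformation internally):

```python
import numpy as np

unigrams = np.array([10.0, 5.0, 2.0, 1.0])  # hypothetical unigram counts

def sampling_probs(weights, distortion):
    w = weights ** distortion
    return w / w.sum()

sampling_probs(unigrams, 1.0)  # proportional to the raw counts
sampling_probs(unigrams, 0.5)  # flattened toward uniform
sampling_probs(unigrams, 0.0)  # exactly uniform: [0.25, 0.25, 0.25, 0.25]
```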

object fixed_unigram_candidate_sampler_dyn(object true_classes, object num_true, object num_sampled, object unique, object range_max, ImplicitContainer<T> vocab_file, ImplicitContainer<T> distortion, ImplicitContainer<T> num_reserved_ids, ImplicitContainer<T> num_shards, ImplicitContainer<T> shard, ImplicitContainer<T> unigrams, object seed, object name)

Samples a set of classes using the provided (fixed) base distribution.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution is read from a file or passed in as an in-memory array. There is also an option to skew the distribution by applying a distortion power to the weights.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
object true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object num_true
An `int`. The number of target classes per training example.
object num_sampled
An `int`. The number of classes to randomly sample.
object unique
A `bool`. Determines whether all sampled classes in a batch are unique.
object range_max
An `int`. The number of possible classes.
ImplicitContainer<T> vocab_file
Each valid line in this file (which should have a CSV-like format) corresponds to a valid word ID. IDs are in sequential order, starting from num_reserved_ids. The last entry in each line is expected to be a value corresponding to the count or relative probability. Exactly one of `vocab_file` and `unigrams` needs to be passed to this operation.
ImplicitContainer<T> distortion
The distortion is used to skew the unigram probability distribution. Each weight is first raised to the distortion's power before adding to the internal unigram distribution. As a result, `distortion = 1.0` gives regular unigram sampling (as defined by the vocab file), and `distortion = 0.0` gives a uniform distribution.
ImplicitContainer<T> num_reserved_ids
Optionally some reserved IDs can be added in the range `[0, num_reserved_ids)` by the users. One use case is that a special unknown word token is used as ID 0. These IDs will have a sampling probability of 0.
ImplicitContainer<T> num_shards
A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with `shard`) indicates the number of partitions that are being used in the overall computation.
ImplicitContainer<T> shard
A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with `num_shards`) indicates the particular partition number of the operation, when partitioning is being used.
ImplicitContainer<T> unigrams
A list of unigram counts or probabilities, one per ID in sequential order. Exactly one of `vocab_file` and `unigrams` should be passed to this operation.
object seed
An `int`. An operation-specific seed. Default is 0.
object name
A name for the operation (optional).
Returns
object

object fractional_avg_pool(IGraphNodeBase value, object pooling_ratio, bool pseudo_random, bool overlapping, bool deterministic, int seed, int seed2, string name)

Performs fractional average pooling on the input. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: `seed2` and `deterministic` args are deprecated. Use fractional_avg_pool_v2.

This is a deprecated version of `fractional_avg_pool`.

Fractional average pooling is similar to fractional max pooling in the pooling region generation step. The only difference is that, after the pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.
Parameters
IGraphNodeBase value
A `Tensor`. 4-D with shape `[batch, height, width, channels]`.
object pooling_ratio
A list of `floats` that has length >= 4. Pooling ratio for each dimension of `value`; currently only the row and col dimensions are supported, and each ratio should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because pooling on the batch and channels dimensions is not allowed; 1.44 and 1.73 are the pooling ratios on the height and width dimensions, respectively.
bool pseudo_random
An optional `bool`. Defaults to `False`. When set to `True`, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for difference between pseudorandom and random.
bool overlapping
An optional `bool`. Defaults to `False`. When set to `True`, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: `index 0 1 2 3 4` `value 20 5 16 3 7` If the pooling sequence is [0, 2, 4], then the value 16 at index 2 will be used twice. The result would be [41/3, 26/3] for fractional avg pooling.
bool deterministic
An optional `bool`. Deprecated; use `fractional_avg_pool_v2` instead.
int seed
An optional `int`. Defaults to `0`. If set to be non-zero, the random number generator is seeded by the given seed. Otherwise it is seeded by a random seed.
int seed2
An optional `int`. Deprecated; use `fractional_avg_pool_v2` instead.
string name
A name for the operation (optional).
Returns
object
A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, `col_pooling_sequence`).
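The overlapping example above can be checked by hand (a sketch of the arithmetic only, not of the op):

```python
value = [20, 5, 16, 3, 7]
seq = [0, 2, 4]  # pooling boundaries
# With overlapping=True, each cell spans seq[i]..seq[i+1] inclusive.
cells = [value[a:b + 1] for a, b in zip(seq, seq[1:])]
avg = [sum(c) / len(c) for c in cells]  # [41/3, 26/3] for fractional avg pooling
mx = [max(c) for c in cells]            # [20, 16] for fractional max pooling
```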

object fractional_max_pool(IGraphNodeBase value, object pooling_ratio, bool pseudo_random, bool overlapping, bool deterministic, int seed, int seed2, string name)

Performs fractional max pooling on the input. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: `seed2` and `deterministic` args are deprecated. Use fractional_max_pool_v2.

This is a deprecated version of `fractional_max_pool`.

Fractional max pooling is slightly different from regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as the word "fractional" suggests, means that the overall reduction ratio N does not have to be an integer.

The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.

First we define the following:

1. input_row_length : the number of rows from the input set
2. output_row_length : which will be smaller than the input
3. alpha = input_row_length / output_row_length : our reduction ratio
4. K = floor(alpha)
5. row_pooling_sequence : this is the result list of pool boundary rows

Then, row_pooling_sequence should satisfy:

1. a[0] = 0 : the first value of the sequence is 0
2. a[end] = input_row_length : the last value of the sequence is the size
3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
4. length(row_pooling_sequence) = output_row_length+1

For more details on fractional max pooling, see this paper: [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
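A short sketch makes the constraints concrete (a simplified pseudorandom generator, not the one TensorFlow uses):

```python
import random

def make_pooling_sequence(input_row_length, output_row_length, rng=random):
    alpha = input_row_length / output_row_length
    K = int(alpha)
    # Choose how many intervals get size K+1 so the sizes sum to
    # input_row_length, then shuffle their positions.
    n_big = input_row_length - K * output_row_length
    sizes = [K + 1] * n_big + [K] * (output_row_length - n_big)
    rng.shuffle(sizes)
    seq = [0]
    for s in sizes:
        seq.append(seq[-1] + s)
    return seq

seq = make_pooling_sequence(10, 7)
# Constraints 1, 2, and 4: starts at 0, ends at 10, has length 7 + 1.
assert seq[0] == 0 and seq[-1] == 10 and len(seq) == 8
# Constraint 3: every interval has size K or K+1.
assert all(1 <= b - a <= 2 for a, b in zip(seq, seq[1:]))
```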
Parameters
IGraphNodeBase value
A `Tensor`. 4-D with shape `[batch, height, width, channels]`.
object pooling_ratio
A list of `floats` that has length >= 4. Pooling ratio for each dimension of `value`; currently only the row and col dimensions are supported, and each ratio should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because pooling on the batch and channels dimensions is not allowed; 1.44 and 1.73 are the pooling ratios on the height and width dimensions, respectively.
bool pseudo_random
An optional `bool`. Defaults to `False`. When set to `True`, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for difference between pseudorandom and random.
bool overlapping
An optional `bool`. Defaults to `False`. When set to `True`, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: `index 0 1 2 3 4` `value 20 5 16 3 7` If the pooling sequence is [0, 2, 4], then the value 16 at index 2 will be used twice. The result would be [20, 16] for fractional max pooling.
bool deterministic
An optional `bool`. Deprecated; use `fractional_max_pool_v2` instead.
int seed
An optional `int`. Defaults to `0`. If set to be non-zero, the random number generator is seeded by the given seed. Otherwise it is seeded by a random seed.
int seed2
An optional `int`. Deprecated; use `fractional_max_pool_v2` instead.
string name
A name for the operation (optional).
Returns
object
A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, `col_pooling_sequence`).

ValueTuple<object, object, object> fused_batch_norm(IEnumerable<IGraphNodeBase> x, IGraphNodeBase scale, IGraphNodeBase offset, IGraphNodeBase mean, IGraphNodeBase variance, double epsilon, string data_format, bool is_training, string name)

Batch normalization.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IEnumerable<IGraphNodeBase> x
Input `Tensor` of 4 dimensions.
IGraphNodeBase scale
A `Tensor` of 1 dimension for scaling.
IGraphNodeBase offset
A `Tensor` of 1 dimension for bias.
IGraphNodeBase mean
A `Tensor` of 1 dimension for population mean used for inference.
IGraphNodeBase variance
A `Tensor` of 1 dimension for population variance used for inference.
double epsilon
A small float number added to the variance of x.
string data_format
The data format for x. Either "NHWC" (default) or "NCHW".
bool is_training
A bool value to specify if the operation is used for training or inference.
string name
A name for this operation (optional).
Returns
ValueTuple<object, object, object>
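In inference mode the op reduces to the standard per-channel normalization, which can be sketched in NumPy (assuming "NHWC", so channels are the last axis; the default `epsilon` here is illustrative):

```python
import numpy as np

def batch_norm_inference(x, scale, offset, mean, variance, epsilon=0.001):
    # x: [N, H, W, C]; scale, offset, mean, variance: [C].
    # Broadcasting applies the per-channel statistics across N, H, W.
    return scale * (x - mean) / np.sqrt(variance + epsilon) + offset
```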

ValueTuple<object, object, object> fused_batch_norm(IGraphNodeBase x, IGraphNodeBase scale, IGraphNodeBase offset, IGraphNodeBase mean, IGraphNodeBase variance, double epsilon, string data_format, bool is_training, string name)

Batch normalization.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
IGraphNodeBase x
Input `Tensor` of 4 dimensions.
IGraphNodeBase scale
A `Tensor` of 1 dimension for scaling.
IGraphNodeBase offset
A `Tensor` of 1 dimension for bias.
IGraphNodeBase mean
A `Tensor` of 1 dimension for population mean used for inference.
IGraphNodeBase variance
A `Tensor` of 1 dimension for population variance used for inference.
double epsilon
A small float number added to the variance of x.
string data_format
The data format for x. Either "NHWC" (default) or "NCHW".
bool is_training
A bool value to specify if the operation is used for training or inference.
string name
A name for this operation (optional).
Returns
ValueTuple<object, object, object>

object fused_batch_norm_dyn(object x, object scale, object offset, object mean, object variance, ImplicitContainer<T> epsilon, ImplicitContainer<T> data_format, ImplicitContainer<T> is_training, object name)

Batch normalization.

See Source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
Parameters
object x
Input `Tensor` of 4 dimensions.
object scale
A `Tensor` of 1 dimension for scaling.
object offset
A `Tensor` of 1 dimension for bias.
object mean
A `Tensor` of 1 dimension for population mean used for inference.
object variance
A `Tensor` of 1 dimension for population variance used for inference.
ImplicitContainer<T> epsilon
A small float number added to the variance of x.
ImplicitContainer<T> data_format
The data format for x. Either "NHWC" (default) or "NCHW".
ImplicitContainer<T> is_training
A bool value to specify if the operation is used for training or inference.
object name
A name for this operation (optional).
Returns
object

Tensor l2_loss(IGraphNodeBase t, string name)

L2 Loss.

Computes half the L2 norm of a tensor without the `sqrt`:

output = sum(t ** 2) / 2
Parameters
IGraphNodeBase t
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Typically 2-D, but may have any dimensions.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `t`.
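The formula is small enough to check in NumPy (a sketch of the math, not of the op):

```python
import numpy as np

def l2_loss(t):
    # Half the squared L2 norm, without the sqrt.
    return np.sum(np.square(t)) / 2.0

l2_loss(np.array([3.0, 4.0]))  # (9 + 16) / 2 = 12.5
```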

object l2_loss_dyn(object t, object name)

L2 Loss.

Computes half the L2 norm of a tensor without the `sqrt`:

output = sum(t ** 2) / 2
Parameters
object t
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Typically 2-D, but may have any dimensions.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `t`.

Tensor l2_normalize(IGraphNodeBase x, Nullable<int> axis, double epsilon, string name, Nullable<int> dim)

Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For a 1-D tensor with `axis = 0`, computes

output = x / sqrt(max(sum(x**2), epsilon))

For `x` with more dimensions, independently normalizes each 1-D slice along dimension `axis`.
Parameters
IGraphNodeBase x
A `Tensor`.
Nullable<int> axis
Dimension along which to normalize. A scalar or a vector of integers.
double epsilon
A lower bound value for the norm. Will use `sqrt(epsilon)` as the divisor if `norm < sqrt(epsilon)`.
string name
A name for this operation (optional).
Nullable<int> dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` with the same shape as `x`.
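A NumPy sketch of the same computation (the default `epsilon` shown here is an assumption for illustration):

```python
import numpy as np

def l2_normalize(x, axis=0, epsilon=1e-12):
    # output = x / sqrt(max(sum(x**2), epsilon)), per 1-D slice along axis.
    sq_sum = np.sum(np.square(x), axis=axis, keepdims=True)
    return x / np.sqrt(np.maximum(sq_sum, epsilon))

l2_normalize(np.array([3.0, 4.0]))  # [0.6, 0.8]
```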

object l2_normalize_dyn(object x, object axis, ImplicitContainer<T> epsilon, object name, object dim)

Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For a 1-D tensor with `axis = 0`, computes

output = x / sqrt(max(sum(x**2), epsilon))

For `x` with more dimensions, independently normalizes each 1-D slice along dimension `axis`.
Parameters
object x
A `Tensor`.
object axis
Dimension along which to normalize. A scalar or a vector of integers.
ImplicitContainer<T> epsilon
A lower bound value for the norm. Will use `sqrt(epsilon)` as the divisor if `norm < sqrt(epsilon)`.
object name
A name for this operation (optional).
object dim
Deprecated alias for axis.
Returns
object
A `Tensor` with the same shape as `x`.

object learned_unigram_candidate_sampler(IGraphNodeBase true_classes, int num_true, int num_sampled, bool unique, int range_max, object seed, string name)

Samples a set of classes from a distribution learned during training.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution for this operation is constructed on the fly during training. It is a unigram distribution over the target classes seen so far during training. Every integer in `[0, range_max)` begins with a weight of 1, and is incremented by 1 each time it is seen as a target class. The base distribution is not saved to checkpoints, so it is reset when the model is reloaded.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
int num_true
An `int`. The number of target classes per training example.
int num_sampled
An `int`. The number of classes to randomly sample.
bool unique
A `bool`. Determines whether all sampled classes in a batch are unique.
int range_max
An `int`. The number of possible classes.
object seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object
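A minimal Python sketch of the wrapped op; the batch contents and sampler sizes are assumptions for illustration:

```
import tensorflow.compat.v1 as tf

true_classes = tf.constant([[0], [3]], dtype=tf.int64)  # [batch_size, num_true]
sampled, true_expected, sampled_expected = tf.nn.learned_unigram_candidate_sampler(
    true_classes, num_true=1, num_sampled=4, unique=True, range_max=10, seed=42)
# sampled: 4 class ids drawn from the unigram distribution learned so far;
# the two expected-count tensors correspond to Q(y|x) described above.
```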

object learned_unigram_candidate_sampler_dyn(object true_classes, object num_true, object num_sampled, object unique, object range_max, object seed, object name)

Samples a set of classes from a distribution learned during training.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution for this operation is constructed on the fly during training. It is a unigram distribution over the target classes seen so far during training. Every integer in `[0, range_max)` begins with a weight of 1, and is incremented by 1 each time it is seen as a target class. The base distribution is not saved to checkpoints, so it is reset when the model is reloaded.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
object true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object num_true
An `int`. The number of target classes per training example.
object num_sampled
An `int`. The number of classes to randomly sample.
object unique
A `bool`. Determines whether all sampled classes in a batch are unique.
object range_max
An `int`. The number of possible classes.
object seed
An `int`. An operation-specific seed. Default is 0.
object name
A name for the operation (optional).
Returns
object

Tensor log_poisson_loss(IGraphNodeBase targets, IDictionary<object, object> log_input, bool compute_full_loss, string name)

Computes log Poisson loss given `log_input`.

Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.

For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson loss is

  -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

Note: the bracketed term is Stirling's Approximation for log(z!). It is invariant to x and does not affect optimization, though it matters for correct relative loss comparisons; it is only computed when `compute_full_loss == True`.
Parameters
IGraphNodeBase targets
A `Tensor` of the same type and shape as `log_input`.
IDictionary<object, object> log_input
A `Tensor` of type `float32` or `float64`.
bool compute_full_loss
whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `log_input` with the componentwise log-Poisson losses.
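A minimal Python sketch of the wrapped op; note that the second argument is the log of the prediction, `c = log(x)` (example values assumed):

```
import tensorflow.compat.v1 as tf

targets = tf.constant([1.0, 2.0, 3.0])                   # z
log_input = tf.math.log(tf.constant([0.5, 2.0, 3.0]))    # c = log(x)

loss = tf.nn.log_poisson_loss(targets, log_input)        # exp(c) - z * c
full = tf.nn.log_poisson_loss(targets, log_input,
                              compute_full_loss=True)    # adds the Stirling term
```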

Tensor log_poisson_loss(IGraphNodeBase targets, ValueTuple<PythonClassContainer, PythonClassContainer> log_input, bool compute_full_loss, string name)

Computes log Poisson loss given `log_input`.

Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.

For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson loss is

  -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

Note: the bracketed term is Stirling's Approximation for log(z!). It is invariant to x and does not affect optimization, though it matters for correct relative loss comparisons; it is only computed when `compute_full_loss == True`.
Parameters
IGraphNodeBase targets
A `Tensor` of the same type and shape as `log_input`.
ValueTuple<PythonClassContainer, PythonClassContainer> log_input
A `Tensor` of type `float32` or `float64`.
bool compute_full_loss
whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `log_input` with the componentwise log-Poisson losses.

Tensor log_poisson_loss(IGraphNodeBase targets, IGraphNodeBase log_input, bool compute_full_loss, string name)

Computes log Poisson loss given `log_input`.

Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.

For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson loss is

  -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

Note: the bracketed term is Stirling's Approximation for log(z!). It is invariant to x and does not affect optimization, though it matters for correct relative loss comparisons; it is only computed when `compute_full_loss == True`.
Parameters
IGraphNodeBase targets
A `Tensor` of the same type and shape as `log_input`.
IGraphNodeBase log_input
A `Tensor` of type `float32` or `float64`.
bool compute_full_loss
whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `log_input` with the componentwise log-Poisson losses.

Tensor log_poisson_loss(IGraphNodeBase targets, IndexedSlices log_input, bool compute_full_loss, string name)

Computes log Poisson loss given `log_input`.

Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.

For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson loss is

  -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

Note: the bracketed term is Stirling's Approximation for log(z!). It is invariant to x and does not affect optimization, though it matters for correct relative loss comparisons; it is only computed when `compute_full_loss == True`.
Parameters
IGraphNodeBase targets
A `Tensor` of the same type and shape as `log_input`.
IndexedSlices log_input
A `Tensor` of type `float32` or `float64`.
bool compute_full_loss
whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `log_input` with the componentwise log-Poisson losses.

object log_poisson_loss_dyn(object targets, object log_input, ImplicitContainer<T> compute_full_loss, object name)

Computes log Poisson loss given `log_input`.

Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.

For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson loss is

  -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

Note: the bracketed term is Stirling's Approximation for log(z!). It is invariant to x and does not affect optimization, though it matters for correct relative loss comparisons; it is only computed when `compute_full_loss == True`.
Parameters
object targets
A `Tensor` of the same type and shape as `log_input`.
object log_input
A `Tensor` of type `float32` or `float64`.
ImplicitContainer<T> compute_full_loss
whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of the same shape as `log_input` with the componentwise log-Poisson losses.

Tensor log_softmax(float32 logits, Nullable<int> axis, string name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
float32 logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
Nullable<int> axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
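A minimal Python sketch of the wrapped op (logits assumed for illustration):

```
import tensorflow.compat.v1 as tf

logits = tf.constant([[1.0, 2.0, 3.0]])
log_probs = tf.nn.log_softmax(logits, axis=-1)
# equals logits - log(reduce_sum(exp(logits), axis=-1, keepdims=True));
# exp(log_probs) sums to 1 along the last axis.
```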

Tensor log_softmax(ndarray logits, Nullable<int> axis, string name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
ndarray logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
Nullable<int> axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

Tensor log_softmax(IEnumerable<double> logits, Nullable<int> axis, string name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
IEnumerable<double> logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
Nullable<int> axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

Tensor log_softmax(IndexedSlices logits, Nullable<int> axis, string name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
IndexedSlices logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
Nullable<int> axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

Tensor log_softmax(IGraphNodeBase logits, Nullable<int> axis, string name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
IGraphNodeBase logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
Nullable<int> axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

Tensor log_softmax(PythonClassContainer logits, Nullable<int> axis, string name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
PythonClassContainer logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
Nullable<int> axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

Tensor log_softmax(double logits, Nullable<int> axis, string name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
double logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
Nullable<int> axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

object log_softmax_dyn(object logits, object axis, object name, object dim)

Computes log softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

For each batch `i` and class `j` we have

logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Parameters
object logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
object axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
object name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
object
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

object log_uniform_candidate_sampler(IGraphNodeBase true_classes, int num_true, int num_sampled, bool unique, int range_max, Nullable<int> seed, string name)

Samples a set of classes using a log-uniform (Zipfian) base distribution.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution for this operation is an approximately log-uniform or Zipfian distribution:

`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`

This sampler is useful when the target classes approximately follow such a distribution - for example, if the classes represent words in a lexicon sorted in decreasing order of frequency. If your classes are not ordered by decreasing frequency, do not use this op.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
int num_true
An `int`. The number of target classes per training example.
int num_sampled
An `int`. The number of classes to randomly sample.
bool unique
A `bool`. Determines whether all sampled classes in a batch are unique.
int range_max
An `int`. The number of possible classes.
Nullable<int> seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object
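A minimal Python sketch of the wrapped op, assuming class ids sorted by decreasing frequency (for example, a frequency-sorted vocabulary):

```
import tensorflow.compat.v1 as tf

true_classes = tf.constant([[0], [7]], dtype=tf.int64)  # [batch_size, num_true]
sampled, true_expected, sampled_expected = tf.nn.log_uniform_candidate_sampler(
    true_classes, num_true=1, num_sampled=4, unique=True, range_max=100, seed=7)
# low class ids are sampled most often under the Zipfian base distribution
```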

object log_uniform_candidate_sampler_dyn(object true_classes, object num_true, object num_sampled, object unique, object range_max, object seed, object name)

Samples a set of classes using a log-uniform (Zipfian) base distribution.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution for this operation is an approximately log-uniform or Zipfian distribution:

`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`

This sampler is useful when the target classes approximately follow such a distribution - for example, if the classes represent words in a lexicon sorted in decreasing order of frequency. If your classes are not ordered by decreasing frequency, do not use this op.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
object true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object num_true
An `int`. The number of target classes per training example.
object num_sampled
An `int`. The number of classes to randomly sample.
object unique
A `bool`. Determines whether all sampled classes in a batch are unique.
object range_max
An `int`. The number of possible classes.
object seed
An `int`. An operation-specific seed. Default is 0.
object name
A name for the operation (optional).
Returns
object

Tensor lrn(IGraphNodeBase input, int depth_radius, double bias, double alpha, double beta, string name)

Local Response Normalization.

The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,

sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. 4-D.
int depth_radius
An optional `int`. Defaults to `5`. 0-D. Half-width of the 1-D normalization window.
double bias
An optional `float`. Defaults to `1`. An offset (usually positive to avoid dividing by 0).
double alpha
An optional `float`. Defaults to `1`. A scale factor, usually positive.
double beta
An optional `float`. Defaults to `0.5`. An exponent.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.
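A minimal Python sketch of the wrapped op (input shape assumed for illustration):

```
import numpy as np
import tensorflow.compat.v1 as tf

x = tf.constant(np.random.rand(1, 2, 2, 8), dtype=tf.float32)  # 4-D, 8 channels
y = tf.nn.lrn(x, depth_radius=2, bias=1.0, alpha=1.0, beta=0.5)
# each activation is divided by (bias + alpha * sqr_sum) ** beta, where the
# squared sum runs over 2 * depth_radius + 1 = 5 adjacent channels
```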

Tensor lrn(IGraphNodeBase input, int depth_radius, double bias, int alpha, double beta, string name)

Local Response Normalization.

The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,

sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. 4-D.
int depth_radius
An optional `int`. Defaults to `5`. 0-D. Half-width of the 1-D normalization window.
double bias
An optional `float`. Defaults to `1`. An offset (usually positive to avoid dividing by 0).
int alpha
An optional `float`. Defaults to `1`. A scale factor, usually positive.
double beta
An optional `float`. Defaults to `0.5`. An exponent.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor lrn(IGraphNodeBase input, int depth_radius, int bias, double alpha, double beta, string name)

Local Response Normalization.

The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,

sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. 4-D.
int depth_radius
An optional `int`. Defaults to `5`. 0-D. Half-width of the 1-D normalization window.
int bias
An optional `float`. Defaults to `1`. An offset (usually positive to avoid dividing by 0).
double alpha
An optional `float`. Defaults to `1`. A scale factor, usually positive.
double beta
An optional `float`. Defaults to `0.5`. An exponent.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

Tensor lrn(IGraphNodeBase input, int depth_radius, int bias, int alpha, double beta, string name)

Local Response Normalization.

The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,

sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
Parameters
IGraphNodeBase input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. 4-D.
int depth_radius
An optional `int`. Defaults to `5`. 0-D. Half-width of the 1-D normalization window.
int bias
An optional `float`. Defaults to `1`. An offset (usually positive to avoid dividing by 0).
int alpha
An optional `float`. Defaults to `1`. A scale factor, usually positive.
double beta
An optional `float`. Defaults to `0.5`. An exponent.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `input`.

object lrn_dyn(object input, ImplicitContainer<T> depth_radius, ImplicitContainer<T> bias, ImplicitContainer<T> alpha, ImplicitContainer<T> beta, object name)

Local Response Normalization.

The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,

sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
Parameters
object input
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. 4-D.
ImplicitContainer<T> depth_radius
An optional `int`. Defaults to `5`. 0-D. Half-width of the 1-D normalization window.
ImplicitContainer<T> bias
An optional `float`. Defaults to `1`. An offset (usually positive to avoid dividing by 0).
ImplicitContainer<T> alpha
An optional `float`. Defaults to `1`. A scale factor, usually positive.
ImplicitContainer<T> beta
An optional `float`. Defaults to `0.5`. An exponent.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `input`.

Tensor max_pool(IGraphNodeBase value, ValueTuple<int, int, object, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.
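A minimal Python sketch of the wrapped op; the 4x4 single-channel image is assumed for illustration:

```
import numpy as np
import tensorflow.compat.v1 as tf

images = tf.constant(np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1))  # NHWC
pooled = tf.nn.max_pool(images, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                        padding="VALID")
# pooled has shape [1, 2, 2, 1]; each output is the max of a 2x2 window
```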

Tensor max_pool(IEnumerable<object> value, ValueTuple<int, int, object, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, int ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, int ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, ValueTuple<int, int, object, object> ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, int ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, IEnumerable<int> ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, IEnumerable<int> ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, int ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, ValueTuple<int, int, object, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, ValueTuple<int, int, object, object> ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, IEnumerable<int> ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, ValueTuple<int, int, object, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, ValueTuple<int, int, object, object> ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, ValueTuple<int, int, object, object> ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, int ksize, object strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, IEnumerable<int> ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, IEnumerable<int> ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, IEnumerable<int> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, int ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, IEnumerable<int> ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, IEnumerable<int> ksize, object strides, object padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, int ksize, object strides, PythonClassContainer padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, ValueTuple<int, int, object, object> ksize, object strides, object padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(ndarray value, ValueTuple<int, int, object, object> ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
ndarray value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IGraphNodeBase value, int ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
IGraphNodeBase value
A 4-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, ValueTuple<int, int, object, object> ksize, PythonClassContainer strides, object padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool(IEnumerable<object> value, ValueTuple<int, int, object, object> ksize, PythonClassContainer strides, PythonClassContainer padding, string data_format, string name, object input)

Performs the max pooling on the input.
Parameters
IEnumerable<object> value
A 4-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, int, object, object> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
PythonClassContainer strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
PythonClassContainer padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
object input
Alias for value.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

object max_pool_dyn(object value, object ksize, object strides, object padding, ImplicitContainer<T> data_format, object name, object input)

Performs the max pooling on the input.
Parameters
object value
A 4-D `Tensor` of the format specified by `data_format`.
object ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
object name
Optional name for the operation.
object input
Alias for value.
Returns
object
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool_v2(ndarray input, int ksize, int strides, string padding, object data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
int ksize
An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.
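A sketch of the same operation through the N-D `max_pool_v2` signature, assuming the TensorFlow 2.x Python API (where `tf.nn.max_pool` follows this signature); a scalar `ksize`/`strides` is broadcast over the spatial dimensions.
```python
import tensorflow as tf  # assumes TF 2.x, where tf.nn.max_pool uses the v2 signature

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # NHWC, so N=2

# A scalar ksize/strides applies to every spatial dimension.
y = tf.nn.max_pool(x, ksize=2, strides=2, padding='VALID')
print(y.shape)  # (1, 2, 2, 1)
```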

Tensor max_pool_v2(IGraphNodeBase input, int ksize, int strides, string padding, object data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
int ksize
An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

object max_pool_v2_dyn(object input, object ksize, object strides, object padding, object data_format, object name)

Performs the max pooling on the input.
Parameters
object input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
object ksize
An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object data_format
A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
object name
Optional name for the operation.
Returns
object
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

object max_pool_with_argmax(object input, object ksize, object strides, object padding, string data_format, object Targmax, string name, object output_dtype, bool include_batch_in_index)

Performs max pooling on the input and outputs both max values and indices.

The indices in `argmax` are flattened, so that a maximum value at position `[b, y, x, c]` becomes flattened index: `(y * width + x) * channels + c` if `include_batch_in_index` is False; `((b * height + y) * width + x) * channels + c` if `include_batch_in_index` is True.

The indices returned are always in `[0, height) x [0, width)` before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 4-D with shape `[batch, height, width, channels]`. Input to pool over.
object ksize
A list of `ints` that has length `>= 4`. The size of the window for each dimension of the input tensor.
object strides
A list of `ints` that has length `>= 4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
string data_format
object Targmax
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
string name
A name for the operation (optional).
object output_dtype
bool include_batch_in_index
An optional `bool`. Defaults to `False`. Whether to include batch dimension in flattened index of `argmax`.
Returns
object
A tuple of `Tensor` objects (output, argmax).
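A sketch in the TensorFlow Python API showing the flattened indices with the default `include_batch_in_index=False`; the input values are assumptions for the sketch.
```python
import tensorflow as tf  # assumes TF 2.x eager execution

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

output, argmax = tf.nn.max_pool_with_argmax(
    x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

# Each index is (y * width + x) * channels + c relative to its input image.
print(output.numpy().ravel())  # [ 5.  7. 13. 15.]
print(argmax.numpy().ravel())  # [ 5  7 13 15]
```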

object max_pool_with_argmax_dyn(object input, object ksize, object strides, object padding, ImplicitContainer<T> data_format, object Targmax, object name, object output_dtype, ImplicitContainer<T> include_batch_in_index)

Performs max pooling on the input and outputs both max values and indices.

The indices in `argmax` are flattened, so that a maximum value at position `[b, y, x, c]` becomes flattened index: `(y * width + x) * channels + c` if `include_batch_in_index` is False; `((b * height + y) * width + x) * channels + c` if `include_batch_in_index` is True.

The indices returned are always in `[0, height) x [0, width)` before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.
Parameters
object input
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 4-D with shape `[batch, height, width, channels]`. Input to pool over.
object ksize
A list of `ints` that has length `>= 4`. The size of the window for each dimension of the input tensor.
object strides
A list of `ints` that has length `>= 4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
ImplicitContainer<T> data_format
object Targmax
An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64.
object name
A name for the operation (optional).
object output_dtype
ImplicitContainer<T> include_batch_in_index
An optional `bool`. Defaults to `False`. Whether to include batch dimension in flattened index of `argmax`.
Returns
object
A tuple of `Tensor` objects (output, argmax).

Tensor max_pool1d(IGraphNodeBase input, int ksize, int strides, string padding, string data_format, string name)

Performs the max pooling on the input.

Note: internally this op reshapes the input and uses the underlying 2-D operation.
Parameters
IGraphNodeBase input
A 3-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NWC", "NCW". Defaults to "NWC".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.
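A sketch of the 1-D variant, assuming the TensorFlow 2.x Python API, pooling over the single spatial (width) dimension of an `NWC` tensor.
```python
import tensorflow as tf  # assumes TF 2.x eager execution

x = tf.reshape(tf.range(8, dtype=tf.float32), [1, 8, 1])  # NWC layout

y = tf.nn.max_pool1d(x, ksize=2, strides=2, padding='VALID')
print(tf.squeeze(y).numpy())  # [1. 3. 5. 7.]
```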

Tensor max_pool1d(ndarray input, int ksize, int strides, string padding, string data_format, string name)

Performs the max pooling on the input.

Note: internally this op reshapes the input and uses the underlying 2-D operation.
Parameters
ndarray input
A 3-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NWC", "NCW". Defaults to "NWC".
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

object max_pool1d_dyn(object input, object ksize, object strides, object padding, ImplicitContainer<T> data_format, object name)

Performs the max pooling on the input.

Note: internally this op reshapes the input and uses the underlying 2-D operation.
Parameters
object input
A 3-D `Tensor` of the format specified by `data_format`.
object ksize
An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
An optional string from: "NWC", "NCW". Defaults to "NWC".
object name
A name for the operation (optional).
Returns
object
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool2d(IGraphNodeBase input, IEnumerable<int> ksize, IEnumerable<int> strides, string padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 4-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
string name
Optional name for the operation.
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.
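A sketch assuming the TensorFlow 2.x Python API; with `'SAME'` padding each spatial dimension shrinks to ceil(size / stride), so 28 becomes 14 here (the input shape is an assumption for the sketch).
```python
import tensorflow as tf  # assumes TF 2.x eager execution

x = tf.random.uniform([2, 28, 28, 3])  # NHWC batch of images

y = tf.nn.max_pool2d(x, ksize=[2, 2], strides=[2, 2], padding='SAME')
print(y.shape)  # (2, 14, 14, 3)
```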

object max_pool2d_dyn(object input, object ksize, object strides, object padding, ImplicitContainer<T> data_format, object name)

Performs the max pooling on the input.
Parameters
object input
A 4-D `Tensor` of the format specified by `data_format`.
object ksize
An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor.
object strides
An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
ImplicitContainer<T> data_format
A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
object name
Optional name for the operation.
Returns
object
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, ValueTuple<int, object, object> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.
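A sketch of the 5-D variant with the default `NDHWC` layout, assuming the TensorFlow 2.x Python API (the volume shape is an assumption for the sketch).
```python
import tensorflow as tf  # assumes TF 2.x eager execution

x = tf.random.uniform([1, 8, 8, 8, 1])  # NDHWC volume

# A scalar ksize/strides applies to depth, height, and width alike.
y = tf.nn.max_pool3d(x, ksize=2, strides=2, padding='VALID')
print(y.shape)  # (1, 4, 4, 4, 1)
```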

Tensor max_pool3d(ndarray input, ValueTuple<int, object, object> ksize, int strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, int ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, int ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, IEnumerable<int> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, ValueTuple<int, object, object> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, int ksize, int strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, IEnumerable<int> ksize, int strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, int ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, IEnumerable<int> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, IEnumerable<int> ksize, int strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, ValueTuple<int, object, object> ksize, int strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, int ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, ValueTuple<int, object, object> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(ndarray input, IEnumerable<int> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
ndarray input
A 5-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, IEnumerable<int> ksize, ValueTuple<int, object, object> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
IEnumerable<int> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
ValueTuple<int, object, object> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, ValueTuple<int, object, object> ksize, IEnumerable<int> strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
ValueTuple<int, object, object> ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
IEnumerable<int> strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

Tensor max_pool3d(IGraphNodeBase input, int ksize, int strides, object padding, string data_format, string name)

Performs the max pooling on the input.
Parameters
IGraphNodeBase input
A 5-D `Tensor` of the format specified by `data_format`.
int ksize
An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor.
int strides
An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
string data_format
An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of format specified by `data_format`. The max pooled output tensor.

ValueTuple<object, object> moments(IGraphNodeBase x, IEnumerable<int> axes, object shift, string name, Nullable<bool> keep_dims, Nullable<bool> keepdims)

Calculate the mean and variance of `x`.

The mean and variance are calculated by aggregating the contents of `x` across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean and variance of a vector.

Note: shift is currently not used; the true mean is computed and used.

When using these moments for batch normalization (see tf.nn.batch_normalization):

* for so-called "global normalization", used with convolutional filters with shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`.
* for simple batch normalization pass `axes=[0]` (batch only).
Parameters
IGraphNodeBase x
A `Tensor`.
IEnumerable<int> axes
Array of ints. Axes along which to compute mean and variance.
object shift
Not used in the current implementation.
string name
Name used to scope the operations that compute the moments.
Nullable<bool> keep_dims
If true, produce moments with the same dimensionality as the input.
Nullable<bool> keepdims
Alias to keep_dims.
Returns
ValueTuple<object, object>
Two `Tensor` objects: `mean` and `variance`.
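A sketch in the TensorFlow Python API: reducing a 2-D tensor over `axes=[0]` yields the per-column mean and (biased) variance.
```python
import tensorflow as tf  # assumes TF 2.x eager execution

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])

# Aggregate across axis 0: one mean/variance pair per column.
mean, variance = tf.nn.moments(x, axes=[0])
print(mean.numpy())      # [2. 3.]
print(variance.numpy())  # [1. 1.]
```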

object moments_dyn(object x, object axes, object shift, object name, object keep_dims, object keepdims)

Calculate the mean and variance of `x`.

The mean and variance are calculated by aggregating the contents of `x` across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean and variance of a vector.

Note: shift is currently not used; the true mean is computed and used.

When using these moments for batch normalization (see tf.nn.batch_normalization):

* for so-called "global normalization", used with convolutional filters with shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`.
* for simple batch normalization pass `axes=[0]` (batch only).
Parameters
object x
A `Tensor`.
object axes
Array of ints. Axes along which to compute mean and variance.
object shift
Not used in the current implementation.
object name
Name used to scope the operations that compute the moments.
object keep_dims
If true, produce moments with the same dimensionality as the input.
object keepdims
Alias to keep_dims.
Returns
object
Two `Tensor` objects: `mean` and `variance`.

Tensor nce_loss(IEnumerable<IGraphNodeBase> weights, IGraphNodeBase biases, IGraphNodeBase labels, IGraphNodeBase inputs, int num_sampled, int num_classes, int num_true, object sampled_values, bool remove_accidental_hits, string partition_strategy, string name)

Computes and returns the noise-contrastive estimation training loss.

See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). Also see our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)

A common use case is to use this method for training, and calculate the full sigmoid loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below.

Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see tf.random.log_uniform_candidate_sampler.

Note: In the case where `num_true` > 1, we assign to each target class the target probability 1 / `num_true` so that the target probabilities sum to 1 per-example.

Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class.
Parameters
IEnumerable<IGraphNodeBase> weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings.
IGraphNodeBase biases
A `Tensor` of shape `[num_classes]`. The class biases.
IGraphNodeBase labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
IGraphNodeBase inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
int num_sampled
An `int`. The number of negative classes to randomly sample per batch. This single sample of negative classes is evaluated for each element in the batch.
int num_classes
An `int`. The number of possible classes.
int num_true
An `int`. The number of target classes per training example.
object sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
bool remove_accidental_hits
A `bool`. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to `True`, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf). Default is False.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
A name for the operation (optional).
Returns
Tensor
A `batch_size` 1-D tensor of per-example NCE losses.
Show Example
if mode == "train":
  loss = tf.nn.nce_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.sigmoid_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)
  loss = tf.reduce_sum(loss, axis=1)

Tensor nce_loss(IGraphNodeBase weights, IGraphNodeBase biases, IGraphNodeBase labels, IGraphNodeBase inputs, int num_sampled, int num_classes, int num_true, object sampled_values, bool remove_accidental_hits, string partition_strategy, string name)

Computes and returns the noise-contrastive estimation training loss.

See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). Also see our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)

A common use case is to use this method for training, and calculate the full sigmoid loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below.

Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see tf.random.log_uniform_candidate_sampler.

Note: In the case where `num_true` > 1, we assign to each target class the target probability 1 / `num_true` so that the target probabilities sum to 1 per-example.

Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class.
Parameters
IGraphNodeBase weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings.
IGraphNodeBase biases
A `Tensor` of shape `[num_classes]`. The class biases.
IGraphNodeBase labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
IGraphNodeBase inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
int num_sampled
An `int`. The number of negative classes to randomly sample per batch. This single sample of negative classes is evaluated for each element in the batch.
int num_classes
An `int`. The number of possible classes.
int num_true
An `int`. The number of target classes per training example.
object sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
bool remove_accidental_hits
A `bool`. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to `True`, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf). Default is False.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
A name for the operation (optional).
Returns
Tensor
A `batch_size` 1-D tensor of per-example NCE losses.
Show Example
if mode == "train":
  loss = tf.nn.nce_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.sigmoid_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)
  loss = tf.reduce_sum(loss, axis=1)

object nce_loss_dyn(object weights, object biases, object labels, object inputs, object num_sampled, object num_classes, ImplicitContainer<T> num_true, object sampled_values, ImplicitContainer<T> remove_accidental_hits, ImplicitContainer<T> partition_strategy, ImplicitContainer<T> name)

Computes and returns the noise-contrastive estimation training loss.

See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). Also see our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)

A common use case is to use this method for training, and calculate the full sigmoid loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below.

Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see tf.random.log_uniform_candidate_sampler.

Note: In the case where `num_true` > 1, we assign to each target class the target probability 1 / `num_true` so that the target probabilities sum to 1 per-example.

Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class.
Parameters
object weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings.
object biases
A `Tensor` of shape `[num_classes]`. The class biases.
object labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
object num_sampled
An `int`. The number of negative classes to randomly sample per batch. This single sample of negative classes is evaluated for each element in the batch.
object num_classes
An `int`. The number of possible classes.
ImplicitContainer<T> num_true
An `int`. The number of target classes per training example.
object sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
ImplicitContainer<T> remove_accidental_hits
A `bool`. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to `True`, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf). Default is False.
ImplicitContainer<T> partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
ImplicitContainer<T> name
A name for the operation (optional).
Returns
object
A `batch_size` 1-D tensor of per-example NCE losses.
Show Example
if mode == "train":
  loss = tf.nn.nce_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.sigmoid_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)
  loss = tf.reduce_sum(loss, axis=1)

object normalize_moments(IGraphNodeBase counts, IGraphNodeBase mean_ss, IGraphNodeBase variance_ss, object shift, string name)

Calculate the mean and variance based on the sufficient statistics.
Parameters
IGraphNodeBase counts
A `Tensor` containing the total count of the data (one value).
IGraphNodeBase mean_ss
A `Tensor` containing the mean sufficient statistics: the (possibly shifted) sum of the elements to average over.
IGraphNodeBase variance_ss
A `Tensor` containing the variance sufficient statistics: the (possibly shifted) squared sum of the data to compute the variance over.
object shift
A `Tensor` containing the value by which the data is shifted for numerical stability, or `None` if no shift was performed.
string name
Name used to scope the operations that compute the moments.
Returns
object
Two `Tensor` objects: `mean` and `variance`.
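
For illustration, here is a minimal sketch (assuming the TF 1.x Python API, with hypothetical example values) of recovering moments from shifted sufficient statistics:

```
import tensorflow as tf

# Sufficient statistics of x = [1, 2, 3, 4], computed with shift = 2.
x = tf.constant([1.0, 2.0, 3.0, 4.0])
shift = tf.constant(2.0)
counts = tf.cast(tf.size(x), tf.float32)           # total count: 4.0
mean_ss = tf.reduce_sum(x - shift)                 # shifted sum: 2.0
variance_ss = tf.reduce_sum(tf.square(x - shift))  # shifted squared sum: 6.0
mean, variance = tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift)
# mean -> 2.5, variance -> 1.25 (population variance of x)
```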

object normalize_moments_dyn(object counts, object mean_ss, object variance_ss, object shift, object name)

Calculate the mean and variance based on the sufficient statistics.
Parameters
object counts
A `Tensor` containing the total count of the data (one value).
object mean_ss
A `Tensor` containing the mean sufficient statistics: the (possibly shifted) sum of the elements to average over.
object variance_ss
A `Tensor` containing the variance sufficient statistics: the (possibly shifted) squared sum of the data to compute the variance over.
object shift
A `Tensor` containing the value by which the data is shifted for numerical stability, or `None` if no shift was performed.
object name
Name used to scope the operations that compute the moments.
Returns
object
Two `Tensor` objects: `mean` and `variance`.

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, ndarray dilation_rate, ValueTuple<object> strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
ndarray dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
ValueTuple<object> strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, ValueTuple<object> dilation_rate, ValueTuple<object> strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
ValueTuple<object> dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
ValueTuple<object> strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, ndarray dilation_rate, ndarray strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
ndarray dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
ndarray strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, ValueTuple<object> dilation_rate, IEnumerable<int> strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
ValueTuple<object> dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, ValueTuple<object> dilation_rate, ndarray strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
ValueTuple<object> dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
ndarray strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, IEnumerable<int> dilation_rate, ValueTuple<object> strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
ValueTuple<object> strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, IEnumerable<int> dilation_rate, IEnumerable<int> strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, IEnumerable<int> dilation_rate, ndarray strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
ndarray strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool(IGraphNodeBase input, IEnumerable<object> window_shape, string pooling_type, string padding, ndarray dilation_rate, IEnumerable<int> strides, string name, string data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
IGraphNodeBase input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
IEnumerable<object> window_shape
Sequence of N ints >= 1.
string pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
string padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
ndarray dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
IEnumerable<int> strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
string name
Optional. Name of the op.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

object pool_dyn(object input, object window_shape, object pooling_type, object padding, object dilation_rate, object strides, object name, object data_format, object dilations)

Performs an N-D pooling operation.

In the case that `data_format` does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels:

``` output[b, x[0],..., x[N-1], c] = REDUCE_{z[0],..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], ```

where the reduction function REDUCE depends on the value of `pooling_type`, and pad_before is defined based on the value of `padding` as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions.

In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:

``` pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) ```
Parameters
object input
Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data_format starts with "NC". Pooling happens over the spatial dimensions only.
object window_shape
Sequence of N ints >= 1.
object pooling_type
Specifies pooling operation, must be "AVG" or "MAX".
object padding
The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details.
object dilation_rate
Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1.
object strides
Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
object name
Optional. Name of the op.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
object dilations
Alias for dilation_rate
Returns
object
Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels]

if data_format is None or does not start with "NC", or

[batch_size, num_channels] + output_spatial_shape

if data_format starts with "NC", where `output_spatial_shape` depends on the value of padding:

If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]).

ValueTuple<object, object, object> raw_rnn(LSTMCell cell, PythonFunctionContainer loop_fn, Nullable<int> parallel_iterations, bool swap_memory, string scope)

Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.

**NOTE: This method is still in testing, and the API may change.**

This function is a more primitive version of `dynamic_rnn` that provides more direct access to the inputs each iteration. It also provides more control over when to start and finish reading the sequence, and what to emit for the output.

For example, it can be used to implement the dynamic decoder of a seq2seq model.

Instead of working with `Tensor` objects, most operations work with `TensorArray` objects directly.

The operation of `raw_rnn` is, in pseudo-code, essentially the loop shown under "Show Example" below, with the additional properties that output and state may be (possibly nested) tuples, as determined by `cell.output_size` and `cell.state_size`; as a result, the final `state` and `emit_ta` may themselves be tuples.

A simple implementation of `dynamic_rnn` via `raw_rnn` looks roughly like the following sketch (an illustrative reconstruction assuming the TF 1.x Python API; `max_time`, `batch_size`, `input_depth`, and `num_units` are hypothetical Python ints defined elsewhere):
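
```
import tensorflow as tf

# Illustrative sketch only (TF 1.x Python API assumed); `max_time`,
# `batch_size`, `input_depth`, and `num_units` are hypothetical Python ints.
inputs = tf.placeholder(shape=(max_time, batch_size, input_depth),
                        dtype=tf.float32)
sequence_length = tf.placeholder(shape=(batch_size,), dtype=tf.int32)
inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)
inputs_ta = inputs_ta.unstack(inputs)

cell = tf.nn.rnn_cell.LSTMCell(num_units)

def loop_fn(time, cell_output, cell_state, loop_state):
  emit_output = cell_output  # None on the first call (time == 0)
  if cell_output is None:    # time == 0: supply the initial cell state
    next_cell_state = cell.zero_state(batch_size, tf.float32)
  else:
    next_cell_state = cell_state
  elements_finished = (time >= sequence_length)
  finished = tf.reduce_all(elements_finished)
  # Feed zeros once every sequence in the batch has finished.
  next_input = tf.cond(
      finished,
      lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),
      lambda: inputs_ta.read(time))
  return (elements_finished, next_input, next_cell_state,
          emit_output, None)  # no loop_state is needed here

outputs_ta, final_state, _ = tf.nn.raw_rnn(cell, loop_fn)
outputs = outputs_ta.stack()  # shape [max_time, batch_size, num_units]
```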
Parameters
LSTMCell cell
An instance of RNNCell.
PythonFunctionContainer loop_fn
A callable that takes inputs `(time, cell_output, cell_state, loop_state)` and returns the tuple `(finished, next_input, next_cell_state, emit_output, next_loop_state)`. Here `time` is an int32 scalar `Tensor`, `cell_output` is a `Tensor` or (possibly nested) tuple of tensors as determined by `cell.output_size`, and `cell_state` is a `Tensor` or (possibly nested) tuple of tensors, as determined by the `loop_fn` on its first call (and should match `cell.state_size`). The outputs are: `finished`, a boolean `Tensor` of shape `[batch_size]`, `next_input`: the next input to feed to `cell`, `next_cell_state`: the next state to feed to `cell`, and `emit_output`: the output to store for this iteration. Note that `emit_output` should be a `Tensor` or (possibly nested) tuple of tensors which is aggregated in the `emit_ta` inside the `while_loop`. For the first call to `loop_fn`, the `emit_output` corresponds to the `emit_structure` which is then used to determine the size of the `zero_tensor` for the `emit_ta` (defaults to `cell.output_size`). For the subsequent calls to the `loop_fn`, the `emit_output` corresponds to the actual output tensor that is to be aggregated in the `emit_ta`. The parameter `cell_state` and output `next_cell_state` may be either a single or (possibly nested) tuple of tensors. The parameter `loop_state` and output `next_loop_state` may be either a single or (possibly nested) tuple of `Tensor` and `TensorArray` objects. This last parameter may be ignored by `loop_fn` and the return value may be `None`. If it is not `None`, then the `loop_state` will be propagated through the RNN loop, for use purely by `loop_fn` to keep track of its own state. The `next_loop_state` parameter returned may be `None`. The first call to `loop_fn` will be `time = 0`, `cell_output = None`, `cell_state = None`, and `loop_state = None`. For this call: The `next_cell_state` value should be the value with which to initialize the cell's state. It may be a final state from a previous RNN or it may be the output of `cell.zero_state()`. It should be a (possibly nested) tuple structure of tensors. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of appropriate type and shape `[batch_size] + cell.state_size`. If `cell.state_size` is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes. The `emit_output` value may be either `None` or a (possibly nested) tuple structure of tensors, e.g., `(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`. If this first `emit_output` return value is `None`, then the `emit_ta` result of `raw_rnn` will have the same structure and dtypes as `cell.output_size`. Otherwise `emit_ta` will have the same structure, shapes (prepended with a `batch_size` dimension), and dtypes as `emit_output`. The actual values returned for `emit_output` at this initializing call are ignored. Note, this emit structure must be consistent across all time steps.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object, object>
A tuple `(emit_ta, final_state, final_loop_state)` where:

`emit_ta`: The RNN output `TensorArray`. If `loop_fn` returns a (possibly nested) set of Tensors for `emit_output` during initialization (inputs `time = 0`, `cell_output = None`, and `loop_state = None`), then `emit_ta` will have the same structure, dtypes, and shapes as `emit_output`. If `loop_fn` returns `emit_output = None` during this call, the structure of `cell.output_size` is used: if `cell.output_size` is a (possibly nested) tuple of integers or `TensorShape` objects, then `emit_ta` will be a tuple having the same structure as `cell.output_size`, containing TensorArrays whose elements' shapes correspond to the shape data in `cell.output_size`.

`final_state`: The final cell state. If `cell.state_size` is an int, this will be shaped `[batch_size, cell.state_size]`. If it is a `TensorShape`, this will be shaped `[batch_size] + cell.state_size`. If it is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes.

`final_loop_state`: The final loop state as returned by `loop_fn`.
Show Example
time = tf.constant(0, dtype=tf.int32)
(finished, next_input, initial_state, emit_structure, loop_state) = loop_fn(
    time=time, cell_output=None, cell_state=None, loop_state=None)
emit_ta = tf.TensorArray(dynamic_size=True, dtype=initial_state.dtype)
state = initial_state
while not all(finished):
  (output, cell_state) = cell(next_input, state)
  (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
      time=time + 1, cell_output=output, cell_state=cell_state,
      loop_state=loop_state)
  # Emit zeros and copy forward state for minibatch entries that are finished.
  state = tf.where(finished, state, next_state)
  emit = tf.where(finished, tf.zeros_like(emit_structure), emit)
  emit_ta = emit_ta.write(time, emit)
  # Once a minibatch entry is marked as finished, it stays finished.
  finished = tf.logical_or(finished, next_finished)
  time += 1
return (emit_ta, state, loop_state)

ValueTuple<object, object, object> raw_rnn(LSTMCell cell, PythonFunctionContainer loop_fn, Nullable<int> parallel_iterations, bool swap_memory, VariableScope scope)

Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.

**NOTE: This method is still in testing, and the API may change.**

This function is a more primitive version of `dynamic_rnn` that provides more direct access to the inputs each iteration. It also provides more control over when to start and finish reading the sequence, and what to emit for the output.

For example, it can be used to implement the dynamic decoder of a seq2seq model.

Instead of working with `Tensor` objects, most operations work with `TensorArray` objects directly.

The operation of `raw_rnn` is, in pseudo-code, essentially the loop shown under "Show Example" below, with the additional properties that output and state may be (possibly nested) tuples, as determined by `cell.output_size` and `cell.state_size`; as a result, the final `state` and `emit_ta` may themselves be tuples.

A simple implementation of `dynamic_rnn` via `raw_rnn` is sketched under the first `raw_rnn` overload above.
Parameters
LSTMCell cell
An instance of RNNCell.
PythonFunctionContainer loop_fn
A callable that takes inputs `(time, cell_output, cell_state, loop_state)` and returns the tuple `(finished, next_input, next_cell_state, emit_output, next_loop_state)`. Here `time` is an int32 scalar `Tensor`, `cell_output` is a `Tensor` or (possibly nested) tuple of tensors as determined by `cell.output_size`, and `cell_state` is a `Tensor` or (possibly nested) tuple of tensors, as determined by the `loop_fn` on its first call (and should match `cell.state_size`). The outputs are: `finished`, a boolean `Tensor` of shape `[batch_size]`, `next_input`: the next input to feed to `cell`, `next_cell_state`: the next state to feed to `cell`, and `emit_output`: the output to store for this iteration. Note that `emit_output` should be a `Tensor` or (possibly nested) tuple of tensors which is aggregated in the `emit_ta` inside the `while_loop`. For the first call to `loop_fn`, the `emit_output` corresponds to the `emit_structure` which is then used to determine the size of the `zero_tensor` for the `emit_ta` (defaults to `cell.output_size`). For the subsequent calls to the `loop_fn`, the `emit_output` corresponds to the actual output tensor that is to be aggregated in the `emit_ta`. The parameter `cell_state` and output `next_cell_state` may be either a single or (possibly nested) tuple of tensors. The parameter `loop_state` and output `next_loop_state` may be either a single or (possibly nested) tuple of `Tensor` and `TensorArray` objects. This last parameter may be ignored by `loop_fn` and the return value may be `None`. If it is not `None`, then the `loop_state` will be propagated through the RNN loop, for use purely by `loop_fn` to keep track of its own state. The `next_loop_state` parameter returned may be `None`. The first call to `loop_fn` will be `time = 0`, `cell_output = None`, `cell_state = None`, and `loop_state = None`. For this call: The `next_cell_state` value should be the value with which to initialize the cell's state. It may be a final state from a previous RNN or it may be the output of `cell.zero_state()`. It should be a (possibly nested) tuple structure of tensors. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of appropriate type and shape `[batch_size] + cell.state_size`. If `cell.state_size` is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes. The `emit_output` value may be either `None` or a (possibly nested) tuple structure of tensors, e.g., `(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`. If this first `emit_output` return value is `None`, then the `emit_ta` result of `raw_rnn` will have the same structure and dtypes as `cell.output_size`. Otherwise `emit_ta` will have the same structure, shapes (prepended with a `batch_size` dimension), and dtypes as `emit_output`. The actual values returned for `emit_output` at this initializing call are ignored. Note, this emit structure must be consistent across all time steps.
Nullable<int> parallel_iterations
(Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
bool swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<object, object, object>
A tuple `(emit_ta, final_state, final_loop_state)` where:

`emit_ta`: The RNN output `TensorArray`. If `loop_fn` returns a (possibly nested) set of Tensors for `emit_output` during initialization (inputs `time = 0`, `cell_output = None`, and `loop_state = None`), then `emit_ta` will have the same structure, dtypes, and shapes as `emit_output`. If `loop_fn` returns `emit_output = None` during this call, the structure of `cell.output_size` is used: if `cell.output_size` is a (possibly nested) tuple of integers or `TensorShape` objects, then `emit_ta` will be a tuple having the same structure as `cell.output_size`, containing TensorArrays whose elements' shapes correspond to the shape data in `cell.output_size`.

`final_state`: The final cell state. If `cell.state_size` is an int, this will be shaped `[batch_size, cell.state_size]`. If it is a `TensorShape`, this will be shaped `[batch_size] + cell.state_size`. If it is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes.

`final_loop_state`: The final loop state as returned by `loop_fn`.
Show Example
time = tf.constant(0, dtype=tf.int32)
(finished, next_input, initial_state, emit_structure, loop_state) = loop_fn(
    time=time, cell_output=None, cell_state=None, loop_state=None)
emit_ta = tf.TensorArray(dynamic_size=True, dtype=initial_state.dtype)
state = initial_state
while not all(finished):
  (output, cell_state) = cell(next_input, state)
  (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
      time=time + 1, cell_output=output, cell_state=cell_state,
      loop_state=loop_state)
  # Emit zeros and copy forward state for minibatch entries that are finished.
  state = tf.where(finished, state, next_state)
  emit = tf.where(finished, tf.zeros_like(emit_structure), emit)
  emit_ta = emit_ta.write(time, emit)
  # Once a minibatch entry is marked as finished, it stays finished.
  finished = tf.logical_or(finished, next_finished)
  time += 1
return (emit_ta, state, loop_state)

object raw_rnn_dyn(object cell, object loop_fn, object parallel_iterations, ImplicitContainer<T> swap_memory, object scope)

Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.

**NOTE: This method is still in testing, and the API may change.**

This function is a more primitive version of `dynamic_rnn` that provides more direct access to the inputs each iteration. It also provides more control over when to start and finish reading the sequence, and what to emit for the output.

For example, it can be used to implement the dynamic decoder of a seq2seq model.

Instead of working with `Tensor` objects, most operations work with `TensorArray` objects directly.

The operation of `raw_rnn` is, in pseudo-code, essentially the loop shown under "Show Example" below, with the additional properties that output and state may be (possibly nested) tuples, as determined by `cell.output_size` and `cell.state_size`; as a result, the final `state` and `emit_ta` may themselves be tuples.

A simple implementation of `dynamic_rnn` via `raw_rnn` is sketched under the first `raw_rnn` overload above.
Parameters
object cell
An instance of RNNCell.
object loop_fn
A callable that takes inputs `(time, cell_output, cell_state, loop_state)` and returns the tuple `(finished, next_input, next_cell_state, emit_output, next_loop_state)`. Here `time` is an int32 scalar `Tensor`, `cell_output` is a `Tensor` or (possibly nested) tuple of tensors as determined by `cell.output_size`, and `cell_state` is a `Tensor` or (possibly nested) tuple of tensors, as determined by the `loop_fn` on its first call (and should match `cell.state_size`). The outputs are: `finished`, a boolean `Tensor` of shape `[batch_size]`, `next_input`: the next input to feed to `cell`, `next_cell_state`: the next state to feed to `cell`, and `emit_output`: the output to store for this iteration. Note that `emit_output` should be a `Tensor` or (possibly nested) tuple of tensors which is aggregated in the `emit_ta` inside the `while_loop`. For the first call to `loop_fn`, the `emit_output` corresponds to the `emit_structure` which is then used to determine the size of the `zero_tensor` for the `emit_ta` (defaults to `cell.output_size`). For the subsequent calls to the `loop_fn`, the `emit_output` corresponds to the actual output tensor that is to be aggregated in the `emit_ta`. The parameter `cell_state` and output `next_cell_state` may be either a single or (possibly nested) tuple of tensors. The parameter `loop_state` and output `next_loop_state` may be either a single or (possibly nested) tuple of `Tensor` and `TensorArray` objects. This last parameter may be ignored by `loop_fn` and the return value may be `None`. If it is not `None`, then the `loop_state` will be propagated through the RNN loop, for use purely by `loop_fn` to keep track of its own state. The `next_loop_state` parameter returned may be `None`. The first call to `loop_fn` will be `time = 0`, `cell_output = None`, `cell_state = None`, and `loop_state = None`. For this call: The `next_cell_state` value should be the value with which to initialize the cell's state. It may be a final state from a previous RNN or it may be the output of `cell.zero_state()`. It should be a (possibly nested) tuple structure of tensors. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of appropriate type and shape `[batch_size] + cell.state_size`. If `cell.state_size` is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes. The `emit_output` value may be either `None` or a (possibly nested) tuple structure of tensors, e.g., `(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`. If this first `emit_output` return value is `None`, then the `emit_ta` result of `raw_rnn` will have the same structure and dtypes as `cell.output_size`. Otherwise `emit_ta` will have the same structure, shapes (prepended with a `batch_size` dimension), and dtypes as `emit_output`. The actual values returned for `emit_output` at this initializing call are ignored. Note, this emit structure must be consistent across all time steps.
object parallel_iterations
(Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
ImplicitContainer<T> swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
object scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
object
A tuple `(emit_ta, final_state, final_loop_state)` where:

`emit_ta`: The RNN output `TensorArray`. If `loop_fn` returns a (possibly nested) set of Tensors for `emit_output` during initialization (inputs `time = 0`, `cell_output = None`, and `loop_state = None`), then `emit_ta` will have the same structure, dtypes, and shapes as `emit_output`. If `loop_fn` returns `emit_output = None` during this call, the structure of `cell.output_size` is used: if `cell.output_size` is a (possibly nested) tuple of integers or `TensorShape` objects, then `emit_ta` will be a tuple having the same structure as `cell.output_size`, containing TensorArrays whose elements' shapes correspond to the shape data in `cell.output_size`.

`final_state`: The final cell state. If `cell.state_size` is an int, this will be shaped `[batch_size, cell.state_size]`. If it is a `TensorShape`, this will be shaped `[batch_size] + cell.state_size`. If it is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes.

`final_loop_state`: The final loop state as returned by `loop_fn`.
Show Example
time = tf.constant(0, dtype=tf.int32)
(finished, next_input, initial_state, emit_structure, loop_state) = loop_fn(
    time=time, cell_output=None, cell_state=None, loop_state=None)
emit_ta = tf.TensorArray(dynamic_size=True, dtype=initial_state.dtype)
state = initial_state
while not all(finished):
  (output, cell_state) = cell(next_input, state)
  (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
      time=time + 1, cell_output=output, cell_state=cell_state,
      loop_state=loop_state)
  # Emit zeros and copy forward state for minibatch entries that are finished.
  state = tf.where(finished, state, next_state)
  emit = tf.where(finished, tf.zeros_like(emit_structure), emit)
  emit_ta = emit_ta.write(time, emit)
  # Once a minibatch entry is marked as finished, it stays finished.
  finished = tf.logical_or(finished, next_finished)
  time += 1
return (emit_ta, state, loop_state)

Tensor relu(IGraphNodeBase features, PythonFunctionContainer name)

Computes rectified linear: `max(features, 0)`.
Parameters
IGraphNodeBase features
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`.
PythonFunctionContainer name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `features`.

Tensor relu(IGraphNodeBase features, string name)

Computes rectified linear: `max(features, 0)`.
Parameters
IGraphNodeBase features
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `features`.

object relu_dyn(object features, object name)

Computes rectified linear: `max(features, 0)`.
Parameters
object features
A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `features`.
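
A quick usage sketch (assuming the TF 1.x Python API):

```
import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 3.0])
y = tf.nn.relu(x)  # elementwise max(x, 0) -> [0.0, 0.0, 0.0, 3.0]
```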

Tensor relu_layer(object x, object weights, object biases, string name)

Computes Relu(x * weight + biases).
Parameters
object x
a 2D tensor. Dimensions typically: batch, in_units
object weights
a 2D tensor. Dimensions typically: in_units, out_units
object biases
a 1D tensor. Dimensions: out_units
string name
A name for the operation (optional). If not specified "nn_relu_layer" is used.
Returns
Tensor
A 2-D Tensor computing relu(matmul(x, weights) + biases). Dimensions typically: batch, out_units.

object relu_layer_dyn(object x, object weights, object biases, object name)

Computes Relu(x * weight + biases).
Parameters
object x
a 2D tensor. Dimensions typically: batch, in_units
object weights
a 2D tensor. Dimensions typically: in_units, out_units
object biases
a 1D tensor. Dimensions: out_units
object name
A name for the operation (optional). If not specified "nn_relu_layer" is used.
Returns
object
A 2-D Tensor computing relu(matmul(x, weights) + biases). Dimensions typically: batch, out_units.
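
A small sketch of the computation (assuming the TF 1.x Python API, with illustrative values):

```
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])               # [batch, in_units]
w = tf.constant([[1.0, -1.0], [0.5, 2.0]])  # [in_units, out_units]
b = tf.constant([0.0, -10.0])               # [out_units]
y = tf.nn.relu_layer(x, w, b)  # relu(matmul(x, w) + b) -> [[2.0, 0.0]]
```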

Tensor relu6(IEnumerable<IGraphNodeBase> features, string name)

Computes Rectified Linear 6: `min(max(features, 0), 6)`.

Source: [Convolutional Deep Belief Networks on CIFAR-10. A. Krizhevsky](http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf)
Parameters
IEnumerable<IGraphNodeBase> features
A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, `int16`, or `int8`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` with the same type as `features`.
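
A quick usage sketch (assuming the TF 1.x Python API):

```
import tensorflow as tf

x = tf.constant([-1.0, 3.0, 8.0])
y = tf.nn.relu6(x)  # min(max(x, 0), 6) -> [0.0, 3.0, 6.0]
```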

object safe_embedding_lookup_sparse_dyn(object embedding_weights, object sparse_ids, object sparse_weights, ImplicitContainer<T> combiner, object default_id, object name, ImplicitContainer<T> partition_strategy, object max_norm)

Lookup embedding results, accounting for invalid IDs and empty features.

The partitioned embedding tensors in `embedding_weights` must all be the same shape except for the first dimension, which is allowed to vary because the vocabulary size is not necessarily a multiple of `P`. `embedding_weights` may be a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()` with a partitioner.

Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs with non-positive weight. For an entry with no features, the embedding vector for `default_id` is returned, or the 0-vector if `default_id` is not supplied.

The ids and weights may be multi-dimensional. Embeddings are always aggregated along the last dimension.
Parameters
object embedding_weights
A list of `P` float `Tensor`s or values representing partitioned embedding `Tensor`s. Alternatively, a `PartitionedVariable` created by partitioning along dimension 0. The total unpartitioned shape should be `[e_0, e_1,..., e_m]`, where `e_0` represents the vocab size and `e_1,..., e_m` are the embedding dimensions.
object sparse_ids
`SparseTensor` of shape `[d_0, d_1,..., d_n]` containing the ids. `d_0` is typically batch size.
object sparse_weights
`SparseTensor` of same shape as `sparse_ids`, containing float weights corresponding to `sparse_ids`, or `None` if all weights are assumed to be 1.0.
ImplicitContainer<T> combiner
A string specifying how to combine embedding results for each entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the default.
object default_id
The id to use for an entry with no features.
object name
A name for this operation (optional).
ImplicitContainer<T> partition_strategy
A string specifying the partitioning strategy. Currently `"div"` and `"mod"` are supported. Default is `"div"`.
object max_norm
If not `None`, all embeddings are l2-normalized to max_norm before combining.
Returns
object
Dense `Tensor` of shape `[d_0, d_1,..., d_{n-1}, e_1,..., e_m]`.
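
A minimal sketch of the pruning behavior, using the Python counterpart `tf.nn.safe_embedding_lookup_sparse` (vocabulary size, ids, and shapes are illustrative assumptions):

```
import tensorflow as tf

embedding_weights = tf.Variable(tf.random_normal([10, 4]))  # vocab 10, dim 4

# Two entries: row 0 has ids {3, 7}; row 1 holds only an invalid id (-1).
sparse_ids = tf.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=tf.constant([3, 7, -1], dtype=tf.int64),
    dense_shape=[2, 2])

# The invalid id is pruned; row 1 is then empty, and because no default_id
# is supplied it receives the 0-vector.
embedded = tf.nn.safe_embedding_lookup_sparse(
    [embedding_weights], sparse_ids, sparse_weights=None, combiner="mean")
```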

Tensor sampled_softmax_loss(IEnumerable<IGraphNodeBase> weights, IGraphNodeBase biases, IGraphNodeBase labels, IGraphNodeBase inputs, int num_sampled, int num_classes, int num_true, object sampled_values, bool remove_accidental_hits, string partition_strategy, string name, object seed)

Computes and returns the sampled softmax training loss.

This is a faster way to train a softmax classifier over a huge number of classes.

This operation is for training only. It is generally an underestimate of the full softmax loss.

A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf).

Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
Parameters
IEnumerable<IGraphNodeBase> weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-sharded) class embeddings.
IGraphNodeBase biases
A `Tensor` of shape `[num_classes]`. The class biases.
IGraphNodeBase labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
IGraphNodeBase inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
int num_sampled
An `int`. The number of classes to randomly sample per batch.
int num_classes
An `int`. The number of possible classes.
int num_true
An `int`. The number of target classes per training example.
object sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
bool remove_accidental_hits
A `bool`. Whether to remove "accidental hits", where a sampled class equals one of the target classes. Default is `True`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
A name for the operation (optional).
object seed
Random seed for candidate sampling. Defaults to `None`, which does not set the op-level random seed for candidate sampling.
Returns
Tensor
A `batch_size` 1-D tensor of per-example sampled softmax losses.
Show Example
if mode == "train":
  loss = tf.nn.sampled_softmax_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.softmax_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)

Tensor sampled_softmax_loss(IGraphNodeBase weights, IGraphNodeBase biases, IGraphNodeBase labels, IGraphNodeBase inputs, int num_sampled, int num_classes, int num_true, object sampled_values, bool remove_accidental_hits, string partition_strategy, string name, object seed)

Computes and returns the sampled softmax training loss.

This is a faster way to train a softmax classifier over a huge number of classes.

This operation is for training only. It is generally an underestimate of the full softmax loss.

A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf).

Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
Parameters
IGraphNodeBase weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-sharded) class embeddings.
IGraphNodeBase biases
A `Tensor` of shape `[num_classes]`. The class biases.
IGraphNodeBase labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
IGraphNodeBase inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
int num_sampled
An `int`. The number of classes to randomly sample per batch.
int num_classes
An `int`. The number of possible classes.
int num_true
An `int`. The number of target classes per training example.
object sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
bool remove_accidental_hits
A `bool`. Whether to remove "accidental hits", where a sampled class equals one of the target classes. Default is `True`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
A name for the operation (optional).
object seed
Random seed for candidate sampling. Defaults to `None`, which does not set the op-level random seed for candidate sampling.
Returns
Tensor
A `batch_size` 1-D tensor of per-example sampled softmax losses.
Show Example
if mode == "train":
  loss = tf.nn.sampled_softmax_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.softmax_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)

Tensor sampled_softmax_loss(IGraphNodeBase weights, IGraphNodeBase biases, IGraphNodeBase labels, IGraphNodeBase inputs, int num_sampled, int num_classes, int num_true, ValueTuple<IEnumerable<int>, ndarray, object> sampled_values, bool remove_accidental_hits, string partition_strategy, string name, object seed)

Computes and returns the sampled softmax training loss.

This is a faster way to train a softmax classifier over a huge number of classes.

This operation is for training only. It is generally an underestimate of the full softmax loss.

A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf).

Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
Parameters
IGraphNodeBase weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-sharded) class embeddings.
IGraphNodeBase biases
A `Tensor` of shape `[num_classes]`. The class biases.
IGraphNodeBase labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
IGraphNodeBase inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
int num_sampled
An `int`. The number of classes to randomly sample per batch.
int num_classes
An `int`. The number of possible classes.
int num_true
An `int`. The number of target classes per training example.
ValueTuple<IEnumerable<int>, ndarray, object> sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
bool remove_accidental_hits
A `bool`. Whether to remove "accidental hits", where a sampled class equals one of the target classes. Default is `True`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
A name for the operation (optional).
object seed
Random seed for candidate sampling. Defaults to `None`, which does not set the op-level random seed for candidate sampling.
Returns
Tensor
A `batch_size` 1-D tensor of per-example sampled softmax losses.
Show Example
if mode == "train":
  loss = tf.nn.sampled_softmax_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.softmax_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)

Tensor sampled_softmax_loss(IEnumerable<IGraphNodeBase> weights, IGraphNodeBase biases, IGraphNodeBase labels, IGraphNodeBase inputs, int num_sampled, int num_classes, int num_true, ValueTuple<IEnumerable<int>, ndarray, object> sampled_values, bool remove_accidental_hits, string partition_strategy, string name, object seed)

Computes and returns the sampled softmax training loss.

This is a faster way to train a softmax classifier over a huge number of classes.

This operation is for training only. It is generally an underestimate of the full softmax loss.

A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf).

Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
Parameters
IEnumerable<IGraphNodeBase> weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-sharded) class embeddings.
IGraphNodeBase biases
A `Tensor` of shape `[num_classes]`. The class biases.
IGraphNodeBase labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
IGraphNodeBase inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
int num_sampled
An `int`. The number of classes to randomly sample per batch.
int num_classes
An `int`. The number of possible classes.
int num_true
An `int`. The number of target classes per training example.
ValueTuple<IEnumerable<int>, ndarray, object> sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
bool remove_accidental_hits
A `bool`. Whether to remove "accidental hits", where a sampled class equals one of the target classes. Default is `True`.
string partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
string name
A name for the operation (optional).
object seed
Random seed for candidate sampling. Defaults to `None`, which does not set the op-level random seed for candidate sampling.
Returns
Tensor
A `batch_size` 1-D tensor of per-example sampled softmax losses.
Show Example
if mode == "train":
  loss = tf.nn.sampled_softmax_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.softmax_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)

object sampled_softmax_loss_dyn(object weights, object biases, object labels, object inputs, object num_sampled, object num_classes, ImplicitContainer<T> num_true, object sampled_values, ImplicitContainer<T> remove_accidental_hits, ImplicitContainer<T> partition_strategy, ImplicitContainer<T> name, object seed)

Computes and returns the sampled softmax training loss.

This is a faster way to train a softmax classifier over a huge number of classes.

This operation is for training only. It is generally an underestimate of the full softmax loss.

A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference. In this case, you must set `partition_strategy="div"` for the two losses to be consistent, as in the example shown below. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf).

Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
Parameters
object weights
A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-sharded) class embeddings.
object biases
A `Tensor` of shape `[num_classes]`. The class biases.
object labels
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
object inputs
A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
object num_sampled
An `int`. The number of classes to randomly sample per batch.
object num_classes
An `int`. The number of possible classes.
ImplicitContainer<T> num_true
An `int`. The number of target classes per training example.
object sampled_values
A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`.
ImplicitContainer<T> remove_accidental_hits
A `bool`. Whether to remove "accidental hits", where a sampled class equals one of the target classes. Default is `True`.
ImplicitContainer<T> partition_strategy
A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See tf.nn.embedding_lookup for more details.
ImplicitContainer<T> name
A name for the operation (optional).
object seed
Random seed for candidate sampling. Defaults to `None`, which does not set the op-level random seed for candidate sampling.
Returns
object
A `batch_size` 1-D tensor of per-example sampled softmax losses.
Show Example
if mode == "train":
  loss = tf.nn.sampled_softmax_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.softmax_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)

object scale_regularization_loss(IEnumerable<int> regularization_loss)

Scales the sum of the given regularization losses by number of replicas.

Usage with distribution strategy and custom training loop:
Parameters
IEnumerable<int> regularization_loss
Regularization loss.
Returns
object
Scalar loss value.
Show Example
with strategy.scope():
  def compute_loss(self, labels, predictions):
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    loss = tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)

    # Add scaled regularization losses.
    loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))
    return loss

object scale_regularization_loss(IGraphNodeBase regularization_loss)

Scales the sum of the given regularization losses by number of replicas.

Usage with distribution strategy and custom training loop:
Parameters
IGraphNodeBase regularization_loss
Regularization loss.
Returns
object
Scalar loss value.
Show Example
with strategy.scope():
  def compute_loss(self, labels, predictions):
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    loss = tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)

    # Add scaled regularization losses.
    loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))
    return loss

object scale_regularization_loss_dyn(object regularization_loss)

Scales the sum of the given regularization losses by number of replicas.

Usage with distribution strategy and custom training loop:
Parameters
object regularization_loss
Regularization loss.
Returns
object
Scalar loss value.
Show Example
with strategy.scope():
  def compute_loss(self, labels, predictions):
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    loss = tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)

    # Add scaled regularization losses.
    loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))
    return loss

Tensor selu(IGraphNodeBase features, string name)

Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` if `features < 0`, `scale * features` otherwise.

To be used together with `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`. For correct dropout, use tf.contrib.nn.alpha_dropout.

See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
Parameters
IGraphNodeBase features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `features`.
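
A minimal numeric sketch in the Python API (the constants are the fixed values from the Self-Normalizing Neural Networks paper, scale ≈ 1.0507 and alpha ≈ 1.6733):

```
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 2.0])
y = tf.nn.selu(x)
# Elementwise:
#   x < 0:  scale * alpha * (exp(x) - 1)  -> approx. -1.1113 for x = -1.0
#   x >= 0: scale * x                     -> 0.0 and approx. 2.1014
```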

object selu_dyn(object features, object name)

Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` if `features < 0`, `scale * features` otherwise.

To be used together with `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`. For correct dropout, use tf.contrib.nn.alpha_dropout.

See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
Parameters
object features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `features`.

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, IEnumerable<int> rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].
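
A minimal sketch of the depthwise-then-pointwise shape flow in the Python API (all sizes are illustrative assumptions):

```
import tensorflow as tf

inputs = tf.random_normal([1, 32, 32, 3])  # NHWC: batch 1, 32x32, 3 channels

channel_multiplier = 2
depthwise_filter = tf.random_normal([3, 3, 3, channel_multiplier])
pointwise_filter = tf.random_normal(
    [1, 1, 3 * channel_multiplier, 8])     # mixes the 6 channels into 8

out = tf.nn.separable_conv2d(
    inputs, depthwise_filter, pointwise_filter,
    strides=[1, 1, 1, 1], padding="SAME")  # out shape: [1, 32, 32, 8]
```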

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, int rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, int rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, int rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, IEnumerable<int> rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, IEnumerable<int> rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, int rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, int rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, int rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, IEnumerable<int> rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, IEnumerable<int> rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, IEnumerable<int> rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, IEnumerable<int> rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, int strides, string padding, int rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k]

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
int strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].

Tensor separable_conv2d(IEnumerable<IGraphNodeBase> input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, IEnumerable<int> rate, string name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k] ```

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IEnumerable<IGraphNodeBase> input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
IEnumerable<int> rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
string name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. For example, with data_format="NHWC", shape is `[batch, out_height, out_width, out_channels]`.

Tensor separable_conv2d(IGraphNodeBase input, IGraphNodeBase depthwise_filter, IGraphNodeBase pointwise_filter, IEnumerable<int> strides, string padding, int rate, PythonFunctionContainer name, string data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k] ```

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
IGraphNodeBase input
4-D `Tensor` with shape according to `data_format`.
IGraphNodeBase depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
IGraphNodeBase pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
IEnumerable<int> strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
string padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
int rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
PythonFunctionContainer name
A name for this operation (optional).
string data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
Tensor
A 4-D `Tensor` with shape according to `data_format`. For example, with data_format="NHWC", shape is `[batch, out_height, out_width, out_channels]`.

object separable_conv2d_dyn(object input, object depthwise_filter, object pointwise_filter, object strides, object padding, object rate, object name, object data_format, object dilations)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.

In detail, with the default NHWC format,

``` output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k] ```

`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `rate` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
Parameters
object input
4-D `Tensor` with shape according to `data_format`.
object depthwise_filter
4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
object pointwise_filter
4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
object strides
1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
object padding
A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
object rate
1-D of size 2. The dilation rate at which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1.
object name
A name for this operation (optional).
object data_format
The data format for input. Either "NHWC" (default) or "NCHW".
object dilations
Alias of rate.
Returns
object
A 4-D `Tensor` with shape according to `data_format`. For example, with data_format="NHWC", shape is `[batch, out_height, out_width, out_channels]`.

Tensor sigmoid_cross_entropy_with_logits(IEnumerable<double> _sentinel, IGraphNodeBase labels, object logits, string name)

Computes sigmoid cross entropy given `logits`.

Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.

For brevity, let `x = logits`, `z = labels`. The logistic loss is

```
  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))
```

For x < 0, to avoid overflow in exp(-x), we reformulate the above

```
  x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))
```

Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation

max(x, 0) - x * z + log(1 + exp(-abs(x)))

`logits` and `labels` must have the same type and shape.
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IGraphNodeBase labels
A `Tensor` of the same type and shape as `logits`.
object logits
A `Tensor` of type `float32` or `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `logits` with the componentwise logistic losses.
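
The effect of the stable formulation above can be checked directly. A minimal plain-C# sketch (independent of the binding) comparing the naive expression `x - x * z + log(1 + exp(-x))` with the rewritten `max(x, 0) - x * z + log(1 + exp(-abs(x)))`:

```
using System;

class StableSigmoidLossCheck {
    // Naive form: exp(-x) overflows for large negative x.
    static double Naive(double x, double z) =>
        x - x * z + Math.Log(1 + Math.Exp(-x));

    // Stable form used by the implementation, per the derivation above.
    static double Stable(double x, double z) =>
        Math.Max(x, 0) - x * z + Math.Log(1 + Math.Exp(-Math.Abs(x)));

    static void Main() {
        foreach (double x in new[] { -1000.0, -5.0, 0.0, 5.0, 1000.0 })
            Console.WriteLine($"x = {x,7}: naive = {Naive(x, 1.0)}, stable = {Stable(x, 1.0)}");
        // At x = -1000 the naive form overflows to infinity;
        // the stable form returns the correct loss, 1000.
    }
}
```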

Tensor sigmoid_cross_entropy_with_logits(IEnumerable<double> _sentinel, IGraphNodeBase labels, IEnumerable<IGraphNodeBase> logits, string name)

Computes sigmoid cross entropy given `logits`.

Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.

For brevity, let `x = logits`, `z = labels`. The logistic loss is

```
  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))
```

For x < 0, to avoid overflow in exp(-x), we reformulate the above

```
  x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))
```

Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation

max(x, 0) - x * z + log(1 + exp(-abs(x)))

`logits` and `labels` must have the same type and shape.
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IGraphNodeBase labels
A `Tensor` of the same type and shape as `logits`.
IEnumerable<IGraphNodeBase> logits
A `Tensor` of type `float32` or `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `logits` with the componentwise logistic losses.

object sigmoid_cross_entropy_with_logits_dyn(object _sentinel, object labels, object logits, object name)

Computes sigmoid cross entropy given `logits`.

Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.

For brevity, let `x = logits`, `z = labels`. The logistic loss is

```
  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))
```

For x < 0, to avoid overflow in exp(-x), we reformulate the above

```
  x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))
```

Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation

max(x, 0) - x * z + log(1 + exp(-abs(x)))

`logits` and `labels` must have the same type and shape.
Parameters
object _sentinel
Used to prevent positional parameters. Internal, do not use.
object labels
A `Tensor` of the same type and shape as `logits`.
object logits
A `Tensor` of type `float32` or `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of the same shape as `logits` with the componentwise logistic losses.

Tensor softmax(IEnumerable<IGraphNodeBase> logits, IGraphNodeBase axis, string name, object dim)

Computes softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
Parameters
IEnumerable<IGraphNodeBase> logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
IGraphNodeBase axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type and shape as `logits`.
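
A direct plain-C# transcription of that equivalence, for a single vector of logits (illustrative only; it exponentiates directly, without the max-shift a production implementation would typically apply first):

```
using System;
using System.Linq;

class SoftmaxDemo {
    // softmax = exp(logits) / sum(exp(logits)), per the equivalence above.
    static double[] Softmax(double[] logits) {
        double[] e = logits.Select(Math.Exp).ToArray();
        double sum = e.Sum();
        return e.Select(v => v / sum).ToArray();
    }

    static void Main() {
        double[] p = Softmax(new[] { 2.0, 1.0, 0.1 });
        Console.WriteLine(string.Join(", ", p.Select(v => v.ToString("F4"))));
        Console.WriteLine($"sum = {p.Sum():F4}"); // entries are positive and sum to 1
    }
}
```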

Tensor softmax(IEnumerable<IGraphNodeBase> logits, int axis, string name, object dim)

Computes softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
Parameters
IEnumerable<IGraphNodeBase> logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
int axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type and shape as `logits`.

Tensor softmax(PythonClassContainer logits, int axis, string name, object dim)

Computes softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
Parameters
PythonClassContainer logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
int axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type and shape as `logits`.

Tensor softmax(PythonClassContainer logits, IGraphNodeBase axis, string name, object dim)

Computes softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
Parameters
PythonClassContainer logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
IGraphNodeBase axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type and shape as `logits`.

Tensor softmax(object logits, int axis, string name, object dim)

Computes softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
Parameters
object logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
int axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type and shape as `logits`.

Tensor softmax(object logits, IGraphNodeBase axis, string name, object dim)

Computes softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
Parameters
object logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
IGraphNodeBase axis
The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
Tensor
A `Tensor`. Has the same type and shape as `logits`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, ValueTuple<PythonClassContainer, PythonClassContainer> labels, IEnumerable<double> logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
ValueTuple<PythonClassContainer, PythonClassContainer> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<double> logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.
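
Concretely, the loss for one example is `-sum_c labels[c] * log(softmax(logits)[c])`. A small plain-C# sketch of that definition (the log-sum-exp shift here is a standard stabilization on my part, not a claim about the op's internals):

```
using System;
using System.Linq;

class SoftmaxXentDemo {
    // loss = -sum_c labels[c] * log_softmax(logits)[c],
    // where log_softmax(logits)[c] = logits[c] - logsumexp(logits).
    static double Xent(double[] labels, double[] logits) {
        double max = logits.Max(); // shift for numerical stability
        double lse = max + Math.Log(logits.Sum(x => Math.Exp(x - max)));
        return labels.Select((l, c) => l * (lse - logits[c])).Sum();
    }

    static void Main() {
        double[] labels = { 0.0, 1.0, 0.0 };  // one-hot: a valid probability distribution
        double[] logits = { 1.0, 3.0, -1.0 }; // unscaled logits, not softmax output
        Console.WriteLine($"loss = {Xent(labels, logits):F4}"); // ~0.1429
    }
}
```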

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, IEnumerable<double> labels, IEnumerable<double> logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IEnumerable<double> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<double> logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, IndexedSlices labels, IEnumerable<double> logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IndexedSlices labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<double> logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, IndexedSlices labels, IGraphNodeBase logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IndexedSlices labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IGraphNodeBase logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, int labels, IEnumerable<double> logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
int labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<double> logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, int labels, IGraphNodeBase logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
int labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IGraphNodeBase logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, IGraphNodeBase labels, IEnumerable<double> logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IGraphNodeBase labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<double> logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, IGraphNodeBase labels, IGraphNodeBase logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IGraphNodeBase labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IGraphNodeBase logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, IEnumerable<double> labels, IGraphNodeBase logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
IEnumerable<double> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IGraphNodeBase logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits(IEnumerable<double> _sentinel, ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase logits, int dim, string name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<double> _sentinel
Used to prevent positional parameters. Internal, do not use.
ValueTuple<PythonClassContainer, PythonClassContainer> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IGraphNodeBase logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
int dim
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object axis
Alias for dim.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

object softmax_cross_entropy_with_logits_dyn(object _sentinel, object labels, object logits, ImplicitContainer<T> dim, object name, object axis)

Computes softmax cross entropy between `logits` and `labels`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `dim` argument specifying the class dimension.

Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see tf.nn.softmax_cross_entropy_with_logits_v2.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
object _sentinel
Used to prevent positional parameters. Internal, do not use.
object labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
ImplicitContainer<T> dim
The class dimension. Defaulted to -1 which is the last dimension.
object name
A name for the operation (optional).
object axis
Alias for dim.
Returns
object
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IndexedSlices labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IndexedSlices labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaulted to -1 which is the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.
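
The warning about unscaled logits is easy to demonstrate: running already-softmaxed values through the same cross-entropy definition gives a different (and wrong) loss than feeding raw logits. A plain-C# sketch, independent of the binding:

```
using System;
using System.Linq;

class LogitsVsProbsDemo {
    static double[] Softmax(double[] x) {
        double[] e = x.Select(Math.Exp).ToArray();
        double sum = e.Sum();
        return e.Select(v => v / sum).ToArray();
    }

    // Cross entropy against log_softmax of whatever is passed in,
    // mirroring an op that softmaxes its input internally.
    static double Xent(double[] labels, double[] logits) {
        double lse = Math.Log(logits.Sum(Math.Exp));
        return labels.Select((l, c) => l * (lse - logits[c])).Sum();
    }

    static void Main() {
        double[] labels = { 0.0, 1.0, 0.0 };
        double[] logits = { 1.0, 3.0, -1.0 };
        Console.WriteLine($"raw logits:        {Xent(labels, logits):F4}");          // ~0.1429
        Console.WriteLine($"softmaxed (wrong): {Xent(labels, Softmax(logits)):F4}"); // ~0.64
    }
}
```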

Tensor softmax_cross_entropy_with_logits_v2(IGraphNodeBase labels, object logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IDictionary<object, object> labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IDictionary<object, object> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaulted to -1 which is the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IGraphNodeBase labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase labels
Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaulted to -1 which is the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IGraphNodeBase labels, object logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IGraphNodeBase labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IndexedSlices labels, object logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IndexedSlices labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IDictionary<object, object> labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IDictionary<object, object> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IDictionary<object, object> labels, object logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IDictionary<object, object> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IDictionary<object, object> labels, object logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IDictionary<object, object> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IEnumerable<int> labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<int> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IEnumerable<int> labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<int> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IEnumerable<int> labels, object logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<int> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IndexedSlices labels, object logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IndexedSlices labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IEnumerable<int> labels, object logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IEnumerable<int> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(IndexedSlices labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, string name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IndexedSlices labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
string name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

Tensor softmax_cross_entropy_with_logits_v2(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IEnumerable<IGraphNodeBase> logits, Nullable<int> axis, PythonFunctionContainer name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
IEnumerable<IGraphNodeBase> logits
Unscaled log probabilities.
Nullable<int> axis
The class dimension. Defaults to -1, the last dimension.
PythonFunctionContainer name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
Tensor
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

object softmax_cross_entropy_with_logits_v2_dyn(object labels, object logits, object axis, object name, object dim)

Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.

`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).

Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass the label tensors through tf.stop_gradient before feeding them to this function.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
object labels
Each vector along the class dimension should hold a valid probability distribution, e.g. when `labels` has shape `[batch_size, num_classes]`, each row `labels[i]` must be a valid probability distribution.
object logits
Unscaled log probabilities.
object axis
The class dimension. Defaults to -1, the last dimension.
object name
A name for the operation (optional).
object dim
Deprecated alias for axis.
Returns
object
A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`.

object softmax_dyn(object logits, object axis, object name, object dim)

Computes softmax activations. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead

This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
Parameters
object logits
A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
object axis
The dimension softmax is performed on. Defaults to -1, which indicates the last dimension.
object name
A name for the operation (optional).
object dim
Deprecated alias for `axis`.
Returns
object
A `Tensor`. Has the same type and shape as `logits`.
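
As a quick numeric check of the equivalence above, here is a sketch against the TensorFlow 1.x Python API that this binding wraps; note that writing the formula out by hand needs `keepdims=True` so the division broadcasts correctly:

```
import tensorflow as tf  # TensorFlow 1.x

logits = tf.constant([[1.0, 2.0, 3.0]])
by_op = tf.nn.softmax(logits, axis=-1)
by_hand = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis=-1, keepdims=True)

with tf.Session() as sess:
    print(sess.run(by_op))    # approximately [[0.090, 0.245, 0.665]]
    print(sess.run(by_hand))  # same values
```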

Tensor softplus(IGraphNodeBase features, string name)

Computes softplus: `log(exp(features) + 1)`.
Parameters
IGraphNodeBase features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `features`.
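
A short numeric check of the softplus formula (a sketch against the TensorFlow 1.x Python API that this binding wraps):

```
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

x = [-1.0, 0.0, 1.0]
with tf.Session() as sess:
    print(sess.run(tf.nn.softplus(tf.constant(x))))  # ~[0.313, 0.693, 1.313]
print(np.log(np.exp(x) + 1.0))                       # same values
```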

object softplus_dyn(object features, object name)

Computes softplus: `log(exp(features) + 1)`.
Parameters
object features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `features`.

Tensor softsign(IGraphNodeBase features, string name)

Computes softsign: `features / (abs(features) + 1)`.
Parameters
IGraphNodeBase features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `features`.
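
The corresponding check for softsign, which squashes inputs smoothly into the interval (-1, 1) (again a sketch against the TensorFlow 1.x Python API):

```
import tensorflow as tf  # TensorFlow 1.x

x = tf.constant([-4.0, 0.0, 4.0])
with tf.Session() as sess:
    print(sess.run(tf.nn.softsign(x)))  # [-0.8, 0.0, 0.8], i.e. x / (|x| + 1)
```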

object softsign_dyn(object features, object name)

Computes softsign: `features / (abs(features) + 1)`.
Parameters
object features
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `features`.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, IGraphNodeBase labels, object logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and labels of shape `[batch_size]`, but higher dimensions are supported, in which case the last dimension is assumed to be of size `num_classes`. `logits` must have dtype `float16`, `float32`, or `float64`, and `labels` must have dtype `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
IGraphNodeBase labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
object logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.
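
The sparse variant takes a single integer class index per row instead of a probability distribution; a minimal sketch against the underlying TensorFlow 1.x Python op (the values are illustrative):

```
import tensorflow as tf  # TensorFlow 1.x

logits = tf.constant([[4.0, 2.0, 1.0],
                      [0.0, 5.0, 1.0]])       # [batch_size, num_classes]
labels = tf.constant([0, 1], dtype=tf.int64)  # one class index per row

# Named arguments only, as with the dense variant.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
                                                      logits=logits)

with tf.Session() as sess:
    print(sess.run(loss))  # shape [batch_size]; equals the one-hot dense loss
```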

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, IEnumerable<int> labels, IEnumerable<IGraphNodeBase> logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and labels of shape `[batch_size]`, but higher dimensions are supported, in which case the last dimension is assumed to be of size `num_classes`. `logits` must have dtype `float16`, `float32`, or `float64`, and `labels` must have dtype `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
IEnumerable<int> labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
IEnumerable<IGraphNodeBase> logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, IndexedSlices labels, object logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and labels of shape `[batch_size]`, but higher dimensions are supported, in which case the last dimension is assumed to be of size `num_classes`. `logits` must have dtype `float16`, `float32`, or `float64`, and `labels` must have dtype `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
IndexedSlices labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
object logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, IndexedSlices labels, IEnumerable<IGraphNodeBase> logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and labels of shape `[batch_size]`, but higher dimensions are supported, in which case the last dimension is assumed to be of size `num_classes`. `logits` must have dtype `float16`, `float32`, or `float64`, and `labels` must have dtype `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
IndexedSlices labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
IEnumerable<IGraphNodeBase> logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, ValueTuple<PythonClassContainer, PythonClassContainer> labels, object logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and labels of shape `[batch_size]`, but higher dimensions are supported, in which case the last dimension is assumed to be of size `num_classes`. `logits` must have dtype `float16`, `float32`, or `float64`, and `labels` must have dtype `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
ValueTuple<PythonClassContainer, PythonClassContainer> labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
object logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, ValueTuple<PythonClassContainer, PythonClassContainer> labels, IEnumerable<IGraphNodeBase> logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and labels of shape `[batch_size]`, but higher dimensions are supported, in which case the last dimension is assumed to be of size `num_classes`. `logits` must have dtype `float16`, `float32`, or `float64`, and `labels` must have dtype `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
ValueTuple<PythonClassContainer, PythonClassContainer> labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
IEnumerable<IGraphNodeBase> logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, IEnumerable<int> labels, object logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
IEnumerable<int> labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
object logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, double labels, object logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
double labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
object logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, ndarray labels, object logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
ndarray labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
object logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, ndarray labels, IEnumerable<IGraphNodeBase> logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
ndarray labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
IEnumerable<IGraphNodeBase> logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, IGraphNodeBase labels, IEnumerable<IGraphNodeBase> logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
IGraphNodeBase labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
IEnumerable<IGraphNodeBase> logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

Tensor sparse_softmax_cross_entropy_with_logits(IGraphNodeBase _sentinel, double labels, IEnumerable<IGraphNodeBase> logits, string name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
IGraphNodeBase _sentinel
Used to prevent positional parameters. Internal, do not use.
double labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
IEnumerable<IGraphNodeBase> logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

object sparse_softmax_cross_entropy_with_logits_dyn(object _sentinel, object labels, object logits, object name)

Computes sparse softmax cross entropy between `logits` and `labels`.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

**NOTE:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.

**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.

A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.

**Note that to avoid confusion, it is required to pass only named arguments to this function.**
Parameters
object _sentinel
Used to prevent positional parameters. Internal, do not use.
object labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
object logits
Per-label activations (typically a linear output) of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. These activation energies are interpreted as unnormalized log probabilities.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss.

object static_bidirectional_rnn(RNNCell cell_fw, RNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.
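A rough usage sketch in C#, under stated assumptions: `BasicRNNCell` and its `num_units` constructor argument are taken to mirror Python's `tf.nn.rnn_cell.BasicRNNCell`, `tf.float32` is taken to be exposed as a `DType`, the optional parameters default to null, and `inputs` stands for a pre-built length-`T` list of `[batch_size, input_size]` tensors:

```csharp
// Assumed pre-built elsewhere: IEnumerable<IGraphNodeBase> inputs, length T,
// each element of shape [batch_size, input_size].
var cellFw = new BasicRNNCell(num_units: 64);   // forward-direction cell (assumed ctor)
var cellBw = new BasicRNNCell(num_units: 64);   // backward-direction cell
object result = tf.nn.static_bidirectional_rnn(
    cell_fw: cellFw,
    cell_bw: cellBw,
    inputs: inputs,
    dtype: tf.float32);   // required here because no initial states are supplied
// `result` carries (outputs, output_state_fw, output_state_bw); each entry of
// `outputs` is depth-concatenated to cell_fw.output_size + cell_bw.output_size.
```

Since the function is deprecated, new code should prefer the `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))` equivalent named in the warning above.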

object static_bidirectional_rnn(RNNCell cell_fw, RNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(RNNCell cell_fw, RNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, LayerRNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, LayerRNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, LayerRNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, LayerRNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, RNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(RNNCell cell_fw, RNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(RNNCell cell_fw, LayerRNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(RNNCell cell_fw, LayerRNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(RNNCell cell_fw, LayerRNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(RNNCell cell_fw, LayerRNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
RNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
LayerRNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, RNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, RNNCell cell_bw, IGraphNodeBase inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IEnumerable<int> sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IGraphNodeBase inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn(LayerRNNCell cell_fw, RNNCell cell_bw, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state_fw, IGraphNodeBase initial_state_bw, DType dtype, IGraphNodeBase sequence_length, VariableScope scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Parameters
LayerRNNCell cell_fw
An instance of RNNCell, to be used for forward direction.
RNNCell cell_bw
An instance of RNNCell, to be used for backward direction.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
IGraphNodeBase initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
IGraphNodeBase initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
DType dtype
(optional) The data type for the initial state. Required if either of the initial states are not provided.
IGraphNodeBase sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
VariableScope scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

object static_bidirectional_rnn_dyn(object cell_fw, object cell_bw, object inputs, object initial_state_fw, object initial_state_bw, object dtype, object sequence_length, object scope)

Creates a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API

Similar to the unidirectional case above (see static_rnn), but takes input and builds independent forward and backward RNNs, with the final forward and backward outputs depth-concatenated such that the output has the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of the forward and backward cells must match. The initial state for both directions defaults to zero (but can be set optionally), and no intermediate states are ever returned -- the network is fully unrolled for the given (passed-in) length(s) of the sequence(s), or completely unrolled if the length(s) are not given.
Parameters
object cell_fw
An instance of RNNCell, to be used for forward direction.
object cell_bw
An instance of RNNCell, to be used for backward direction.
object inputs
A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements.
object initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
object initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
object dtype
(optional) The data type for the initial state. Required if either of the initial states is not provided.
object sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
object scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn"
Returns
object
A tuple (outputs, output_state_fw, output_state_bw) where:

- outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs.
- output_state_fw is the final state of the forward RNN.
- output_state_bw is the final state of the backward RNN.

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, object initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)
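Building on the example above, the following is a minimal sketch (Python, TF 1.x API, which this method mirrors) of the `sequence_length` path; the shapes and the LSTM cell are illustrative assumptions.

```
import tensorflow.compat.v1 as tf  # TF 1.x-style API (assumption)
tf.disable_v2_behavior()

T, batch_size, input_size, units = 10, 4, 16, 32
cell = tf.nn.rnn_cell.BasicLSTMCell(units)
inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(T)]
seq_len = tf.placeholder(tf.int32, [batch_size])  # per-example lengths

# dtype is required here because no initial_state is supplied.
outputs, final_state = tf.nn.static_rnn(
    cell, inputs, dtype=tf.float32, sequence_length=seq_len)

# outputs is a length-T list of [batch_size, units] tensors; for steps at
# or beyond a row's length the output is zeros, and final_state carries
# the state from that row's last valid step.
```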

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, PythonClassContainer initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
PythonClassContainer initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, IEnumerable<object> initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IEnumerable<object> initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, AttentionWrapperState initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
AttentionWrapperState initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, AttentionWrapperState initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
AttentionWrapperState initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IGraphNodeBase initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, IGraphNodeBase initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IGraphNodeBase initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, PythonClassContainer initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
PythonClassContainer initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, IEnumerable<object> initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IEnumerable<object> initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, PythonClassContainer initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
PythonClassContainer initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, object initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, string initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
string initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, string initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
string initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, IEnumerable<object> initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IEnumerable<object> initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, PythonClassContainer initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
PythonClassContainer initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, AttentionWrapperState initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
AttentionWrapperState initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, object initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, object initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, IGraphNodeBase initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IGraphNodeBase initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, AttentionWrapperState initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
AttentionWrapperState initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IEnumerable<IGraphNodeBase> inputs, IEnumerable<object> initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IEnumerable<object> initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, string initial_state, DType dtype, object sequence_length, VariableScope scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
string initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
VariableScope scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, string initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
string initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_rnn(object cell, IGraphNodeBase inputs, IGraphNodeBase initial_state, DType dtype, object sequence_length, string scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is shown in the example below. However, a few other options are available:

An initial state can be provided. If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`: `(output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1))`
Parameters
object cell
An instance of RNNCell.
IGraphNodeBase inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
IGraphNodeBase initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
DType dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)

object static_rnn_dyn(object cell, object inputs, object initial_state, object dtype, object sequence_length, object scope)

Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API

The simplest form of RNN network generated is the loop shown in the Show Example block below. However, a few other options are available:

An initial state can be provided. If the sequence_length vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time `t` for batch row `b`:

(output, state)(b, t) =
  (t >= sequence_length(b))
    ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
    : cell(input(b, t), state(b, t - 1))
Parameters
object cell
An instance of RNNCell.
object inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements.
object initial_state
(optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
object dtype
(optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
object sequence_length
Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
object scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
object
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- state is the final state.
Show Example
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
    output, state = cell(input_, state)
    outputs.append(output)
return (outputs, state)

ValueTuple<IList<object>, object> static_state_saving_rnn(RNNCell cell, IEnumerable<IGraphNodeBase> inputs, Object state_saver, IEnumerable<string> state_name, IEnumerable<int> sequence_length, string scope)

RNN that accepts a state saver for time-truncated RNN calculation. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, stateful=True)`, which is equivalent to this API
Parameters
RNNCell cell
An instance of `RNNCell`.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`.
Object state_saver
A state saver object with methods `state` and `save_state`.
IEnumerable<string> state_name
Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., `cell.state_size` is a tuple) then `state_name` should be a tuple of strings having the same length as `cell.state_size`. Otherwise it should be a single string.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input).
- state is the final state.
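
For illustration, a minimal TensorFlow 1.x Python sketch of a state-saver object satisfying the `state`/`save_state` protocol described above; the `DictStateSaver` class and all sizes here are hypothetical, not part of this API.

```python
import tensorflow as tf

class DictStateSaver(object):
    """Hypothetical in-memory state saver exposing `state` and `save_state`."""
    def __init__(self, initial_states):
        self._states = dict(initial_states)   # state_name -> state tensor
    def state(self, name):
        return self._states[name]
    def save_state(self, name, tensor):
        self._states[name] = tensor
        return tf.identity(tensor)            # an op the RNN can depend on

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=8)
inputs = [tf.placeholder(tf.float32, [4, 5]) for _ in range(3)]
saver = DictStateSaver({"h": cell.zero_state(4, tf.float32)})
outputs, state = tf.nn.static_state_saving_rnn(
    cell, inputs, state_saver=saver, state_name="h")
```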

ValueTuple<IList<object>, object> static_state_saving_rnn(RNNCell cell, IEnumerable<IGraphNodeBase> inputs, Object state_saver, string state_name, IEnumerable<int> sequence_length, string scope)

RNN that accepts a state saver for time-truncated RNN calculation. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, stateful=True)`, which is equivalent to this API
Parameters
RNNCell cell
An instance of `RNNCell`.
IEnumerable<IGraphNodeBase> inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`.
Object state_saver
A state saver object with methods `state` and `save_state`.
string state_name
Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., `cell.state_size` is a tuple) then `state_name` should be a tuple of strings having the same length as `cell.state_size`. Otherwise it should be a single string.
IEnumerable<int> sequence_length
(optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length.
string scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
ValueTuple<IList<object>, object>
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input).
- state is the final state.

object static_state_saving_rnn_dyn(object cell, object inputs, object state_saver, object state_name, object sequence_length, object scope)

RNN that accepts a state saver for time-truncated RNN calculation. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell, stateful=True)`, which is equivalent to this API
Parameters
object cell
An instance of `RNNCell`.
object inputs
A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`.
object state_saver
A state saver object with methods `state` and `save_state`.
object state_name
Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., `cell.state_size` is a tuple) then `state_name` should be a tuple of strings having the same length as `cell.state_size`. Otherwise it should be a single string.
object sequence_length
(optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length.
object scope
VariableScope for the created subgraph; defaults to "rnn".
Returns
object
A pair (outputs, state) where:

- outputs is a length T list of outputs (one for each input).
- state is the final state.

object sufficient_statistics(IGraphNodeBase x, IEnumerable<int> axes, Nullable<double> shift, Nullable<bool> keep_dims, string name, object keepdims)

Calculate the sufficient statistics for the mean and variance of `x`.

These sufficient statistics are computed using the one pass algorithm on an input that's optionally shifted. See: https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data
Parameters
IGraphNodeBase x
A `Tensor`.
IEnumerable<int> axes
Array of ints. Axes along which to compute mean and variance.
Nullable<double> shift
A `Tensor` containing the value by which to shift the data for numerical stability, or `None` if no shift is to be performed. A shift close to the true mean provides the most numerically stable results.
Nullable<bool> keep_dims
If true, produce statistics with the same dimensionality as the input.
string name
Name used to scope the operations that compute the sufficient stats.
object keepdims
Alias for keep_dims.
Returns
object
Four `Tensor` objects of the same type as `x`:

* the count (number of elements to average over).
* the (possibly shifted) sum of the elements in the array.
* the (possibly shifted) sum of squares of the elements in the array.
* the shift by which the mean must be corrected, or None if `shift` is None.
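
To make the return values concrete, a hedged TensorFlow 1.x Python sketch showing how the four statistics recover the mean and variance (here with `shift=None`); `tf.nn.normalize_moments` performs the same correction.

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
count, mean_ss, var_ss, shift = tf.nn.sufficient_statistics(x, axes=[0])
# With shift=None: mean = sum / count, variance = sum_sq / count - mean**2
mean = mean_ss / count                        # [2.0, 3.0]
variance = var_ss / count - tf.square(mean)   # [1.0, 1.0]
# The same correction, shift included, is implemented by:
mean2, variance2 = tf.nn.normalize_moments(count, mean_ss, var_ss, shift)
```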

object sufficient_statistics_dyn(object x, object axes, object shift, object keep_dims, object name, object keepdims)

Calculate the sufficient statistics for the mean and variance of `x`.

These sufficient statistics are computed using the one pass algorithm on an input that's optionally shifted. See: https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data
Parameters
object x
A `Tensor`.
object axes
Array of ints. Axes along which to compute mean and variance.
object shift
A `Tensor` containing the value by which to shift the data for numerical stability, or `None` if no shift is to be performed. A shift close to the true mean provides the most numerically stable results.
object keep_dims
If true, produce statistics with the same dimensionality as the input.
object name
Name used to scope the operations that compute the sufficient stats.
object keepdims
Alias for keep_dims.
Returns
object
Four `Tensor` objects of the same type as `x`:

* the count (number of elements to average over).
* the (possibly shifted) sum of the elements in the array.
* the (possibly shifted) sum of squares of the elements in the array.
* the shift by which the mean must be corrected, or None if `shift` is None.

object swish(IGraphNodeBase features)

Computes the Swish activation function: `x * sigmoid(x)`.

Source: "Searching for Activation Functions" (Ramachandran et al. 2017) https://arxiv.org/abs/1710.05941
Parameters
IGraphNodeBase features
A `Tensor` representing preactivation values.
Returns
object
The activation value.
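
A minimal sketch of the equivalent TensorFlow 1.x Python call; the sample values are illustrative.

```python
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
y = tf.nn.swish(x)   # same result as x * tf.sigmoid(x)
# swish(0) = 0; swish(1) = sigmoid(1) ~ 0.731; swish(-1) ~ -0.269
```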

object swish_dyn(object features)

Computes the Swish activation function: `x * sigmoid(x)`.

Source: "Searching for Activation Functions" (Ramachandran et al. 2017) https://arxiv.org/abs/1710.05941
Parameters
object features
A `Tensor` representing preactivation values.
Returns
object
The activation value.

object top_k(object input, IGraphNodeBase k, bool sorted, string name)

Finds values and indices of the `k` largest entries for the last dimension.

If the input is a vector (rank=1), finds the `k` largest entries in the vector and outputs their values and indices as vectors. Thus `values[j]` is the `j`-th largest entry in `input`, and its index is `indices[j]`.

For matrices (resp. higher rank input), computes the top `k` entries in each row (resp. vector along the last dimension). Thus,

values.shape = indices.shape = input.shape[:-1] + [k]

If two elements are equal, the lower-index element appears first.
Parameters
object input
1-D or higher `Tensor` with last dimension at least `k`.
IGraphNodeBase k
0-D `int32` `Tensor`. Number of top elements to look for along the last dimension (along each row for matrices).
bool sorted
If true the resulting `k` elements will be sorted by the values in descending order.
string name
Optional name for the operation.
Returns
object
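
A hedged TensorFlow 1.x Python sketch of the shape and tie-breaking behavior described above; the input values are illustrative.

```python
import tensorflow as tf

v = tf.constant([1.0, 3.0, 2.0, 3.0])
values, indices = tf.nn.top_k(v, k=2, sorted=True)
# values  == [3.0, 3.0]
# indices == [1, 3]   (equal values: the lower index comes first)

m = tf.constant([[1, 5, 2], [9, 0, 4]])
values, indices = tf.nn.top_k(m, k=2)  # shapes [2, 2]: top-2 of each row
```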

object top_k(object input, int k, bool sorted, string name)

Finds values and indices of the `k` largest entries for the last dimension.

If the input is a vector (rank=1), finds the `k` largest entries in the vector and outputs their values and indices as vectors. Thus `values[j]` is the `j`-th largest entry in `input`, and its index is `indices[j]`.

For matrices (resp. higher rank input), computes the top `k` entries in each row (resp. vector along the last dimension). Thus,

values.shape = indices.shape = input.shape[:-1] + [k]

If two elements are equal, the lower-index element appears first.
Parameters
object input
1-D or higher `Tensor` with last dimension at least `k`.
int k
0-D `int32` `Tensor`. Number of top elements to look for along the last dimension (along each row for matrices).
bool sorted
If true the resulting `k` elements will be sorted by the values in descending order.
string name
Optional name for the operation.
Returns
object

object top_k_dyn(object input, ImplicitContainer<T> k, ImplicitContainer<T> sorted, object name)

Finds values and indices of the `k` largest entries for the last dimension.

If the input is a vector (rank=1), finds the `k` largest entries in the vector and outputs their values and indices as vectors. Thus `values[j]` is the `j`-th largest entry in `input`, and its index is `indices[j]`.

For matrices (resp. higher rank input), computes the top `k` entries in each row (resp. vector along the last dimension). Thus,

values.shape = indices.shape = input.shape[:-1] + [k]

If two elements are equal, the lower-index element appears first.
Parameters
object input
1-D or higher `Tensor` with last dimension at least `k`.
ImplicitContainer<T> k
0-D `int32` `Tensor`. Number of top elements to look for along the last dimension (along each row for matrices).
ImplicitContainer<T> sorted
If true the resulting `k` elements will be sorted by the values in descending order.
object name
Optional name for the operation.
Returns
object

object uniform_candidate_sampler(IGraphNodeBase true_classes, object num_true, object num_sampled, object unique, object range_max, object seed, string name)

Samples a set of classes using a uniform base distribution.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution for this operation is the uniform distribution over the range of integers `[0, range_max)`.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
IGraphNodeBase true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object num_true
An `int`. The number of target classes per training example.
object num_sampled
An `int`. The number of classes to randomly sample. The `sampled_candidates` return value will have shape `[num_sampled]`. If `unique=True`, `num_sampled` must be less than or equal to `range_max`.
object unique
A `bool`. Determines whether all sampled classes in a batch are unique.
object range_max
An `int`. The number of possible classes.
object seed
An `int`. An operation-specific seed. Default is 0.
string name
A name for the operation (optional).
Returns
object
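
As an illustration, a hedged TensorFlow 1.x Python sketch sampling five candidate classes uniformly from `[0, 10)`; the batch size and class counts are assumptions.

```python
import tensorflow as tf

true_classes = tf.constant([[0], [3]], dtype=tf.int64)  # [batch_size, num_true]
sampled, true_expected, sampled_expected = tf.nn.uniform_candidate_sampler(
    true_classes=true_classes, num_true=1, num_sampled=5,
    unique=True, range_max=10)
# sampled: [5] int64 classes, all distinct because unique=True
# true_expected: [2, 1] and sampled_expected: [5] -- the Q(y|x) values
```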

object uniform_candidate_sampler_dyn(object true_classes, object num_true, object num_sampled, object unique, object range_max, object seed, object name)

Samples a set of classes using a uniform base distribution.

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution for this operation is the uniform distribution over the range of integers `[0, range_max)`.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Parameters
object true_classes
A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
object num_true
An `int`. The number of target classes per training example.
object num_sampled
An `int`. The number of classes to randomly sample. The `sampled_candidates` return value will have shape `[num_sampled]`. If `unique=True`, `num_sampled` must be less than or equal to `range_max`.
object unique
A `bool`. Determines whether all sampled classes in a batch are unique.
object range_max
An `int`. The number of possible classes.
object seed
An `int`. An operation-specific seed. Default is 0.
object name
A name for the operation (optional).
Returns
object

Tensor weighted_cross_entropy_with_logits(IGraphNodeBase labels, IEnumerable<object> logits, Nullable<double> pos_weight, string name, IEnumerable<int> targets)

Computes a weighted cross entropy. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(targets)`. They will be removed in a future version. Instructions for updating: targets is deprecated, use labels instead

This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight` allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.

The usual cross-entropy cost is defined as:

labels * -log(sigmoid(logits)) + (1 - labels) * -log(1 - sigmoid(logits))

A value `pos_weight > 1` decreases the false negative count, hence increasing the recall. Conversely setting `pos_weight < 1` decreases the false positive count and increases the precision. This can be seen from the fact that `pos_weight` is introduced as a multiplicative coefficient for the positive labels term in the loss expression:

labels * -log(sigmoid(logits)) * pos_weight + (1 - labels) * -log(1 - sigmoid(logits))

For brevity, let `x = logits`, `z = labels`, `q = pos_weight`. The loss is:

  qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
= (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))

Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow, the implementation uses

(1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))

`logits` and `labels` must have the same type and shape.
Parameters
IGraphNodeBase labels
A `Tensor` of the same type and shape as `logits`.
IEnumerable<object> logits
A `Tensor` of type `float32` or `float64`.
Nullable<double> pos_weight
A coefficient to use on the positive examples.
string name
A name for the operation (optional).
IEnumerable<int> targets
Deprecated alias for labels.
Returns
Tensor
A `Tensor` of the same shape as `logits` with the componentwise weighted logistic losses.
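
A hedged numeric sketch (TensorFlow 1.x Python) that checks the expanded loss against the op; `pos_weight=2.0` and the sample labels/logits are illustrative.

```python
import tensorflow as tf

labels = tf.constant([1.0, 0.0, 1.0])
logits = tf.constant([0.5, -0.5, 2.0])
loss = tf.nn.weighted_cross_entropy_with_logits(
    labels=labels, logits=logits, pos_weight=2.0)
# Same value as the (less numerically stable) direct formula:
manual = (2.0 * labels * -tf.log(tf.sigmoid(logits))
          + (1 - labels) * -tf.log(1 - tf.sigmoid(logits)))
```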

object weighted_cross_entropy_with_logits_dyn(object labels, object logits, object pos_weight, object name, object targets)

Computes a weighted cross entropy. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(targets)`. They will be removed in a future version. Instructions for updating: targets is deprecated, use labels instead

This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight` allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.

The usual cross-entropy cost is defined as:

labels * -log(sigmoid(logits)) + (1 - labels) * -log(1 - sigmoid(logits))

A value `pos_weight > 1` decreases the false negative count, hence increasing the recall. Conversely setting `pos_weight < 1` decreases the false positive count and increases the precision. This can be seen from the fact that `pos_weight` is introduced as a multiplicative coefficient for the positive labels term in the loss expression:

labels * -log(sigmoid(logits)) * pos_weight + (1 - labels) * -log(1 - sigmoid(logits))

For brevity, let `x = logits`, `z = labels`, `q = pos_weight`. The loss is:

  qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
= (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))

Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow, the implementation uses

(1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))

`logits` and `labels` must have the same type and shape.
Parameters
object labels
A `Tensor` of the same type and shape as `logits`.
object logits
A `Tensor` of type `float32` or `float64`.
object pos_weight
A coefficient to use on the positive examples.
object name
A name for the operation (optional).
object targets
Deprecated alias for labels.
Returns
object
A `Tensor` of the same shape as `logits` with the componentwise weighted logistic losses.

object weighted_moments(ValueTuple<PythonClassContainer, PythonClassContainer> x, IEnumerable<int> axes, IGraphNodeBase frequency_weights, string name, Nullable<bool> keep_dims, Nullable<bool> keepdims)

Returns the frequency-weighted mean and variance of `x`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> x
A tensor.
IEnumerable<int> axes
1-d tensor of int32 values; these are the axes along which to compute mean and variance.
IGraphNodeBase frequency_weights
A tensor of positive weights which can be broadcast with x.
string name
Name used to scope the operation.
Nullable<bool> keep_dims
Produce moments with the same dimensionality as the input.
Nullable<bool> keepdims
Alias of keep_dims.
Returns
object
Two tensors: `weighted_mean` and `weighted_variance`.
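
To illustrate frequency weighting, a hedged TensorFlow 1.x Python sketch: a weight of 0 drops an element and a weight of 2 counts it twice; the values are illustrative.

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0]])
w = tf.constant([[1.0, 0.0, 2.0]])  # broadcastable with x
mean, variance = tf.nn.weighted_moments(x, axes=[1], frequency_weights=w)
# mean     == (1*1 + 0*2 + 2*3) / (1 + 0 + 2) == 7/3
# variance == (1*1 + 0*4 + 2*9) / 3 - (7/3)**2 == 8/9
```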

object weighted_moments(IndexedSlices x, IEnumerable<int> axes, IGraphNodeBase frequency_weights, string name, Nullable<bool> keep_dims, Nullable<bool> keepdims)

Returns the frequency-weighted mean and variance of `x`.
Parameters
IndexedSlices x
A tensor.
IEnumerable<int> axes
1-d tensor of int32 values; these are the axes along which to compute mean and variance.
IGraphNodeBase frequency_weights
A tensor of positive weights which can be broadcast with x.
string name
Name used to scope the operation.
Nullable<bool> keep_dims
Produce moments with the same dimensionality as the input.
Nullable<bool> keepdims
Alias of keep_dims.
Returns
object
Two tensors: `weighted_mean` and `weighted_variance`.

object weighted_moments(IGraphNodeBase x, IEnumerable<int> axes, IGraphNodeBase frequency_weights, string name, Nullable<bool> keep_dims, Nullable<bool> keepdims)

Returns the frequency-weighted mean and variance of `x`.
Parameters
IGraphNodeBase x
A tensor.
IEnumerable<int> axes
1-d tensor of int32 values; these are the axes along which to compute mean and variance.
IGraphNodeBase frequency_weights
A tensor of positive weights which can be broadcast with x.
string name
Name used to scope the operation.
Nullable<bool> keep_dims
Produce moments with the same dimensionality as the input.
Nullable<bool> keepdims
Alias of keep_dims.
Returns
object
Two tensors: `weighted_mean` and `weighted_variance`.

object weighted_moments(IEnumerable<IGraphNodeBase> x, IEnumerable<int> axes, IGraphNodeBase frequency_weights, string name, Nullable<bool> keep_dims, Nullable<bool> keepdims)

Returns the frequency-weighted mean and variance of `x`.
Parameters
IEnumerable<IGraphNodeBase> x
A tensor.
IEnumerable<int> axes
1-d tensor of int32 values; these are the axes along which to compute mean and variance.
IGraphNodeBase frequency_weights
A tensor of positive weights which can be broadcast with x.
string name
Name used to scope the operation.
Nullable<bool> keep_dims
Produce moments with the same dimensionality as the input.
Nullable<bool> keepdims
Alias of keep_dims.
Returns
object
Two tensors: `weighted_mean` and `weighted_variance`.

object weighted_moments_dyn(object x, object axes, object frequency_weights, object name, object keep_dims, object keepdims)

Returns the frequency-weighted mean and variance of `x`.
Parameters
object x
A tensor.
object axes
1-d tensor of int32 values; these are the axes along which to compute mean and variance.
object frequency_weights
A tensor of positive weights which can be broadcast with x.
object name
Name used to scope the operation.
object keep_dims
Produce moments with the same dimensionality as the input.
object keepdims
Alias of keep_dims.
Returns
object
Two tensors: `weighted_mean` and `weighted_variance`.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, object dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
object dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.
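
To ground the description, a hedged TensorFlow 1.x Python sketch that turns a plain "VALID" 2-D convolution into its rate-2 atrous counterpart; the input and filter shapes are illustrative assumptions.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 8, 8, 3])          # NHWC input
filt = tf.Variable(tf.random_normal([3, 3, 3, 16]))   # HWIO filter

def conv_op(converted_input, num_spatial_dims, padding):
    # Called by with_space_to_batch on the space-to-batch transformed input
    return tf.nn.conv2d(converted_input, filt,
                        strides=[1, 1, 1, 1], padding=padding)

y = tf.nn.with_space_to_batch(x, dilation_rate=[2, 2],
                              padding="VALID", op=conv_op)
# Expected to match tf.nn.convolution(x, filt, padding="VALID",
#                                     dilation_rate=[2, 2])
```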

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, IEnumerable<int> dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
IEnumerable<int> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, IEnumerable<int> dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
IEnumerable<int> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, ndarray dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
ndarray dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, ndarray dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
ndarray dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, ValueTuple<int, object> dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
ValueTuple<int, object> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, ValueTuple<int, object> dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])
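As a concrete instance of the "SAME" rule above, assume a single spatial dimension with a filter of size 3 and a dilation rate of 2 (plain Python arithmetic; the values are illustrative):

```python
filter_shape = 3
dilation_rate = 2

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)
assert dilated_filter_shape == 5

pad_before = (dilated_filter_shape - 1) // 2        # == 2
pad_after = dilated_filter_shape - 1 - pad_before   # == 2
```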

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.
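The adjustment step can be sketched directly in NumPy; the concrete values of `spatial_dims`, `dilation_rate`, `paddings`, and `crops` below are illustrative assumptions.

```python
import numpy as np

# Suppose num_spatial_dims == 2 but the spatial dimensions are 1 and 3,
# so dimension 2 must be left alone by space_to_batch_nd.
spatial_dims = [1, 3]
dilation_rate = np.array([2, 4], dtype=np.int64)
paddings = np.array([[0, 0], [1, 3]], dtype=np.int64)  # e.g. from required_space_to_batch_paddings
crops = np.array([[0, 0], [1, 3]], dtype=np.int64)

m = max(spatial_dims)
adjusted_dilation_rate = np.ones([m], dtype=np.int64)
adjusted_paddings = np.zeros([m, 2], dtype=np.int64)
adjusted_crops = np.zeros([m, 2], dtype=np.int64)

for i, dim in enumerate(spatial_dims):
    adjusted_dilation_rate[dim - 1] = dilation_rate[i]
    adjusted_paddings[dim - 1, :] = paddings[i, :]
    adjusted_crops[dim - 1, :] = crops[i, :]

# adjusted_dilation_rate == [2, 1, 4]: block size 1 for dimension 2 means
# space_to_batch_nd effectively ignores it.
```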

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
ValueTuple<int, object> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, int dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
int dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, int dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
int dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.
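To make the fusion optimization described in these docstrings concrete, the following sketch (Python, TF 1.x style; `op_a`, `op_b`, and all shapes are hypothetical) chains two "VALID" convolutions inside a single `with_space_to_batch` call, paying for only one `space_to_batch_nd`/`batch_to_space_nd` round trip:

```python
import tensorflow as tf  # TF 1.x style API

net = tf.placeholder(tf.float32, [1, 32, 32, 4])
k_a = tf.get_variable("k_a", [3, 3, 4, 4])
k_b = tf.get_variable("k_b", [3, 3, 4, 4])

def op_a(x, num_spatial_dims, padding):
    return tf.nn.conv2d(x, k_a, strides=[1, 1, 1, 1], padding=padding)

def op_b(x, num_spatial_dims, padding):
    return tf.nn.conv2d(x, k_b, strides=[1, 1, 1, 1], padding=padding)

def combined_op(converted_input, num_spatial_dims, _):
    result = op_a(converted_input, num_spatial_dims, "VALID")
    return op_b(result, num_spatial_dims, "VALID")

# One space_to_batch_nd / batch_to_space_nd pair instead of two.
out = tf.nn.with_space_to_batch(net, dilation_rate=[2, 2], padding="VALID",
                                op=combined_op)
```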

object with_space_to_batch(IEnumerable<IGraphNodeBase> input, object dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.
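The equivalence noted above can be exercised with a throwaway window-free op; `x` and `identity_op` below are hypothetical stand-ins, and this is a sketch rather than a guaranteed-identical computation:

```python
import numpy as np
import tensorflow as tf  # TF 1.x style API

x = tf.constant(np.arange(8.0, dtype=np.float32).reshape([1, 8, 1]))

def identity_op(converted_input, num_spatial_dims, padding):
    # A window-free op, so "VALID" and "SAME" with filter_shape [1] agree.
    return converted_input

out_valid = tf.nn.with_space_to_batch(x, [2], "VALID", identity_op)
out_same = tf.nn.with_space_to_batch(x, [2], "SAME", identity_op,
                                     filter_shape=[1])

with tf.Session() as sess:
    a, b = sess.run([out_valid, out_same])
    assert np.array_equal(a, b)
```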

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IEnumerable<IGraphNodeBase> input
Tensor of rank > max(spatial_dims).
object dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, ndarray dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
ndarray dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, int dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
int dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, IEnumerable<int> dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
IEnumerable<int> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IGraphNodeBase input, ValueTuple<int, object> dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
ValueTuple<int, object> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IGraphNodeBase input, ValueTuple<int, object> dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
ValueTuple<int, object> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IGraphNodeBase input, IEnumerable<int> dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
IEnumerable<int> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IGraphNodeBase input, IEnumerable<int> dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
IEnumerable<int> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME"
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.
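To illustrate the `op` contract described above, the following minimal Python sketch (using the underlying TensorFlow 1.x API; the tensor names and shapes are illustrative assumptions) turns an ordinary "VALID" 2-D convolution into the corresponding atrous convolution:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [1, 64, 64, 3])            # NHWC input
w = tf.constant(np.random.rand(3, 3, 3, 8), tf.float32)   # HWIO filter

def conv_op(converted_input, num_spatial_dims, padding):
    # `op` is handed the space-to-batch representation; with a non-trivial
    # dilation_rate it is always invoked with "VALID" padding.
    return tf.nn.conv2d(converted_input, w, strides=[1, 1, 1, 1], padding=padding)

y = tf.nn.with_space_to_batch(x, dilation_rate=[2, 2],
                              padding="VALID", op=conv_op)

# For comparison, tf.nn.atrous_conv2d should compute the same dilated
# convolution here.
y_ref = tf.nn.atrous_conv2d(x, w, rate=2, padding="VALID")
```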

object with_space_to_batch(IGraphNodeBase input, ndarray dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])
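As a quick worked instance of the "SAME" formulas above (values chosen for illustration), take one spatial dimension with `filter_shape = [3]` and `dilation_rate = [2]`:

```python
filter_shape, rate = 3, 2
dilated = filter_shape + (filter_shape - 1) * (rate - 1)  # 3 + 2*1 = 5
pad_lo = (dilated - 1) // 2                               # 2
pad_hi = dilated - 1 - pad_lo                             # 2
```

so `required_space_to_batch_paddings` is asked to provide at least two elements of padding on each side of that dimension.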

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
ndarray dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(IGraphNodeBase input, ndarray dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
ndarray dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, ndarray dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
ndarray dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, object dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
object dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, object dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
object dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(IGraphNodeBase input, object dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
object dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, int dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
int dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(IGraphNodeBase input, int dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
int dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, ValueTuple<int, object> dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
ValueTuple<int, object> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, ValueTuple<int, object> dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

    op(input, num_spatial_dims, padding)

Otherwise, it returns:

    batch_to_space_nd(
        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
           num_spatial_dims,
           "VALID"),
        adjusted_dilation_rate,
        adjusted_crops)

where:

    adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
    adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

    dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

    paddings, crops = required_space_to_batch_paddings(
        input_shape[spatial_dims], dilation_rate,
        [(dilated_filter_shape - 1) // 2,
         dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings`, and `crops` so that they are usable with these operations. For a given dimension, if the block size is 1 and both the starting and ending padding and crop amounts are 0, then `space_to_batch_nd` effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

    adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
    adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
    adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

    net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "VALID")
        ...
        result = op_k(result, num_spatial_dims, "VALID")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

    net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
    ...
    net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

    def combined_op(converted_input, num_spatial_dims, _):
        result = op_1(converted_input, num_spatial_dims, "SAME")
        ...
        result = op_k(result, num_spatial_dims, "SAME")
        return result

    net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
ValueTuple<int, object> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
str constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above; its dimensions will vary based on the `op` provided.

object with_space_to_batch(ValueTuple<PythonClassContainer, PythonClassContainer> input, IEnumerable<int> dilation_rate, string padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd( op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings), num_spatial_dims, "VALID") adjusted_dilation_rate, adjusted_crops),

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i] adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :] adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter_shape of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1) ... net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _): result = op_1(converted_input, num_spatial_dims, "VALID") ... result = op_k(result, num_spatial_dims, "VALID")

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1) ... net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> input
Tensor of rank > max(spatial_dims).
IEnumerable<int> dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
string padding
A string constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.
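Show Example
A minimal sketch, assuming the TensorFlow 1.x Python API that this binding mirrors; the shapes, variable names, and the helper conv_op are illustrative, not part of this API.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # graph mode, so placeholders work

value = tf.placeholder(tf.float32, [1, 64, 64, 3])   # NHWC input
filters = tf.get_variable("w", [3, 3, 3, 8])         # 3x3 kernel, 3 -> 8 channels

def conv_op(converted_input, num_spatial_dims, padding):
    # Called by with_space_to_batch on the space-to-batch representation;
    # padding is "VALID" whenever dilation_rate is not uniformly 1.
    return tf.nn.conv2d(converted_input, filters,
                        strides=[1, 1, 1, 1], padding=padding)

# Roughly equivalent to tf.nn.atrous_conv2d(value, filters, rate=2, padding="SAME").
out = tf.nn.with_space_to_batch(value, dilation_rate=[2, 2], padding="SAME",
                                op=conv_op, filter_shape=[3, 3])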

object with_space_to_batch(IGraphNodeBase input, object dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims],
    dilation_rate,
    [(dilated_filter_shape - 1) // 2,
     dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
object dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
A string constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch(IGraphNodeBase input, int dilation_rate, ValueTuple<IEnumerable<object>, object> padding, object op, object filter_shape, IEnumerable<int> spatial_dims, string data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims],
    dilation_rate,
    [(dilated_filter_shape - 1) // 2,
     dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
IGraphNodeBase input
Tensor of rank > max(spatial_dims).
int dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
ValueTuple<IEnumerable<object>, object> padding
A string constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
IEnumerable<int> spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
string data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

object with_space_to_batch_dyn(object input, object dilation_rate, object padding, object op, object filter_shape, object spatial_dims, object data_format)

Performs `op` on the space-to-batch representation of `input`.

This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.

In the special case that `dilation_rate` is uniformly 1, this simply returns:

op(input, num_spatial_dims, padding)

Otherwise, it returns:

batch_to_space_nd(
    op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
       num_spatial_dims,
       "VALID"),
    adjusted_dilation_rate,
    adjusted_crops)

where:

adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]

defined as follows:

We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:

If `padding = "VALID"`, then:

paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims], dilation_rate)

If `padding = "SAME"`, then:

dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims],
    dilation_rate,
    [(dilated_filter_shape - 1) // 2,
     dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])

Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.

For 0 <= i < len(spatial_dims), we assign:

adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]

All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.

Note that when `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a `filter_shape` of `[1]*N`.

Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding

net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
...
net = with_space_to_batch(net, dilation_rate, "VALID", op_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)

This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.

Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions

net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
...
net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)

can be combined into a single `with_space_to_batch` operation as follows:

def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")
  return result

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Parameters
object input
Tensor of rank > max(spatial_dims).
object dilation_rate
int32 Tensor of *known* shape [num_spatial_dims].
object padding
A string constant equal to "VALID" or "SAME".
object op
Function that maps (input, num_spatial_dims, padding) -> output
object filter_shape
If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
object spatial_dims
Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`.
object data_format
A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns
object
The output Tensor as described above, dimensions will vary based on the op provided.

Tensor xw_plus_b(IDictionary<object, object> x, IGraphNodeBase weights, IGraphNodeBase biases, string name)

Computes matmul(x, weights) + biases.
Parameters
IDictionary<object, object> x
A 2D tensor. Dimensions typically: batch, in_units.
IGraphNodeBase weights
A 2D tensor. Dimensions typically: in_units, out_units.
IGraphNodeBase biases
A 1D tensor. Dimensions: out_units.
string name
A name for the operation (optional). If not specified "xw_plus_b" is used.
Returns
Tensor
A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.
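Show Example
A minimal sketch, assuming the TensorFlow 1.x Python API that this binding mirrors; the shapes and variable names are illustrative.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # graph mode

x = tf.ones([32, 128])               # batch=32, in_units=128
w = tf.get_variable("w", [128, 10])  # in_units, out_units
b = tf.get_variable("b", [10])       # out_units
y = tf.nn.xw_plus_b(x, w, b)         # shape [32, 10]; same value as tf.matmul(x, w) + b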

Tensor xw_plus_b(IGraphNodeBase x, IGraphNodeBase weights, IGraphNodeBase biases, string name)

Computes matmul(x, weights) + biases.
Parameters
IGraphNodeBase x
A 2D tensor. Dimensions typically: batch, in_units.
IGraphNodeBase weights
A 2D tensor. Dimensions typically: in_units, out_units.
IGraphNodeBase biases
A 1D tensor. Dimensions: out_units.
string name
A name for the operation (optional). If not specified "xw_plus_b" is used.
Returns
Tensor
A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

Tensor xw_plus_b(int x, IGraphNodeBase weights, IGraphNodeBase biases, string name)

Computes matmul(x, weights) + biases.
Parameters
int x
A 2D tensor. Dimensions typically: batch, in_units.
IGraphNodeBase weights
A 2D tensor. Dimensions typically: in_units, out_units.
IGraphNodeBase biases
A 1D tensor. Dimensions: out_units.
string name
A name for the operation (optional). If not specified "xw_plus_b" is used.
Returns
Tensor
A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

object xw_plus_b_dyn(object x, object weights, object biases, object name)

Computes matmul(x, weights) + biases.
Parameters
object x
A 2D tensor. Dimensions typically: batch, in_units.
object weights
A 2D tensor. Dimensions typically: in_units, out_units.
object biases
A 1D tensor. Dimensions: out_units.
object name
A name for the operation (optional). If not specified "xw_plus_b" is used.
Returns
object
A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

Tensor zero_fraction(IGraphNodeBase value, string name)

Returns the fraction of zeros in `value`.

If `value` is empty, the result is `nan`.

This is useful in summaries to measure and report sparsity, as in the example below.
Parameters
IGraphNodeBase value
A tensor of numeric type.
string name
A name for the operation (optional).
Returns
Tensor
The fraction of zeros in `value`, with type `float32`.
Show Example
z = tf.nn.relu(...)
summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))

Tensor zero_fraction(ndarray value, string name)

Returns the fraction of zeros in `value`.

If `value` is empty, the result is `nan`.

This is useful in summaries to measure and report sparsity, as in the example below.
Parameters
ndarray value
A tensor of numeric type.
string name
A name for the operation (optional).
Returns
Tensor
The fraction of zeros in `value`, with type `float32`.
Show Example
z = tf.nn.relu(...)
summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))

Tensor zero_fraction(IEnumerable<IGraphNodeBase> value, string name)

Returns the fraction of zeros in `value`.

If `value` is empty, the result is `nan`.

This is useful in summaries to measure and report sparsity, as in the example below.
Parameters
IEnumerable<IGraphNodeBase> value
A tensor of numeric type.
string name
A name for the operation (optional).
Returns
Tensor
The fraction of zeros in `value`, with type `float32`.
Show Example
z = tf.nn.relu(...)
summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))

object zero_fraction_dyn(object value, object name)

Returns the fraction of zeros in `value`.

If `value` is empty, the result is `nan`.

This is useful in summaries to measure and report sparsity, as in the example below.
Parameters
object value
A tensor of numeric type.
object name
A name for the operation (optional).
Returns
object
The fraction of zeros in `value`, with type `float32`.
Show Example
z = tf.nn.relu(...)
summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))

Public properties

PythonFunctionContainer all_candidate_sampler_fn get;

PythonFunctionContainer atrous_conv2d_fn get;

PythonFunctionContainer atrous_conv2d_transpose_fn get;

PythonFunctionContainer avg_pool_fn get;

PythonFunctionContainer avg_pool_v2_fn get;

PythonFunctionContainer avg_pool1d_fn get;

PythonFunctionContainer avg_pool3d_fn get;

PythonFunctionContainer batch_norm_with_global_normalization_fn get;

PythonFunctionContainer batch_normalization_fn get;

PythonFunctionContainer bias_add_fn get;

PythonFunctionContainer bidirectional_dynamic_rnn_fn get;

PythonFunctionContainer collapse_repeated_fn get;

PythonFunctionContainer compute_accidental_hits_fn get;

PythonFunctionContainer compute_average_loss_fn get;

PythonFunctionContainer conv_transpose_fn get;

PythonFunctionContainer conv1d_fn get;

PythonFunctionContainer conv1d_transpose_fn get;

PythonFunctionContainer conv2d_backprop_filter_fn get;

PythonFunctionContainer conv2d_backprop_input_fn get;

PythonFunctionContainer conv2d_fn get;

PythonFunctionContainer conv2d_transpose_fn get;

PythonFunctionContainer conv3d_backprop_filter_fn_ get;

PythonFunctionContainer conv3d_fn get;

PythonFunctionContainer conv3d_transpose_fn get;

PythonFunctionContainer convolution_fn get;

PythonFunctionContainer crelu_fn get;

PythonFunctionContainer ctc_beam_search_decoder_fn get;

PythonFunctionContainer ctc_beam_search_decoder_v2_fn get;

PythonFunctionContainer ctc_greedy_decoder_fn get;

PythonFunctionContainer ctc_loss_fn get;

PythonFunctionContainer ctc_loss_v2_fn get;

PythonFunctionContainer ctc_unique_labels_fn get;

PythonFunctionContainer depthwise_conv2d_backprop_filter_fn get;

PythonFunctionContainer depthwise_conv2d_backprop_input_fn get;

PythonFunctionContainer depthwise_conv2d_fn get;

PythonFunctionContainer depthwise_conv2d_native_fn get;

PythonFunctionContainer dilation2d_fn get;

PythonFunctionContainer dropout_fn get;

PythonFunctionContainer dynamic_rnn_fn get;

PythonFunctionContainer embedding_lookup_fn get;

PythonFunctionContainer embedding_lookup_sparse_fn get;

PythonFunctionContainer erosion2d_fn get;

PythonFunctionContainer fixed_unigram_candidate_sampler_fn get;

PythonFunctionContainer fractional_avg_pool_fn get;

PythonFunctionContainer fractional_max_pool_fn get;

PythonFunctionContainer fused_batch_norm_fn get;

PythonFunctionContainer in_top_k_fn get;

PythonFunctionContainer l2_loss_fn get;

PythonFunctionContainer l2_normalize_fn get;

PythonFunctionContainer leaky_relu_fn get;

PythonFunctionContainer learned_unigram_candidate_sampler_fn get;

PythonFunctionContainer log_poisson_loss_fn get;

PythonFunctionContainer log_softmax_fn get;

PythonFunctionContainer log_uniform_candidate_sampler_fn get;

PythonFunctionContainer max_pool_fn get;

PythonFunctionContainer max_pool_v2_fn get;

PythonFunctionContainer max_pool_with_argmax_fn get;

PythonFunctionContainer max_pool1d_fn get;

PythonFunctionContainer max_pool2d_fn get;

PythonFunctionContainer max_pool3d_fn get;

PythonFunctionContainer moments_fn get;

PythonFunctionContainer nce_loss_fn get;

PythonFunctionContainer normalize_moments_fn get;

PythonFunctionContainer raw_rnn_fn get;

PythonFunctionContainer relu_layer_fn get;

PythonFunctionContainer relu6_fn get;

PythonFunctionContainer safe_embedding_lookup_sparse_fn get;

PythonFunctionContainer sampled_softmax_loss_fn get;

PythonFunctionContainer scale_regularization_loss_fn get;

PythonFunctionContainer separable_conv2d_fn get;

PythonFunctionContainer sigmoid_cross_entropy_with_logits_fn get;

PythonFunctionContainer softmax_cross_entropy_with_logits_fn get;

PythonFunctionContainer softmax_cross_entropy_with_logits_v2_fn get;

PythonFunctionContainer softmax_fn get;

PythonFunctionContainer softplus_fn get;

PythonFunctionContainer softsign_fn get;

PythonFunctionContainer sparse_softmax_cross_entropy_with_logits_fn get;

PythonFunctionContainer static_bidirectional_rnn_fn get;

PythonFunctionContainer static_rnn_fn get;

PythonFunctionContainer static_state_saving_rnn_fn get;

PythonFunctionContainer sufficient_statistics_fn get;

PythonFunctionContainer swish_fn get;

PythonFunctionContainer top_k_fn get;

PythonFunctionContainer uniform_candidate_sampler_fn get;

PythonFunctionContainer weighted_cross_entropy_with_logits_fn get;

PythonFunctionContainer weighted_moments_fn get;

PythonFunctionContainer with_space_to_batch_fn get;

PythonFunctionContainer xw_plus_b_fn get;

PythonFunctionContainer zero_fraction_fn get;