LostTech.TensorFlow : API Documentation

Type tf.image

Namespace tensorflow

Public static methods

object adjust_brightness(IGraphNodeBase image, IGraphNodeBase delta)

Adjust the brightness of RGB or Grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their brightness, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

The value `delta` is added to all components of the tensor `image`. `image` is converted to `float` and scaled appropriately if it is in fixed-point representation, and `delta` is converted to the same data type. For regular images, `delta` should be in the range `[0,1)`, as it is added to the image in floating point representation, where pixel values are in the `[0,1)` range.
Parameters
IGraphNodeBase image
RGB image or images to adjust.
IGraphNodeBase delta
A scalar. Amount to add to the pixel values.
Returns
object
A brightness-adjusted tensor of the same shape and type as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_brightness(x, delta=0.1)
```
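For fixed-point inputs, the scaling described above means `delta` is applied in the `[0, 1)` float domain. A minimal sketch of the round trip on a `uint8` image, using the TensorFlow Python API as in the usage examples:

```python
import tensorflow as tf

# uint8 pixels are first mapped to floats in [0, 1) (divide by 255),
# then delta is added, then the result is converted back to uint8.
img = tf.constant([[[10, 20, 30]]], dtype=tf.uint8)
brightened = tf.image.adjust_brightness(img, delta=0.1)
# Each channel increases by roughly 0.1 * 255 (about 25) in uint8 units.
```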

object adjust_brightness(IGraphNodeBase image, double delta)

Adjust the brightness of RGB or Grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their brightness, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

The value `delta` is added to all components of the tensor `image`. `image` is converted to `float` and scaled appropriately if it is in fixed-point representation, and `delta` is converted to the same data type. For regular images, `delta` should be in the range `[0,1)`, as it is added to the image in floating point representation, where pixel values are in the `[0,1)` range.
Parameters
IGraphNodeBase image
RGB image or images to adjust.
double delta
A scalar. Amount to add to the pixel values.
Returns
object
A brightness-adjusted tensor of the same shape and type as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_brightness(x, delta=0.1)
```

object adjust_brightness_dyn(object image, object delta)

Adjust the brightness of RGB or Grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their brightness, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

The value `delta` is added to all components of the tensor `image`. `image` is converted to `float` and scaled appropriately if it is in fixed-point representation, and `delta` is converted to the same data type. For regular images, `delta` should be in the range `[0,1)`, as it is added to the image in floating point representation, where pixel values are in the `[0,1)` range.
Parameters
object image
RGB image or images to adjust.
object delta
A scalar. Amount to add to the pixel values.
Returns
object
A brightness-adjusted tensor of the same shape and type as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_brightness(x, delta=0.1)
```

object adjust_contrast(IGraphNodeBase images, double contrast_factor)

Adjust contrast of RGB or grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`.
Parameters
IGraphNodeBase images
Images to adjust. At least 3-D.
double contrast_factor
A float multiplier for adjusting contrast.
Returns
object
The contrast-adjusted image or images.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_contrast(x, 2)
```
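To see the per-channel formula `(x - mean) * contrast_factor + mean` worked through, here is a minimal sketch on a single-channel float image (values chosen so the arithmetic is exact):

```python
import tensorflow as tf

# One 2x2 single-channel image; the channel mean is (1 + 3 + 5 + 7) / 4 = 4.
x = tf.constant([[[1.0], [3.0]],
                 [[5.0], [7.0]]])
y = tf.image.adjust_contrast(x, 2.0)
# (x - 4) * 2 + 4 yields [[-2, 2], [6, 10]]; float outputs are not clipped.
```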

object adjust_contrast(IGraphNodeBase images, IGraphNodeBase contrast_factor)

Adjust contrast of RGB or grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`.
Parameters
IGraphNodeBase images
Images to adjust. At least 3-D.
IGraphNodeBase contrast_factor
A float multiplier for adjusting contrast.
Returns
object
The contrast-adjusted image or images.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_contrast(x, 2)
```

object adjust_contrast(IGraphNodeBase images, IEnumerable<double> contrast_factor)

Adjust contrast of RGB or grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`.
Parameters
IGraphNodeBase images
Images to adjust. At least 3-D.
IEnumerable<double> contrast_factor
A float multiplier for adjusting contrast.
Returns
object
The contrast-adjusted image or images.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_contrast(x, 2)
```

object adjust_contrast(ValueTuple<PythonClassContainer, PythonClassContainer> images, IGraphNodeBase contrast_factor)

Adjust contrast of RGB or grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> images
Images to adjust. At least 3-D.
IGraphNodeBase contrast_factor
A float multiplier for adjusting contrast.
Returns
object
The contrast-adjusted image or images.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_contrast(x, 2)
```

object adjust_contrast(ValueTuple<PythonClassContainer, PythonClassContainer> images, IEnumerable<double> contrast_factor)

Adjust contrast of RGB or grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> images
Images to adjust. At least 3-D.
IEnumerable<double> contrast_factor
A float multiplier for adjusting contrast.
Returns
object
The contrast-adjusted image or images.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_contrast(x, 2)
```

object adjust_contrast(ValueTuple<PythonClassContainer, PythonClassContainer> images, double contrast_factor)

Adjust contrast of RGB or grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> images
Images to adjust. At least 3-D.
double contrast_factor
A float multiplier for adjusting contrast.
Returns
object
The contrast-adjusted image or images.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_contrast(x, 2)
```

object adjust_contrast_dyn(object images, object contrast_factor)

Adjust contrast of RGB or grayscale images.

This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`.
Parameters
object images
Images to adjust. At least 3-D.
object contrast_factor
A float multiplier for adjusting contrast.
Returns
object
The contrast-adjusted image or images.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_contrast(x, 2)
```

object adjust_gamma(IGraphNodeBase image, int gamma, int gain)

Performs Gamma Correction on the input image.

Also known as Power Law Transform. This function first converts the input images to float representation, then transforms them pixelwise according to the equation `Out = gain * In**gamma`, and finally converts them back to the original data type.
Returns
object
A `Tensor`. A Gamma-adjusted tensor of the same shape and type as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_gamma(x, 0.2)
```
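Since `Out = gain * In**gamma`, a gamma above 1 darkens an image whose values lie in `[0, 1]`, while a gamma below 1 brightens it. A small worked sketch:

```python
import tensorflow as tf

x = tf.constant([[[0.25, 0.5, 1.0]]])
# With gain = 1 (the default), each value v becomes v ** 2.
tf.image.adjust_gamma(x, gamma=2.0)  # -> [[[0.0625, 0.25, 1.0]]]
```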

object adjust_gamma(IGraphNodeBase image, double gamma, int gain)

Performs Gamma Correction on the input image.

Also known as Power Law Transform. This function first converts the input images to float representation, then transforms them pixelwise according to the equation `Out = gain * In**gamma`, and finally converts them back to the original data type.
Returns
object
A `Tensor`. A Gamma-adjusted tensor of the same shape and type as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_gamma(x, 0.2)
```

object adjust_gamma(IGraphNodeBase image, IGraphNodeBase gamma, int gain)

Performs Gamma Correction on the input image.

Also known as Power Law Transform. This function first converts the input images to float representation, then transforms them pixelwise according to the equation `Out = gain * In**gamma`, and finally converts them back to the original data type.
Returns
object
A `Tensor`. A Gamma-adjusted tensor of the same shape and type as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_gamma(x, 0.2)
```

object adjust_gamma_dyn(object image, ImplicitContainer<T> gamma, ImplicitContainer<T> gain)

Performs Gamma Correction on the input image.

Also known as Power Law Transform. This function first converts the input images to float representation, then transforms them pixelwise according to the equation `Out = gain * In**gamma`, and finally converts them back to the original data type.
Returns
object
A `Tensor`. A Gamma-adjusted tensor of the same shape and type as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_gamma(x, 0.2)
```

object adjust_hue(ValueTuple<PythonClassContainer, PythonClassContainer> image, double delta, string name)

Adjust hue of RGB images.

This is a convenience method that converts an RGB image to float representation, converts it to HSV, adds an offset to the hue channel, converts back to RGB, and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image. The image hue is adjusted by converting the image(s) to HSV and rotating the hue channel (H) by `delta`. The image is then converted back to RGB.

`delta` must be in the interval `[-1, 1]`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> image
RGB image or images. Size of the last dimension must be 3.
double delta
float. How much to add to the hue channel.
string name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_hue(x, 0.2)
```
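Because the hue channel (H) is rotated by `delta`, with 1.0 corresponding to a full rotation, shifting pure red by one third of a rotation yields pure green. A minimal sketch:

```python
import tensorflow as tf

red = tf.constant([[[1.0, 0.0, 0.0]]])  # H = 0 in HSV
# Rotating the hue by 1/3 moves red (H = 0) to green (H = 1/3).
tf.image.adjust_hue(red, delta=1.0 / 3.0)  # -> approximately [[[0, 1, 0]]]
```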

object adjust_hue(ValueTuple<PythonClassContainer, PythonClassContainer> image, IGraphNodeBase delta, string name)

Adjust hue of RGB images.

This is a convenience method that converts an RGB image to float representation, converts it to HSV, adds an offset to the hue channel, converts back to RGB, and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image. The image hue is adjusted by converting the image(s) to HSV and rotating the hue channel (H) by `delta`. The image is then converted back to RGB.

`delta` must be in the interval `[-1, 1]`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> image
RGB image or images. Size of the last dimension must be 3.
IGraphNodeBase delta
float. How much to add to the hue channel.
string name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_hue(x, 0.2)
```

object adjust_hue(IndexedSlices image, double delta, string name)

Adjust hue of RGB images.

This is a convenience method that converts an RGB image to float representation, converts it to HSV, adds an offset to the hue channel, converts back to RGB, and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image. The image hue is adjusted by converting the image(s) to HSV and rotating the hue channel (H) by `delta`. The image is then converted back to RGB.

`delta` must be in the interval `[-1, 1]`.
Parameters
IndexedSlices image
RGB image or images. Size of the last dimension must be 3.
double delta
float. How much to add to the hue channel.
string name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_hue(x, 0.2)
```

object adjust_hue(IndexedSlices image, IGraphNodeBase delta, string name)

Adjust hue of RGB images.

This is a convenience method that converts an RGB image to float representation, converts it to HSV, adds an offset to the hue channel, converts back to RGB, and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image. The image hue is adjusted by converting the image(s) to HSV and rotating the hue channel (H) by `delta`. The image is then converted back to RGB.

`delta` must be in the interval `[-1, 1]`.
Parameters
IndexedSlices image
RGB image or images. Size of the last dimension must be 3.
IGraphNodeBase delta
float. How much to add to the hue channel.
string name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_hue(x, 0.2)
```

object adjust_hue(IGraphNodeBase image, double delta, string name)

Adjust hue of RGB images.

This is a convenience method that converts an RGB image to float representation, converts it to HSV, adds an offset to the hue channel, converts back to RGB, and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image. The image hue is adjusted by converting the image(s) to HSV and rotating the hue channel (H) by `delta`. The image is then converted back to RGB.

`delta` must be in the interval `[-1, 1]`.
Parameters
IGraphNodeBase image
RGB image or images. Size of the last dimension must be 3.
double delta
float. How much to add to the hue channel.
string name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_hue(x, 0.2)
```

object adjust_jpeg_quality(IGraphNodeBase image, IGraphNodeBase jpeg_quality, string name)

Adjust jpeg encoding quality of an RGB image.

This is a convenience method that adjusts jpeg encoding quality of an RGB image.

`image` is an RGB image. The image's encoding quality is adjusted to `jpeg_quality`. `jpeg_quality` must be in the interval `[0, 100]`.
Parameters
IGraphNodeBase image
RGB image or images. Size of the last dimension must be 3.
IGraphNodeBase jpeg_quality
Python int or Tensor of type int32. jpeg encoding quality.
string name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_jpeg_quality(x, 75)
```
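The op round-trips the image through JPEG encoding, so lower `jpeg_quality` values introduce stronger compression artifacts. A minimal sketch, using a uniform random image so float pixel values stay in `[0, 1]`:

```python
import tensorflow as tf

x = tf.random.uniform(shape=(256, 256, 3))  # float pixels in [0, 1)
degraded = tf.image.adjust_jpeg_quality(x, jpeg_quality=25)
```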

object adjust_jpeg_quality_dyn(object image, object jpeg_quality, object name)

Adjust jpeg encoding quality of an RGB image.

This is a convenience method that adjusts jpeg encoding quality of an RGB image.

`image` is an RGB image. The image's encoding quality is adjusted to `jpeg_quality`. `jpeg_quality` must be in the interval `[0, 100]`.
Parameters
object image
RGB image or images. Size of the last dimension must be 3.
object jpeg_quality
Python int or Tensor of type int32. jpeg encoding quality.
object name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_jpeg_quality(x, 75)
```

object adjust_saturation(IGraphNodeBase image, double saturation_factor, string name)

Adjust saturation of RGB images.

This is a convenience method that converts RGB images to float representation, converts them to HSV, adds an offset to the saturation channel, converts back to RGB, and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image or images. The image saturation is adjusted by converting the images to HSV and multiplying the saturation (S) channel by `saturation_factor` and clipping. The images are then converted back to RGB.
Parameters
IGraphNodeBase image
RGB image or images. Size of the last dimension must be 3.
double saturation_factor
float. Factor to multiply the saturation by.
string name
A name for this operation (optional).
Returns
object
Adjusted image(s), same shape and DType as `image`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.adjust_saturation(x, 0.5)
```
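Since the saturation (S) channel is multiplied by `saturation_factor`, a factor of 0 removes all color and leaves a grayscale image. A minimal sketch:

```python
import tensorflow as tf

x = tf.constant([[[0.8, 0.2, 0.4]]])
# S is multiplied by 0, so every channel collapses to the HSV value V = max(R, G, B).
tf.image.adjust_saturation(x, 0.0)  # -> [[[0.8, 0.8, 0.8]]]
```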

object central_crop(ndarray image, double central_fraction)

Crop the central region of the image(s).

Remove the outer parts of an image but retain the central region of the image along each dimension. If we specify central_fraction = 0.5, this function returns the region marked with "X" in the below diagram.

```
 --------
|        |
|  XXXX  |
|  XXXX  |
|        |
 --------
```

where "X" is the central 50% of the image.

This function works on either a single image (`image` is a 3-D Tensor), or a batch of images (`image` is a 4-D Tensor).
Parameters
ndarray image
Either a 3-D float Tensor of shape [height, width, depth], or a 4-D Tensor of shape [batch_size, height, width, depth].
double central_fraction
A float in `(0, 1]`: the fraction of the size to crop.
Returns
object
3-D / 4-D float Tensor, as per the input.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.central_crop(x, 0.5)
```

object central_crop(IGraphNodeBase image, double central_fraction)

Crop the central region of the image(s).

Remove the outer parts of an image but retain the central region of the image along each dimension. If we specify central_fraction = 0.5, this function returns the region marked with "X" in the below diagram.

```
 --------
|        |
|  XXXX  |
|  XXXX  |
|        |
 --------
```

where "X" is the central 50% of the image.

This function works on either a single image (`image` is a 3-D Tensor), or a batch of images (`image` is a 4-D Tensor).
Parameters
IGraphNodeBase image
Either a 3-D float Tensor of shape [height, width, depth], or a 4-D Tensor of shape [batch_size, height, width, depth].
double central_fraction
A float in `(0, 1]`: the fraction of the size to crop.
Returns
object
3-D / 4-D float Tensor, as per the input.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.central_crop(x, 0.5)
```

object central_crop(ValueTuple<PythonClassContainer, PythonClassContainer> image, double central_fraction)

Crop the central region of the image(s).

Remove the outer parts of an image but retain the central region of the image along each dimension. If we specify central_fraction = 0.5, this function returns the region marked with "X" in the below diagram.

```
 --------
|        |
|  XXXX  |
|  XXXX  |
|        |
 --------
```

where "X" is the central 50% of the image.

This function works on either a single image (`image` is a 3-D Tensor), or a batch of images (`image` is a 4-D Tensor).
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> image
Either a 3-D float Tensor of shape [height, width, depth], or a 4-D Tensor of shape [batch_size, height, width, depth].
double central_fraction
A float in `(0, 1]`: the fraction of the size to crop.
Returns
object
3-D / 4-D float Tensor, as per the input.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.central_crop(x, 0.5)
```

object central_crop_dyn(object image, object central_fraction)

Crop the central region of the image(s).

Remove the outer parts of an image but retain the central region of the image along each dimension. If we specify central_fraction = 0.5, this function returns the region marked with "X" in the below diagram.

```
 --------
|        |
|  XXXX  |
|  XXXX  |
|        |
 --------
```

where "X" is the central 50% of the image.

This function works on either a single image (`image` is a 3-D Tensor), or a batch of images (`image` is a 4-D Tensor).
Parameters
object image
Either a 3-D float Tensor of shape [height, width, depth], or a 4-D Tensor of shape [batch_size, height, width, depth].
object central_fraction
A float in `(0, 1]`: the fraction of the size to crop.
Returns
object
3-D / 4-D float Tensor, as per the input.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3))
tf.image.central_crop(x, 0.5)
```

object combined_non_max_suppression(object boxes, object scores, IGraphNodeBase max_output_size_per_class, IGraphNodeBase max_total_size, double iou_threshold, IGraphNodeBase score_threshold, bool pad_per_class, bool clip_boxes, string name)

Greedily selects a subset of bounding boxes in descending order of score.

This operation performs non_max_suppression on the inputs per batch, across all classes. It prunes away boxes that have a high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system. Also note that this algorithm is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is the final boxes, scores and classes tensors returned after performing non_max_suppression.
Parameters
object boxes
A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1, the same boxes are used for all classes; otherwise, if `q` equals the number of classes, class-specific boxes are used.
object scores
A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size_per_class
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression per class.
IGraphNodeBase max_total_size
A scalar representing the maximum number of boxes retained over all classes.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_per_class
If false, the output nmsed boxes, scores and classes are padded/clipped to `max_total_size`. If true, they are padded to be of length `max_size_per_class * num_classes`, unless that exceeds `max_total_size`, in which case they are clipped to `max_total_size`. Defaults to false.
bool clip_boxes
If true, the coordinates of the output nmsed boxes will be clipped to [0, 1]. If false, the box coordinates are output as they are. Defaults to true.
string name
A name for the operation (optional).
Returns
object
'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor containing the non-max suppressed boxes.
'nmsed_scores': A [batch_size, max_detections] float32 tensor containing the scores for the boxes.
'nmsed_classes': A [batch_size, max_detections] float32 tensor containing the class for boxes.
'valid_detections': A [batch_size] int32 tensor indicating the number of valid detections per batch item. Only the top valid_detections[i] entries in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the entries are zero paddings.
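A minimal sketch through the TensorFlow Python API (shapes follow the parameter descriptions above; the two boxes below overlap heavily, so only one survives per class):

```python
import tensorflow as tf

# batch_size = 1, num_boxes = 2, q = 1 (boxes shared across classes), num_classes = 2.
boxes = tf.constant([[[[0.0, 0.0, 1.0, 1.0]],
                      [[0.0, 0.0, 0.9, 0.9]]]])  # shape [1, 2, 1, 4]
scores = tf.constant([[[0.9, 0.1],
                       [0.8, 0.2]]])             # shape [1, 2, 2]
nmsed_boxes, nmsed_scores, nmsed_classes, valid_detections = (
    tf.image.combined_non_max_suppression(
        boxes, scores,
        max_output_size_per_class=2,
        max_total_size=4,
        iou_threshold=0.5))
# The two boxes have IOU 0.81 > 0.5, so the lower-scoring one is suppressed.
```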

object combined_non_max_suppression(object boxes, object scores, IGraphNodeBase max_output_size_per_class, IGraphNodeBase max_total_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, bool pad_per_class, bool clip_boxes, string name)

Greedily selects a subset of bounding boxes in descending order of score.

This operation performs non_max_suppression on the inputs per batch, across all classes. It prunes away boxes that have a high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system. Also note that this algorithm is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is the final boxes, scores and classes tensors returned after performing non_max_suppression.
Parameters
object boxes
A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1, the same boxes are used for all classes; otherwise, if `q` equals the number of classes, class-specific boxes are used.
object scores
A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size_per_class
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression per class.
IGraphNodeBase max_total_size
A scalar representing the maximum number of boxes retained over all classes.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_per_class
If false, the output nmsed boxes, scores and classes are padded/clipped to `max_total_size`. If true, they are padded to be of length `max_size_per_class * num_classes`, unless that exceeds `max_total_size`, in which case they are clipped to `max_total_size`. Defaults to false.
bool clip_boxes
If true, the coordinates of the output nmsed boxes will be clipped to [0, 1]. If false, the box coordinates are output as they are. Defaults to true.
string name
A name for the operation (optional).
Returns
object
'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor containing the non-max suppressed boxes.
'nmsed_scores': A [batch_size, max_detections] float32 tensor containing the scores for the boxes.
'nmsed_classes': A [batch_size, max_detections] float32 tensor containing the class for boxes.
'valid_detections': A [batch_size] int32 tensor indicating the number of valid detections per batch item. Only the top valid_detections[i] entries in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the entries are zero paddings.

object combined_non_max_suppression(object boxes, object scores, IGraphNodeBase max_output_size_per_class, IGraphNodeBase max_total_size, IGraphNodeBase iou_threshold, IGraphNodeBase score_threshold, bool pad_per_class, bool clip_boxes, string name)

Greedily selects a subset of bounding boxes in descending order of score.

This operation performs non_max_suppression on the inputs per batch, across all classes. It prunes away boxes that have a high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system. Also note that this algorithm is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is the final boxes, scores and classes tensors returned after performing non_max_suppression.
Parameters
object boxes
A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1, the same boxes are used for all classes; otherwise, if `q` equals the number of classes, class-specific boxes are used.
object scores
A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size_per_class
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression per class.
IGraphNodeBase max_total_size
A scalar representing the maximum number of boxes retained over all classes.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_per_class
If false, the output nmsed boxes, scores and classes are padded/clipped to `max_total_size`. If true, they are padded to be of length `max_size_per_class * num_classes`, unless that exceeds `max_total_size`, in which case they are clipped to `max_total_size`. Defaults to false.
bool clip_boxes
If true, the coordinates of the output nmsed boxes will be clipped to [0, 1]. If false, the box coordinates are output as they are. Defaults to true.
string name
A name for the operation (optional).
Returns
object
'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor containing the non-max suppressed boxes.
'nmsed_scores': A [batch_size, max_detections] float32 tensor containing the scores for the boxes.
'nmsed_classes': A [batch_size, max_detections] float32 tensor containing the class for boxes.
'valid_detections': A [batch_size] int32 tensor indicating the number of valid detections per batch item. Only the top valid_detections[i] entries in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the entries are zero paddings.

object combined_non_max_suppression(object boxes, object scores, IGraphNodeBase max_output_size_per_class, IGraphNodeBase max_total_size, double iou_threshold, ImplicitContainer<T> score_threshold, bool pad_per_class, bool clip_boxes, string name)

Greedily selects a subset of bounding boxes in descending order of score.

This operation performs non_max_suppression on the inputs per batch, across all classes. It prunes away boxes that have a high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system. Also note that this algorithm is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is the final boxes, scores and classes tensors returned after performing non_max_suppression.
Parameters
object boxes
A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1, the same boxes are used for all classes; otherwise, if `q` equals the number of classes, class-specific boxes are used.
object scores
A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size_per_class
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression per class.
IGraphNodeBase max_total_size
A scalar representing the maximum number of boxes retained over all classes.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_per_class
If false, the output nmsed boxes, scores and classes are padded/clipped to `max_total_size`. If true, they are padded to be of length `max_size_per_class * num_classes`, unless that exceeds `max_total_size`, in which case they are clipped to `max_total_size`. Defaults to false.
bool clip_boxes
If true, the coordinates of the output nmsed boxes will be clipped to [0, 1]. If false, the box coordinates are output as they are. Defaults to true.
string name
A name for the operation (optional).
Returns
object
'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor containing the non-max suppressed boxes.
'nmsed_scores': A [batch_size, max_detections] float32 tensor containing the scores for the boxes.
'nmsed_classes': A [batch_size, max_detections] float32 tensor containing the class for boxes.
'valid_detections': A [batch_size] int32 tensor indicating the number of valid detections per batch item. Only the top valid_detections[i] entries in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the entries are zero paddings.

object combined_non_max_suppression_dyn(object boxes, object scores, object max_output_size_per_class, object max_total_size, ImplicitContainer<T> iou_threshold, ImplicitContainer<T> score_threshold, ImplicitContainer<T> pad_per_class, ImplicitContainer<T> clip_boxes, object name)

Greedily selects a subset of bounding boxes in descending order of score.

This operation performs non_max_suppression on the inputs per batch, across all classes. It prunes away boxes that have a high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system. Also note that this algorithm is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is the final boxes, scores and classes tensors returned after performing non_max_suppression.
Parameters
object boxes
A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1, the same boxes are used for all classes; otherwise, if `q` equals the number of classes, class-specific boxes are used.
object scores
A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]` representing a single score corresponding to each box (each row of boxes).
object max_output_size_per_class
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression per class.
object max_total_size
A scalar representing the maximum number of boxes retained over all classes.
ImplicitContainer<T> iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
ImplicitContainer<T> pad_per_class
If false, the output nmsed boxes, scores and classes are padded/clipped to `max_total_size`. If true, they are padded to be of length `max_size_per_class * num_classes`, unless that exceeds `max_total_size`, in which case they are clipped to `max_total_size`. Defaults to false.
ImplicitContainer<T> clip_boxes
If true, the coordinates of the output nmsed boxes will be clipped to [0, 1]. If false, the box coordinates are output as they are. Defaults to true.
object name
A name for the operation (optional).
Returns
object
'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor containing the non-max suppressed boxes.
'nmsed_scores': A [batch_size, max_detections] float32 tensor containing the scores for the boxes.
'nmsed_classes': A [batch_size, max_detections] float32 tensor containing the class for boxes.
'valid_detections': A [batch_size] int32 tensor indicating the number of valid detections per batch item. Only the top valid_detections[i] entries in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the entries are zero paddings.

object convert_image_dtype(object image, DType dtype, bool saturate, string name)

Convert `image` to `dtype`, scaling its values if needed.

Images that are represented using floating point values are expected to have values in the range [0,1). Image data stored in integer data types is expected to have values in the range `[0,MAX]`, where `MAX` is the largest positive representable number for the data type.

This op converts between data types, scaling the values appropriately before casting.

Note that converting from floating point inputs to integer types may lead to over/underflow problems. Set `saturate` to `True` to avoid such problems in problematic conversions. If enabled, saturation will clip the output into the allowed range before performing a potentially dangerous cast (and only before performing such a cast, i.e., when casting from a floating point to an integer type, and when casting from a signed to an unsigned type; `saturate` has no effect on casts between floats, or on casts that increase the type's range).
Parameters
object image
An image.
DType dtype
A `DType` to convert `image` to.
bool saturate
If `True`, clip the input before casting (if necessary).
string name
A name for this operation (optional).
Returns
object
`image`, converted to `dtype`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3), dtype=tf.float32)
tf.image.convert_image_dtype(x, dtype=tf.float16, saturate=False)
```
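To illustrate the saturation behavior described above, here is a small sketch converting out-of-range floats to `uint8`; without saturation the cast could overflow or underflow:

```python
import tensorflow as tf

x = tf.constant([-0.5, 0.5, 1.5])
# saturate=True clamps the scaled values into the uint8 range [0, 255]
# during the cast instead of letting them wrap around.
tf.image.convert_image_dtype(x, dtype=tf.uint8, saturate=True)
# -> approximately [0, 127, 255]
```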

object convert_image_dtype(object image, DType dtype, bool saturate, PythonFunctionContainer name)

Convert `image` to `dtype`, scaling its values if needed.

Images that are represented using floating point values are expected to have values in the range [0,1). Image data stored in integer data types is expected to have values in the range `[0,MAX]`, where `MAX` is the largest positive representable number for the data type.

This op converts between data types, scaling the values appropriately before casting.

Note that converting from floating point inputs to integer types may lead to over/underflow problems. Set `saturate` to `True` to avoid such problems in problematic conversions. If enabled, saturation will clip the output into the allowed range before performing a potentially dangerous cast (and only before performing such a cast, i.e., when casting from a floating point to an integer type, and when casting from a signed to an unsigned type; `saturate` has no effect on casts between floats, or on casts that increase the type's range).
Parameters
object image
An image.
DType dtype
A `DType` to convert `image` to.
bool saturate
If `True`, clip the input before casting (if necessary).
PythonFunctionContainer name
A name for this operation (optional).
Returns
object
`image`, converted to `dtype`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3), dtype=tf.float32)
tf.image.convert_image_dtype(x, dtype=tf.float16, saturate=False)
```

object convert_image_dtype_dyn(object image, object dtype, ImplicitContainer<T> saturate, object name)

Convert `image` to `dtype`, scaling its values if needed.

Images that are represented using floating point values are expected to have values in the range [0,1). Image data stored in integer data types is expected to have values in the range `[0,MAX]`, where `MAX` is the largest positive representable number for the data type.

This op converts between data types, scaling the values appropriately before casting.

Note that converting from floating point inputs to integer types may lead to over/underflow problems. Set `saturate` to `True` to avoid such problems in problematic conversions. If enabled, saturation will clip the output into the allowed range before performing a potentially dangerous cast (and only before performing such a cast, i.e., when casting from a floating point to an integer type, and when casting from a signed to an unsigned type; `saturate` has no effect on casts between floats, or on casts that increase the type's range).
Parameters
object image
An image.
object dtype
A `DType` to convert `image` to.
ImplicitContainer<T> saturate
If `True`, clip the input before casting (if necessary).
object name
A name for this operation (optional).
Returns
object
`image`, converted to `dtype`.

Usage Example:
```python
import tensorflow as tf

x = tf.random.normal(shape=(256, 256, 3), dtype=tf.float32)
tf.image.convert_image_dtype(x, dtype=tf.float16, saturate=False)
```

Tensor crop_and_resize(IGraphNodeBase image, IGraphNodeBase boxes, IGraphNodeBase box_ind, IEnumerable<object> crop_size, string method, int extrapolation_value, string name, object box_indices)

Extracts crops from the input image tensor and resizes them.

Extracts crops from the input image tensor and resizes them using bilinear sampling or nearest neighbor sampling (possibly with aspect ratio change) to a common output size specified by `crop_size`. This is more general than the `crop_to_bounding_box` op, which extracts a fixed-size slice from the input image and does not allow resizing or aspect ratio change.

Returns a tensor with `crops` from the input `image` at positions defined by the bounding box locations in `boxes`. The cropped boxes are all resized (with bilinear or nearest neighbor interpolation) to a fixed `size = [crop_height, crop_width]`. The result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned. In particular, if `boxes = [[0, 0, 1, 1]]`, the method gives identical results to using `tf.image.resize_bilinear()` or `tf.image.resize_nearest_neighbor()` (depending on the `method` argument) with `align_corners=True`.
Parameters
IGraphNodeBase image
A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. A 4-D tensor of shape `[batch, image_height, image_width, depth]`. Both `image_height` and `image_width` need to be positive.
IGraphNodeBase boxes
A `Tensor` of type `float32`. A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.
IGraphNodeBase box_ind
A `Tensor` of type `int32`. A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. The value of `box_ind[i]` specifies the image that the `i`-th box refers to.
IEnumerable<object> crop_size
A `Tensor` of type `int32`. A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both `crop_height` and `crop_width` need to be positive.
string method
An optional `string` from: `"bilinear"`, `"nearest"`. Defaults to `"bilinear"`. A string specifying the sampling method for resizing; currently bilinear and nearest neighbor sampling are supported.
int extrapolation_value
An optional `float`. Defaults to `0`. Value used for extrapolation, when applicable.
string name
A name for the operation (optional).
object box_indices
Returns
Tensor
A `Tensor` of type `float32`.
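A minimal sketch through the TensorFlow Python API, cropping one normalized box from the first (and only) image in a batch and resizing it to 2x2:

```python
import tensorflow as tf

image = tf.random.normal(shape=(1, 100, 100, 3))  # [batch, height, width, depth]
boxes = tf.constant([[0.1, 0.1, 0.5, 0.5]])       # [num_boxes, 4], normalized [y1, x1, y2, x2]
box_indices = tf.constant([0])                    # each box crops from image 0 in the batch
crops = tf.image.crop_and_resize(image, boxes, box_indices, crop_size=[2, 2])
# crops has shape [1, 2, 2, 3]
```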

object crop_and_resize_dyn(object image, object boxes, object box_ind, object crop_size, ImplicitContainer<T> method, ImplicitContainer<T> extrapolation_value, object name, object box_indices)

Extracts crops from the input image tensor and resizes them.

Extracts crops from the input image tensor and resizes them using bilinear sampling or nearest neighbor sampling (possibly with aspect ratio change) to a common output size specified by `crop_size`. This is more general than the `crop_to_bounding_box` op, which extracts a fixed-size slice from the input image and does not allow resizing or aspect ratio change.

Returns a tensor with `crops` from the input `image` at positions defined by the bounding box locations in `boxes`. The cropped boxes are all resized (with bilinear or nearest neighbor interpolation) to a fixed `size = [crop_height, crop_width]`. The result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned. In particular, if `boxes = [[0, 0, 1, 1]]`, the method gives identical results to using `tf.image.resize_bilinear()` or `tf.image.resize_nearest_neighbor()` (depending on the `method` argument) with `align_corners=True`.
Parameters
object image
A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. A 4-D tensor of shape `[batch, image_height, image_width, depth]`. Both `image_height` and `image_width` need to be positive.
object boxes
A `Tensor` of type `float32`. A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.
object box_ind
A `Tensor` of type `int32`. A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. The value of `box_ind[i]` specifies the image that the `i`-th box refers to.
object crop_size
A `Tensor` of type `int32`. A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both `crop_height` and `crop_width` need to be positive.
ImplicitContainer<T> method
An optional `string` from: `"bilinear"`, `"nearest"`. Defaults to `"bilinear"`. A string specifying the sampling method for resizing; currently bilinear and nearest neighbor sampling are supported.
ImplicitContainer<T> extrapolation_value
An optional `float`. Defaults to `0`. Value used for extrapolation, when applicable.
object name
A name for the operation (optional).
object box_indices
Returns
object
A `Tensor` of type `float32`.

Tensor crop_to_bounding_box(PythonClassContainer image, int offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.
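A minimal sketch through the TensorFlow Python API: cutting a 100x150 window whose top-left corner sits at row 10, column 20 of the input:

```python
import tensorflow as tf

image = tf.random.normal(shape=(256, 256, 3))
cropped = tf.image.crop_to_bounding_box(
    image, offset_height=10, offset_width=20,
    target_height=100, target_width=150)
# cropped has shape [100, 150, 3]
```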

Tensor crop_to_bounding_box(PythonClassContainer image, int offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IGraphNodeBase image, int offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(PythonClassContainer image, IGraphNodeBase offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IGraphNodeBase image, IGraphNodeBase offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(PythonClassContainer image, IGraphNodeBase offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IGraphNodeBase image, int offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IGraphNodeBase image, IGraphNodeBase offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(CompositeTensor image, IGraphNodeBase offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(CompositeTensor image, IGraphNodeBase offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(CompositeTensor image, int offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IEnumerable<PythonClassContainer> image, IGraphNodeBase offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IEnumerable<PythonClassContainer> image, IGraphNodeBase offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(CompositeTensor image, int offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IEnumerable<PythonClassContainer> image, int offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
IGraphNodeBase offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor crop_to_bounding_box(IEnumerable<PythonClassContainer> image, int offset_height, int offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Vertical coordinate of the top-left corner of the result in the input.
int offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

object crop_to_bounding_box_dyn(object image, object offset_height, object offset_width, object target_height, object target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of `image`. The top-left corner of the returned image is at `offset_height, offset_width` in `image`, and its lower-right corner is at `offset_height + target_height, offset_width + target_width`.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object offset_height
Vertical coordinate of the top-left corner of the result in the input.
object offset_width
Horizontal coordinate of the top-left corner of the result in the input.
object target_height
Height of the result.
object target_width
Width of the result.
Returns
object
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor draw_bounding_boxes(IGraphNodeBase images, IGraphNodeBase boxes, string name, ndarray colors)

Draw bounding boxes on a batch of images.

Outputs a copy of `images` but draws on top of the pixels zero or more bounding boxes specified by the locations in `boxes`. The coordinates of each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

For example, if an image is 100 x 200 pixels (height x width) and the bounding box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).

Parts of the bounding box may fall outside the image.
Parameters
IGraphNodeBase images
A `Tensor`. Must be one of the following types: `float32`, `half`. 4-D with shape `[batch, height, width, depth]`. A batch of images.
IGraphNodeBase boxes
A `Tensor` of type `float32`. 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding boxes.
string name
A name for the operation (optional).
ndarray colors
A 2-D float array. A list of RGBA colors to cycle through for the boxes.
Returns
Tensor
A `Tensor`. Has the same type as `images`.
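Usage Example (a minimal sketch with hypothetical values; `colors` is assumed to hold RGBA colors to cycle through):
```python
import tensorflow as tf

# One 100x200 (height x width) black image
images = tf.zeros((1, 100, 200, 3), dtype=tf.float32)
# One box per image, encoded as [y_min, x_min, y_max, x_max]
boxes = tf.constant([[[0.1, 0.2, 0.5, 0.9]]])
colors = tf.constant([[1.0, 0.0, 0.0, 1.0]])  # red, RGBA
annotated = tf.image.draw_bounding_boxes(images, boxes, colors=colors)
```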

Tensor encode_png(IGraphNodeBase image, int compression, string name)

PNG-encode an image.

`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` where `channels` is:

* 1: for grayscale.
* 2: for grayscale + alpha.
* 3: for RGB.
* 4: for RGBA.

The ZLIB compression level, `compression`, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.
Parameters
IGraphNodeBase image
A `Tensor`. Must be one of the following types: `uint8`, `uint16`. 3-D with shape `[height, width, channels]`.
int compression
An optional `int`. Defaults to `-1`. Compression level.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `string`.
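Usage Example (a minimal sketch; the file name is hypothetical):
```python
import tensorflow as tf

# A random 64x64 RGB image with uint8 pixel values
image = tf.cast(
    tf.random.uniform((64, 64, 3), maxval=256, dtype=tf.int32), tf.uint8)
png = tf.image.encode_png(image, compression=-1)  # scalar `string` tensor
tf.io.write_file('example.png', png)
```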

object encode_png_dyn(object image, ImplicitContainer<T> compression, object name)

PNG-encode an image.

`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` where `channels` is:

* 1: for grayscale.
* 2: for grayscale + alpha.
* 3: for RGB.
* 4: for RGBA.

The ZLIB compression level, `compression`, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.
Parameters
object image
A `Tensor`. Must be one of the following types: `uint8`, `uint16`. 3-D with shape `[height, width, channels]`.
ImplicitContainer<T> compression
An optional `int`. Defaults to `-1`. Compression level.
object name
A name for the operation (optional).
Returns
object
A `Tensor` of type `string`.

Tensor extract_glimpse(IGraphNodeBase input, IGraphNodeBase size, IGraphNodeBase offsets, bool centered, bool normalized, bool uniform_noise, string name)

Extracts a glimpse from the input tensor.

Returns a set of windows called glimpses extracted at location `offsets` from the input tensor. If a window only partially overlaps the input, the non-overlapping areas are filled with random noise.

The result is a 4-D tensor of shape `[batch_size, glimpse_height, glimpse_width, channels]`. The channels and batch dimensions are the same as those of the input tensor. The height and width of the output windows are specified in the `size` parameter.

The arguments `normalized` and `centered` control how the windows are built:

* If the coordinates are normalized but not centered, 0.0 and 1.0 correspond to the minimum and maximum of each height and width dimension.
* If the coordinates are both normalized and centered, they range from -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper-left corner, the lower-right corner is located at (1.0, 1.0), and the center is at (0, 0).
* If the coordinates are not normalized, they are interpreted as numbers of pixels.
Parameters
IGraphNodeBase input
A `Tensor` of type `float32`. A 4-D float tensor of shape `[batch_size, height, width, channels]`.
IGraphNodeBase size
A `Tensor` of type `int32`. A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, followed by the glimpse width.
IGraphNodeBase offsets
A `Tensor` of type `float32`. A 2-D tensor of shape `[batch_size, 2]` containing the y, x locations of the center of each window.
bool centered
An optional `bool`. Defaults to `True`. Indicates whether the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0, 0) offset corresponds to the upper-left corner of the input images.
bool normalized
An optional `bool`. Defaults to `True`. Indicates whether the offset coordinates are normalized.
bool uniform_noise
An optional `bool`. Defaults to `True`. Indicates whether the noise should be generated using a uniform distribution or a Gaussian distribution.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor` of type `float32`.

Usage Example:
```python
import tensorflow as tf

BATCH_SIZE = 1
IMAGE_HEIGHT = 3
IMAGE_WIDTH = 3
CHANNELS = 1
GLIMPSE_SIZE = (2, 2)
image = tf.reshape(tf.range(9, delta=1, dtype=tf.float32),
                   shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
output = tf.image.extract_glimpse(image, size=GLIMPSE_SIZE, offsets=[[1, 1]],
                                  centered=False, normalized=False)
```

Tensor extract_patches(object images, object sizes, object strides, object rates, object padding, string name)

Extract `patches` from `images`.

This op collects patches from the input image, as if applying a convolution. All extracted patches are stacked in the depth (last) dimension of the output.

Specifically, the op extracts patches of shape `sizes` which are `strides` apart in the input image. The output is subsampled using the `rates` argument, in the same manner as "atrous" or "dilated" convolutions.

The result is a 4D tensor which is indexed by batch, row, and column. `output[i, x, y]` contains a flattened patch of size `sizes[1], sizes[2]` which is taken from the input starting at `images[i, x*strides[1], y*strides[2]]`.

Each output patch can be reshaped to `sizes[1], sizes[2], depth`, where `depth` is `images.shape[3]`.

The output elements are taken from the input at intervals given by the `rate` argument, as in dilated convolutions.

The `padding` argument has no effect on the size of each patch; it determines how many patches are extracted. If `VALID`, only patches which are fully contained in the input image are included. If `SAME`, all patches whose starting point is inside the input are included, and areas outside the input default to zero.

Example:

```
n = 10
# images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100
images = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]]

# We generate two outputs as follows:
# 1. 3x3 patches with stride length 5
# 2. Same as above, but the rate is increased to 2
tf.image.extract_patches(images=images,
                         sizes=[1, 3, 3, 1],
                         strides=[1, 5, 5, 1],
                         rates=[1, 1, 1, 1],
                         padding='VALID')

# Yields:
# [[[[ 1  2  3 11 12 13 21 22 23]
#    [ 6  7  8 16 17 18 26 27 28]]
#   [[51 52 53 61 62 63 71 72 73]
#    [56 57 58 66 67 68 76 77 78]]]]
```

If we mark the pixels in the input image which are taken for the output with `*`, we see the pattern:

```
 *  *  *  4  5  *  *  *  9  10
 *  *  * 14 15  *  *  * 19  20
 *  *  * 24 25  *  *  * 29  30
31 32 33 34 35 36 37 38 39  40
41 42 43 44 45 46 47 48 49  50
 *  *  * 54 55  *  *  * 59  60
 *  *  * 64 65  *  *  * 69  70
 *  *  * 74 75  *  *  * 79  80
81 82 83 84 85 86 87 88 89  90
91 92 93 94 95 96 97 98 99 100
```

```
tf.image.extract_patches(images=images,
                         sizes=[1, 3, 3, 1],
                         strides=[1, 5, 5, 1],
                         rates=[1, 2, 2, 1],
                         padding='VALID')

# Yields:
# [[[[  1   3   5  21  23  25  41  43  45]
#    [  6   8  10  26  28  30  46  48  50]]
#   [[ 51  53  55  71  73  75  91  93  95]
#    [ 56  58  60  76  78  80  96  98 100]]]]
```

We can again draw the effect, this time using the symbols `*`, `x`, `+` and `o` to distinguish the patches:

```
 *  2  *  4  *  x  7  x  9  x
11 12 13 14 15 16 17 18 19 20
 * 22  * 24  *  x 27  x 29  x
31 32 33 34 35 36 37 38 39 40
 * 42  * 44  *  x 47  x 49  x
 + 52  + 54  +  o 57  o 59  o
61 62 63 64 65 66 67 68 69 70
 + 72  + 74  +  o 77  o 79  o
81 82 83 84 85 86 87 88 89 90
 + 92  + 94  +  o 97  o 99  o
```
Parameters
object images
A 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
object sizes
The size of the extracted patches. Must be `[1, size_rows, size_cols, 1]`.
object strides
A 1-D Tensor of length 4. How far the centers of two consecutive patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`.
object rates
A 1-D Tensor of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling them spatially by a factor of `rates`. This is equivalent to `rate` in dilated (a.k.a. Atrous) convolutions.
object padding
The type of padding algorithm to use.
string name
A name for the operation (optional).
Returns
Tensor
A 4-D Tensor of the same type as the input.

object extract_patches_dyn(object images, object sizes, object strides, object rates, object padding, object name)

Extract `patches` from `images`.

This op collects patches from the input image, as if applying a convolution. All extracted patches are stacked in the depth (last) dimension of the output.

Specifically, the op extracts patches of shape `sizes` which are `strides` apart in the input image. The output is subsampled using the `rates` argument, in the same manner as "atrous" or "dilated" convolutions.

The result is a 4D tensor which is indexed by batch, row, and column. `output[i, x, y]` contains a flattened patch of size `sizes[1], sizes[2]` which is taken from the input starting at `images[i, x*strides[1], y*strides[2]]`.

Each output patch can be reshaped to `sizes[1], sizes[2], depth`, where `depth` is `images.shape[3]`.

The output elements are taken from the input at intervals given by the `rate` argument, as in dilated convolutions.

The `padding` argument has no effect on the size of each patch; it determines how many patches are extracted. If `VALID`, only patches which are fully contained in the input image are included. If `SAME`, all patches whose starting point is inside the input are included, and areas outside the input default to zero.

Example:

```
n = 10
# images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100
images = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]]

# We generate two outputs as follows:
# 1. 3x3 patches with stride length 5
# 2. Same as above, but the rate is increased to 2
tf.image.extract_patches(images=images,
                         sizes=[1, 3, 3, 1],
                         strides=[1, 5, 5, 1],
                         rates=[1, 1, 1, 1],
                         padding='VALID')

# Yields:
# [[[[ 1  2  3 11 12 13 21 22 23]
#    [ 6  7  8 16 17 18 26 27 28]]
#   [[51 52 53 61 62 63 71 72 73]
#    [56 57 58 66 67 68 76 77 78]]]]
```

If we mark the pixels in the input image which are taken for the output with `*`, we see the pattern:

```
 *  *  *  4  5  *  *  *  9  10
 *  *  * 14 15  *  *  * 19  20
 *  *  * 24 25  *  *  * 29  30
31 32 33 34 35 36 37 38 39  40
41 42 43 44 45 46 47 48 49  50
 *  *  * 54 55  *  *  * 59  60
 *  *  * 64 65  *  *  * 69  70
 *  *  * 74 75  *  *  * 79  80
81 82 83 84 85 86 87 88 89  90
91 92 93 94 95 96 97 98 99 100
```

```
tf.image.extract_patches(images=images,
                         sizes=[1, 3, 3, 1],
                         strides=[1, 5, 5, 1],
                         rates=[1, 2, 2, 1],
                         padding='VALID')

# Yields:
# [[[[  1   3   5  21  23  25  41  43  45]
#    [  6   8  10  26  28  30  46  48  50]]
#   [[ 51  53  55  71  73  75  91  93  95]
#    [ 56  58  60  76  78  80  96  98 100]]]]
```

We can again draw the effect, this time using the symbols `*`, `x`, `+` and `o` to distinguish the patches:

```
 *  2  *  4  *  x  7  x  9  x
11 12 13 14 15 16 17 18 19 20
 * 22  * 24  *  x 27  x 29  x
31 32 33 34 35 36 37 38 39 40
 * 42  * 44  *  x 47  x 49  x
 + 52  + 54  +  o 57  o 59  o
61 62 63 64 65 66 67 68 69 70
 + 72  + 74  +  o 77  o 79  o
81 82 83 84 85 86 87 88 89 90
 + 92  + 94  +  o 97  o 99  o
```
Parameters
object images
A 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
object sizes
The size of the extracted patches. Must be `[1, size_rows, size_cols, 1]`.
object strides
A 1-D Tensor of length 4. How far the centers of two consecutive patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`.
object rates
A 1-D Tensor of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling them spatially by a factor of `rates`. This is equivalent to `rate` in dilated (a.k.a. Atrous) convolutions.
object padding
The type of padding algorithm to use.
object name
A name for the operation (optional).
Returns
object
A 4-D Tensor of the same type as the input.

Tensor flip_left_right(IGraphNodeBase image)

Flip an image horizontally (left to right).

Outputs the contents of `image` flipped along the width dimension.

See also `reverse()`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
Returns
Tensor
A tensor of the same type and shape as `image`.
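Usage Example (a minimal sketch on a 2x2 single-channel image):
```python
import tensorflow as tf

x = tf.constant([[[1.0], [2.0]],
                 [[3.0], [4.0]]])  # shape (2, 2, 1)
flipped = tf.image.flip_left_right(x)
# flipped[:, :, 0] == [[2., 1.],
#                      [4., 3.]]
```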

object flip_left_right_dyn(object image)

Flip an image horizontally (left to right).

Outputs the contents of `image` flipped along the width dimension.

See also `reverse()`.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
Returns
object
A tensor of the same type and shape as `image`.

Tensor flip_up_down(IGraphNodeBase image)

Flip an image vertically (upside down).

Outputs the contents of `image` flipped along the height dimension.

See also `reverse()`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
Returns
Tensor
A `Tensor` of the same type and shape as `image`.
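Usage Example (a minimal sketch on the same kind of 2x2 single-channel image):
```python
import tensorflow as tf

x = tf.constant([[[1.0], [2.0]],
                 [[3.0], [4.0]]])  # shape (2, 2, 1)
flipped = tf.image.flip_up_down(x)
# flipped[:, :, 0] == [[3., 4.],
#                      [1., 2.]]
```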

object flip_up_down_dyn(object image)

Flip an image vertically (upside down).

Outputs the contents of `image` flipped along the height dimension.

See also `reverse()`.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
Returns
object
A `Tensor` of the same type and shape as `image`.

Tensor grayscale_to_rgb(IGraphNodeBase images, string name)

Converts one or more images from Grayscale to RGB.

Outputs a tensor of the same `DType` and rank as `images`. The size of the last dimension of the output is 3, containing the RGB value of the pixels. The input images' last dimension must be size 1.
Parameters
IGraphNodeBase images
The Grayscale tensor to convert. Last dimension must be size 1.
string name
A name for the operation (optional).
Returns
Tensor
The converted RGB image(s).
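Usage Example (a minimal sketch):
```python
import tensorflow as tf

gray = tf.random.uniform((64, 64, 1))  # last dimension must be size 1
rgb = tf.image.grayscale_to_rgb(gray)
print(rgb.shape)  # (64, 64, 3); the gray value is replicated across R, G, B
```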

object grayscale_to_rgb_dyn(object images, object name)

Converts one or more images from Grayscale to RGB.

Outputs a tensor of the same `DType` and rank as `images`. The size of the last dimension of the output is 3, containing the RGB value of the pixels. The input images' last dimension must be size 1.
Parameters
object images
The Grayscale tensor to convert. Last dimension must be size 1.
object name
A name for the operation (optional).
Returns
object
The converted RGB image(s).

Tensor hsv_to_rgb(IGraphNodeBase images, string name)

Convert one or more images from HSV to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.

See `rgb_to_hsv` for a description of the HSV encoding.
Parameters
IGraphNodeBase images
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 1-D or higher rank. HSV data to convert. Last dimension must be size 3.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `images`.
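Usage Example (a minimal sketch; pure red expressed in HSV):
```python
import tensorflow as tf

hsv = tf.constant([[[0.0, 1.0, 1.0]]])  # hue=0, saturation=1, value=1
rgb = tf.image.hsv_to_rgb(hsv)          # [[[1., 0., 0.]]]
```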

object hsv_to_rgb_dyn(object images, object name)

Convert one or more images from HSV to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.

See `rgb_to_hsv` for a description of the HSV encoding.
Parameters
object images
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 1-D or higher rank. HSV data to convert. Last dimension must be size 3.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `images`.

ValueTuple<object, object> image_gradients(IGraphNodeBase image)

Returns image gradients (dy, dx) for each color channel.

Both output tensors have the same shape as the input: [batch_size, h, w, d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in location (x, y). That means that dy will always have zeros in the last row, and dx will always have zeros in the last column.
Parameters
IGraphNodeBase image
Tensor with shape [batch_size, h, w, d].
Returns
ValueTuple<object, object>
Pair of tensors (dy, dx) holding the vertical and horizontal image gradients (1-step finite difference).

Usage Example:
```python
import tensorflow as tf

BATCH_SIZE = 1
IMAGE_HEIGHT = 5
IMAGE_WIDTH = 5
CHANNELS = 1
image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS,
                            delta=1, dtype=tf.float32),
                   shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
dy, dx = tf.image.image_gradients(image)  # returned in (dy, dx) order
print(image[0, :, :, 0])
# tf.Tensor(
# [[ 0.  1.  2.  3.  4.]
#  [ 5.  6.  7.  8.  9.]
#  [10. 11. 12. 13. 14.]
#  [15. 16. 17. 18. 19.]
#  [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32)
print(dy[0, :, :, 0])  # vertical gradient: zeros in the last row
# tf.Tensor(
# [[5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32)
print(dx[0, :, :, 0])  # horizontal gradient: zeros in the last column
# tf.Tensor(
# [[1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32)
```

ValueTuple<object, object> image_gradients(IEnumerable<object> image)

Returns image gradients (dy, dx) for each color channel.

Both output tensors have the same shape as the input: [batch_size, h, w, d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in location (x, y). That means that dy will always have zeros in the last row, and dx will always have zeros in the last column.
Parameters
IEnumerable<object> image
Tensor with shape [batch_size, h, w, d].
Returns
ValueTuple<object, object>
Pair of tensors (dy, dx) holding the vertical and horizontal image gradients (1-step finite difference).

Usage Example:
```python
import tensorflow as tf

BATCH_SIZE = 1
IMAGE_HEIGHT = 5
IMAGE_WIDTH = 5
CHANNELS = 1
image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS,
                            delta=1, dtype=tf.float32),
                   shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
dy, dx = tf.image.image_gradients(image)  # returned in (dy, dx) order
print(image[0, :, :, 0])
# tf.Tensor(
# [[ 0.  1.  2.  3.  4.]
#  [ 5.  6.  7.  8.  9.]
#  [10. 11. 12. 13. 14.]
#  [15. 16. 17. 18. 19.]
#  [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32)
print(dy[0, :, :, 0])  # vertical gradient: zeros in the last row
# tf.Tensor(
# [[5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32)
print(dx[0, :, :, 0])  # horizontal gradient: zeros in the last column
# tf.Tensor(
# [[1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32)
```

object image_gradients_dyn(object image)

Returns image gradients (dy, dx) for each color channel.

Both output tensors have the same shape as the input: [batch_size, h, w, d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in location (x, y). That means that dy will always have zeros in the last row, and dx will always have zeros in the last column.
Parameters
object image
Tensor with shape [batch_size, h, w, d].
Returns
object
Pair of tensors (dy, dx) holding the vertical and horizontal image gradients (1-step finite difference).

Usage Example:
```python
import tensorflow as tf

BATCH_SIZE = 1
IMAGE_HEIGHT = 5
IMAGE_WIDTH = 5
CHANNELS = 1
image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS,
                            delta=1, dtype=tf.float32),
                   shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
dy, dx = tf.image.image_gradients(image)  # returned in (dy, dx) order
print(image[0, :, :, 0])
# tf.Tensor(
# [[ 0.  1.  2.  3.  4.]
#  [ 5.  6.  7.  8.  9.]
#  [10. 11. 12. 13. 14.]
#  [15. 16. 17. 18. 19.]
#  [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32)
print(dy[0, :, :, 0])  # vertical gradient: zeros in the last row
# tf.Tensor(
# [[5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [5. 5. 5. 5. 5.]
#  [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32)
print(dx[0, :, :, 0])  # horizontal gradient: zeros in the last column
# tf.Tensor(
# [[1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32)
```

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, int max_output_size, double iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
int max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, int max_output_size, IEnumerable<object> iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
int max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IEnumerable<object> iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, double iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, IEnumerable<int> max_output_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IEnumerable<int> max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, int max_output_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
int max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, IEnumerable<int> max_output_size, IEnumerable<object> iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IEnumerable<int> max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IEnumerable<object> iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, IEnumerable<int> max_output_size, double iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IEnumerable<int> max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

Tensor non_max_suppression(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IEnumerable<object> iou_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IEnumerable<object> iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

object non_max_suppression_dyn(object boxes, object scores, object max_output_size, ImplicitContainer<T> iou_threshold, ImplicitContainer<T> score_threshold, object name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
object boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
object scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
object max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
ImplicitContainer<T> iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
object name
A name for the operation (optional).
Returns
object

Show Example
selected_indices = tf.image.non_max_suppression(
                boxes, scores, max_output_size, iou_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 
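A concrete, runnable variant of the example above, with hypothetical boxes and scores:
```python
import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.1, 1.0, 1.1],   # overlaps the first box heavily
                     [0.5, 0.5, 1.0, 1.0]])  # mostly distinct
scores = tf.constant([0.9, 0.8, 0.7])
selected_indices = tf.image.non_max_suppression(
    boxes, scores, max_output_size=3, iou_threshold=0.5)
selected_boxes = tf.gather(boxes, selected_indices)
# selected_indices == [0, 2]; box 1 is suppressed by box 0 (IOU ~ 0.82)
```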

Tensor non_max_suppression_overlaps(IGraphNodeBase overlaps, IGraphNodeBase scores, IGraphNodeBase max_output_size, double overlap_threshold, ImplicitContainer<T> score_threshold, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high overlap with previously selected boxes. Overlap values are supplied as an n-by-n square matrix. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
IGraphNodeBase overlaps
A 2-D float `Tensor` of shape `[num_boxes, num_boxes]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double overlap_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to the provided overlap values.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
string name
A name for the operation (optional).
Returns
Tensor

Show Example
selected_indices = tf.image.non_max_suppression_overlaps(
                overlaps, scores, max_output_size, overlap_threshold)
            selected_boxes = tf.gather(boxes, selected_indices) 

object non_max_suppression_overlaps_dyn(object overlaps, object scores, object max_output_size, ImplicitContainer<T> overlap_threshold, ImplicitContainer<T> score_threshold, object name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high overlap with previously selected boxes. Overlap values are supplied as an n-by-n square matrix. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation.
Parameters
object overlaps
A 2-D float `Tensor` of shape `[num_boxes, num_boxes]`.
object scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
object max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
ImplicitContainer<T> overlap_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to the provided overlap values.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
object name
A name for the operation (optional).
Returns
object
A 1-D integer `Tensor` of shape `[M]` representing the selected indices from the `overlaps` tensor, where `M <= max_output_size`.

Show Example
selected_indices = tf.image.non_max_suppression_overlaps(
    overlaps, scores, max_output_size, overlap_threshold)
selected_boxes = tf.gather(boxes, selected_indices)

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, double iou_threshold, ImplicitContainer<T> score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)
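
A self-contained sketch (box values illustrative) showing how the fixed-size output and the valid count fit together:
```python
import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.1, 1.0, 1.1],
                     [0.5, 0.5, 1.5, 1.5]])
scores = tf.constant([0.9, 0.8, 0.6])
# The output always has static shape [5]; only the first `num_valid`
# entries are meaningful, the remainder is zero padding.
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size=5, iou_threshold=0.5,
    pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)
```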

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, int max_output_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
int max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, int max_output_size, double iou_threshold, IGraphNodeBase score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
int max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, int max_output_size, double iou_threshold, ImplicitContainer<T> score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
int max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, double iou_threshold, IGraphNodeBase score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IGraphNodeBase iou_threshold, IGraphNodeBase score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, int max_output_size, IGraphNodeBase iou_threshold, IGraphNodeBase score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
int max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

Tensor non_max_suppression_padded(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, bool pad_to_max_output_size, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
bool pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
string name
A name for the operation (optional).
Returns
Tensor
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

object non_max_suppression_padded_dyn(object boxes, object scores, object max_output_size, ImplicitContainer<T> iou_threshold, ImplicitContainer<T> score_threshold, ImplicitContainer<T> pad_to_max_output_size, object name)

Greedily selects a subset of bounding boxes in descending order of score.

Performs an operation algorithmically equivalent to tf.image.non_max_suppression, with the addition of an optional parameter which zero-pads the output to size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes (representing the selected boxes) and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.slice and tf.gather operations.
Parameters
object boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
object scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
object max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
ImplicitContainer<T> iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
ImplicitContainer<T> pad_to_max_output_size
bool. If true, the `selected_indices` output is zero-padded to size `max_output_size`.
object name
A name for the operation (optional).
Returns
object
The selected indices from the `boxes` tensor as a 1-D integer `Tensor` (zero-padded to length `max_output_size` when `pad_to_max_output_size` is true), together with a scalar integer `Tensor` giving the number of valid indices; valid entries come first, followed by padding.

Show Example
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes, scores, max_output_size, iou_threshold,
    score_threshold, pad_to_max_output_size=True)
selected_indices = tf.slice(
    selected_indices_padded, tf.constant([0]), tf.expand_dims(num_valid, 0))
selected_boxes = tf.gather(boxes, selected_indices)

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IGraphNodeBase iou_threshold, IGraphNodeBase score_threshold, double soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
double soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)
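
A self-contained sketch of the Soft-NMS mode (box values illustrative): with `soft_nms_sigma > 0`, heavily overlapping boxes are down-weighted rather than dropped outright, and `selected_scores` carries the decayed scores:
```python
import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.1, 1.0, 1.1],
                     [0.5, 0.5, 1.5, 1.5]])
scores = tf.constant([0.9, 0.8, 0.6])
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size=3, iou_threshold=1.0,
    score_threshold=0.1, soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)
```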

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, IGraphNodeBase soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
IGraphNodeBase soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IGraphNodeBase iou_threshold, ImplicitContainer<T> score_threshold, double soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
double soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, double iou_threshold, IGraphNodeBase score_threshold, IGraphNodeBase soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
IGraphNodeBase soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, double iou_threshold, IGraphNodeBase score_threshold, double soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
double soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, double iou_threshold, ImplicitContainer<T> score_threshold, IGraphNodeBase soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
IGraphNodeBase soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, double iou_threshold, ImplicitContainer<T> score_threshold, double soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
double iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
double soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

ValueTuple<object, object> non_max_suppression_with_scores(IGraphNodeBase boxes, IGraphNodeBase scores, IGraphNodeBase max_output_size, IGraphNodeBase iou_threshold, IGraphNodeBase score_threshold, IGraphNodeBase soft_nms_sigma, string name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
IGraphNodeBase boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
IGraphNodeBase scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
IGraphNodeBase max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
IGraphNodeBase iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
IGraphNodeBase score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
IGraphNodeBase soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
string name
A name for the operation (optional).
Returns
ValueTuple<object, object>
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

object non_max_suppression_with_scores_dyn(object boxes, object scores, object max_output_size, ImplicitContainer<T> iou_threshold, ImplicitContainer<T> score_threshold, ImplicitContainer<T> soft_nms_sigma, object name)

Greedily selects a subset of bounding boxes in descending order of score.

Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners; the coordinates can be provided either normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of it, so translations or reflections of the coordinate system result in the same boxes being selected. The output of this operation is a set of integers indexing into the input collection of bounding boxes, representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. This function generalizes the tf.image.non_max_suppression op by also supporting a Soft-NMS (with Gaussian weighting) mode (cf. Bodla et al., https://arxiv.org/abs/1704.04503) in which boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to tf.image.non_max_suppression, `tf.image.non_max_suppression_with_scores` returns the new scores of each input box in the second output, `selected_scores`.

To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of `tf.image.non_max_suppression_with_scores` is identical to that of tf.image.non_max_suppression (except for the extra output), both in function and in running time.
Parameters
object boxes
A 2-D float `Tensor` of shape `[num_boxes, 4]`.
object scores
A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
object max_output_size
A scalar integer `Tensor` representing the maximum number of boxes to be selected by non max suppression.
ImplicitContainer<T> iou_threshold
A float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
ImplicitContainer<T> score_threshold
A float representing the threshold for deciding when to remove boxes based on score.
ImplicitContainer<T> soft_nms_sigma
A scalar float representing the Soft NMS sigma parameter; see Bodla et al., https://arxiv.org/abs/1704.04503. When `soft_nms_sigma=0.0` (the default), standard (hard) NMS is performed.
object name
A name for the operation (optional).
Returns
object
A 1-D integer `Tensor` of the selected indices from the `boxes` tensor and a 1-D float `Tensor` of the corresponding (possibly Soft-NMS-rescored) scores, each of length `M <= max_output_size`.

Show Example
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,
    soft_nms_sigma=0.5)
selected_boxes = tf.gather(boxes, selected_indices)

Tensor pad_to_bounding_box(IEnumerable<int> image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.
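
A minimal sketch of the padding semantics (sizes illustrative):
```python
import tensorflow as tf

# A 2x2 single-channel image of ones.
image = tf.ones([2, 2, 1])
# One row of zeros on top, one column of zeros on the left, then
# zero-padding on the bottom and right until the result is 4x4.
padded = tf.image.pad_to_bounding_box(
    image, offset_height=1, offset_width=1,
    target_height=4, target_width=4)
# `padded` has shape [4, 4, 1]; the original pixels occupy
# rows 1..2 and columns 1..2.
```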

Tensor pad_to_bounding_box(IGraphNodeBase image, IGraphNodeBase offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, IGraphNodeBase offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, int offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IGraphNodeBase offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IGraphNodeBase offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IGraphNodeBase offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, int offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`; if `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, int offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, IndexedSlices offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, IndexedSlices offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IGraphNodeBase offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, IndexedSlices offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, int offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IndexedSlices offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, int offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IndexedSlices offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IndexedSlices offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, IndexedSlices offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, int offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, int offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, IGraphNodeBase offset_height, ValueTuple<PythonClassContainer, PythonClassContainer> offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, ValueTuple<PythonClassContainer, PythonClassContainer> offset_height, IndexedSlices offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ValueTuple<PythonClassContainer, PythonClassContainer> offset_height
Number of rows of zeros to add on top.
IndexedSlices offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, IGraphNodeBase offset_height, int offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase offset_height
Number of rows of zeros to add on top.
int offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IGraphNodeBase image, IndexedSlices offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IndexedSlices offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

Tensor pad_to_bounding_box(IEnumerable<int> image, int offset_height, IGraphNodeBase offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int offset_height
Number of rows of zeros to add on top.
IGraphNodeBase offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.

object pad_to_bounding_box_dyn(object image, object offset_height, object offset_width, object target_height, object target_width)

Pad `image` with zeros to the specified `height` and `width`.

Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`.

This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object offset_height
Number of rows of zeros to add on top.
object offset_width
Number of columns of zeros to add on the left.
object target_height
Height of output image.
object target_width
Width of output image.
Returns
object
If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]`.
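
Usage Example (a minimal sketch of the equivalent Python call; the 2x2 input and the offsets are illustrative):
```python
import tensorflow as tf
x = tf.ones(shape=(2, 2, 1))
# Place the 2x2 image at row offset 1, column offset 1 inside a 4x4 canvas.
padded = tf.image.pad_to_bounding_box(
    x, offset_height=1, offset_width=1, target_height=4, target_width=4)
# `padded` has shape (4, 4, 1): the ones occupy rows 1-2 and columns 1-2,
# and every other pixel is zero.
```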

Tensor per_image_standardization(IGraphNodeBase image)

Linearly scales each image in `image` to have mean 0 and variance 1.

For each 3-D image `x` in `image`, computes `(x - mean) / adjusted_stddev`, where

- `mean` is the average of all values in `x`
- `adjusted_stddev = max(stddev, 1.0/sqrt(N))` is capped away from 0 to protect against division by 0 when handling uniform images
- `N` is the number of elements in `x`
- `stddev` is the standard deviation of all values in `x`
Parameters
IGraphNodeBase image
An n-D Tensor with at least 3 dimensions, the last 3 of which are the dimensions of each image.
Returns
Tensor
A `Tensor` with the same shape and dtype as `image`.

object per_image_standardization_dyn(object image)

Linearly scales each image in `image` to have mean 0 and variance 1.

For each 3-D image `x` in `image`, computes `(x - mean) / adjusted_stddev`, where

- `mean` is the average of all values in `x`
- `adjusted_stddev = max(stddev, 1.0/sqrt(N))` is capped away from 0 to protect against division by 0 when handling uniform images
- `N` is the number of elements in `x`
- `stddev` is the standard deviation of all values in `x`
Parameters
object image
An n-D Tensor with at least 3 dimensions, the last 3 of which are the dimensions of each image.
Returns
object
A `Tensor` with the same shape and dtype as `image`.
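
Usage Example (a minimal sketch; the input shape and statistics are illustrative):
```python
import tensorflow as tf
x = tf.random.normal(shape=(256, 256, 3), mean=5.0, stddev=2.0)
standardized = tf.image.per_image_standardization(x)
# `standardized` has approximately zero mean and unit variance.
```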

Tensor psnr(ValueTuple<PythonClassContainer, PythonClassContainer> a, ValueTuple<PythonClassContainer, PythonClassContainer> b, int max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> a
First set of images.
ValueTuple<PythonClassContainer, PythonClassContainer> b
Second set of images.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

Tensor psnr(ValueTuple<PythonClassContainer, PythonClassContainer> a, ValueTuple<PythonClassContainer, PythonClassContainer> b, double max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> a
First set of images.
ValueTuple<PythonClassContainer, PythonClassContainer> b
Second set of images.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

Tensor psnr(IGraphNodeBase a, IGraphNodeBase b, double max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
IGraphNodeBase a
First set of images.
IGraphNodeBase b
Second set of images.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

Tensor psnr(IGraphNodeBase a, IGraphNodeBase b, int max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
IGraphNodeBase a
First set of images.
IGraphNodeBase b
Second set of images.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

Tensor psnr(IGraphNodeBase a, ValueTuple<PythonClassContainer, PythonClassContainer> b, double max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
IGraphNodeBase a
First set of images.
ValueTuple<PythonClassContainer, PythonClassContainer> b
Second set of images.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

Tensor psnr(IGraphNodeBase a, ValueTuple<PythonClassContainer, PythonClassContainer> b, int max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
IGraphNodeBase a
First set of images.
ValueTuple<PythonClassContainer, PythonClassContainer> b
Second set of images.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

Tensor psnr(ValueTuple<PythonClassContainer, PythonClassContainer> a, IGraphNodeBase b, double max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> a
First set of images.
IGraphNodeBase b
Second set of images.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

Tensor psnr(ValueTuple<PythonClassContainer, PythonClassContainer> a, IGraphNodeBase b, int max_val, string name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> a
First set of images.
IGraphNodeBase b
Second set of images.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
string name
Namespace to embed the computation in.
Returns
Tensor
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

object psnr_dyn(object a, object b, object max_val, object name)

Returns the Peak Signal-to-Noise Ratio between a and b.

This is intended to be used on signals (or images). Produces a PSNR value for each image in the batch.

The last three dimensions of input are expected to be [height, width, depth].

Example:
Parameters
object a
First set of images.
object b
Second set of images.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values).
object name
Namespace to embed the computation in.
Returns
object
The PSNR between a and b. The returned tensor has type tf.float32 and shape [batch_size, 1].
Show Example
# Read images from file.
im1 = tf.decode_png(tf.read_file('path/to/im1.png'))
im2 = tf.decode_png(tf.read_file('path/to/im2.png'))
# Compute PSNR over tf.uint8 Tensors.
psnr1 = tf.image.psnr(im1, im2, max_val=255)

# Compute PSNR over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
psnr2 = tf.image.psnr(im1, im2, max_val=1.0)
# psnr1 and psnr2 both have type tf.float32 and are almost equal.

object random_brightness(IGraphNodeBase image, int max_delta, object seed)

Adjust the brightness of images by a random factor.

Equivalent to `adjust_brightness()` using a `delta` randomly picked in the interval `[-max_delta, max_delta)`.
Parameters
IGraphNodeBase image
An image or images to adjust.
int max_delta
float, must be non-negative.
object seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
The brightness-adjusted image(s).

object random_brightness_dyn(object image, object max_delta, object seed)

Adjust the brightness of images by a random factor.

Equivalent to `adjust_brightness()` using a `delta` randomly picked in the interval `[-max_delta, max_delta)`.
Parameters
object image
An image or images to adjust.
object max_delta
float, must be non-negative.
object seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
The brightness-adjusted image(s).
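
Usage Example (a minimal sketch; `max_delta` and the seed are illustrative):
```python
import tensorflow as tf
x = tf.random.normal(shape=(256, 256, 3))
# Equivalent to adjust_brightness() with a delta drawn from [-0.2, 0.2).
tf.image.random_brightness(x, max_delta=0.2, seed=1)
```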

object random_contrast(IGraphNodeBase image, double lower, double upper, object seed)

Adjust the contrast of an image or images by a random factor.

Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly picked in the interval `[lower, upper]`.
Parameters
IGraphNodeBase image
An image tensor with 3 or more dimensions.
double lower
float. Lower bound for the random contrast factor.
double upper
float. Upper bound for the random contrast factor.
object seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
The contrast-adjusted image(s).

object random_contrast_dyn(object image, object lower, object upper, object seed)

Adjust the contrast of an image or images by a random factor.

Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly picked in the interval `[lower, upper]`.
Parameters
object image
An image tensor with 3 or more dimensions.
object lower
float. Lower bound for the random contrast factor.
object upper
float. Upper bound for the random contrast factor.
object seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
The contrast-adjusted image(s).
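
Usage Example (a minimal sketch; the bounds and the seed are illustrative):
```python
import tensorflow as tf
x = tf.random.normal(shape=(256, 256, 3))
# Equivalent to adjust_contrast() with a contrast_factor drawn from [0.2, 1.8].
tf.image.random_contrast(x, lower=0.2, upper=1.8, seed=1)
```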

object random_flip_left_right(IGraphNodeBase image, Nullable<int> seed)

Randomly flip an image horizontally (left to right).

With a 1 in 2 chance, outputs the contents of `image` flipped along the second dimension, which is `width`. Otherwise, outputs the image as-is.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
Nullable<int> seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
A tensor of the same type and shape as `image`.

object random_flip_left_right_dyn(object image, object seed)

Randomly flip an image horizontally (left to right).

With a 1 in 2 chance, outputs the contents of `image` flipped along the second dimension, which is `width`. Otherwise, outputs the image as-is.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
A tensor of the same type and shape as `image`.
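
Usage Example (a minimal sketch; the shape and the seed are illustrative):
```python
import tensorflow as tf
x = tf.random.normal(shape=(256, 256, 3))
# With probability 1/2, flips the image along its width dimension.
tf.image.random_flip_left_right(x, seed=1)
```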

object random_flip_up_down(IGraphNodeBase image, Nullable<int> seed)

Randomly flips an image vertically (upside down).

With a 1 in 2 chance, outputs the contents of `image` flipped along the first dimension, which is `height`. Otherwise, outputs the image as-is.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
Nullable<int> seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
A tensor of the same type and shape as `image`.

object random_flip_up_down_dyn(object image, object seed)

Randomly flips an image vertically (upside down).

With a 1 in 2 chance, outputs the contents of `image` flipped along the first dimension, which is `height`. Otherwise, outputs the image as-is.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object seed
A Python integer. Used to create a random seed. See `tf.compat.v1.set_random_seed` for behavior.
Returns
object
A tensor of the same type and shape as `image`.
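
Usage Example (a minimal sketch; the shape and the seed are illustrative):
```python
import tensorflow as tf
x = tf.random.normal(shape=(256, 256, 3))
# With probability 1/2, flips the image along its height dimension.
tf.image.random_flip_up_down(x, seed=1)
```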

object random_hue(ValueTuple<PythonClassContainer, PythonClassContainer> image, double max_delta, object seed)

Adjust the hue of RGB images by a random factor.

Equivalent to `adjust_hue()` but uses a `delta` randomly picked in the interval `[-max_delta, max_delta]`.

`max_delta` must be in the interval `[0, 0.5]`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> image
RGB image or images. Size of the last dimension must be 3.
double max_delta
float. Maximum value for the random delta.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.

object random_hue(IGraphNodeBase image, double max_delta, object seed)

Adjust the hue of RGB images by a random factor.

Equivalent to `adjust_hue()` but uses a `delta` randomly picked in the interval `[-max_delta, max_delta]`.

`max_delta` must be in the interval `[0, 0.5]`.
Parameters
IGraphNodeBase image
RGB image or images. Size of the last dimension must be 3.
double max_delta
float. Maximum value for the random delta.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.

object random_hue(IndexedSlices image, double max_delta, object seed)

Adjust the hue of RGB images by a random factor.

Equivalent to `adjust_hue()` but uses a `delta` randomly picked in the interval `[-max_delta, max_delta]`.

`max_delta` must be in the interval `[0, 0.5]`.
Parameters
IndexedSlices image
RGB image or images. Size of the last dimension must be 3.
double max_delta
float. Maximum value for the random delta.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.

object random_hue_dyn(object image, object max_delta, object seed)

Adjust the hue of RGB images by a random factor.

Equivalent to `adjust_hue()` but uses a `delta` randomly picked in the interval `[-max_delta, max_delta]`.

`max_delta` must be in the interval `[0, 0.5]`.
Parameters
object image
RGB image or images. Size of the last dimension must be 3.
object max_delta
float. Maximum value for the random delta.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.
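
Usage Example (a minimal sketch; `max_delta` must lie in `[0, 0.5]`, and the uniform input keeps RGB values in a valid range):
```python
import tensorflow as tf
x = tf.random.uniform(shape=(256, 256, 3))  # RGB values in [0, 1)
# Equivalent to adjust_hue() with a delta drawn from [-0.2, 0.2].
tf.image.random_hue(x, max_delta=0.2, seed=1)
```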

object random_jpeg_quality(IGraphNodeBase image, int min_jpeg_quality, int max_jpeg_quality, object seed)

Randomly changes the JPEG encoding quality to induce JPEG noise.

`min_jpeg_quality` must be in the interval `[0, 100]` and less than `max_jpeg_quality`. `max_jpeg_quality` must be in the interval `[0, 100]`.
Parameters
IGraphNodeBase image
RGB image or images. Size of the last dimension must be 3.
int min_jpeg_quality
Minimum jpeg encoding quality to use.
int max_jpeg_quality
Maximum jpeg encoding quality to use.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.

object random_jpeg_quality_dyn(object image, object min_jpeg_quality, object max_jpeg_quality, object seed)

Randomly changes the JPEG encoding quality to induce JPEG noise.

`min_jpeg_quality` must be in the interval `[0, 100]` and less than `max_jpeg_quality`. `max_jpeg_quality` must be in the interval `[0, 100]`.
Parameters
object image
RGB image or images. Size of the last dimension must be 3.
object min_jpeg_quality
Minimum jpeg encoding quality to use.
object max_jpeg_quality
Maximum jpeg encoding quality to use.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.
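
Usage Example (a minimal sketch; the quality bounds are illustrative):
```python
import tensorflow as tf
x = tf.random.uniform(shape=(256, 256, 3))  # RGB values in [0, 1)
# Re-encodes the image as JPEG with a quality drawn from [75, 95].
tf.image.random_jpeg_quality(x, min_jpeg_quality=75, max_jpeg_quality=95)
```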

object random_saturation(object image, object lower, object upper, object seed)

Adjust the saturation of RGB images by a random factor.

Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly picked in the interval `[lower, upper]`.
Parameters
object image
RGB image or images. Size of the last dimension must be 3.
object lower
float. Lower bound for the random saturation factor.
object upper
float. Upper bound for the random saturation factor.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.

object random_saturation_dyn(object image, object lower, object upper, object seed)

Adjust the saturation of RGB images by a random factor.

Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly picked in the interval `[lower, upper]`.
Parameters
object image
RGB image or images. Size of the last dimension must be 3.
object lower
float. Lower bound for the random saturation factor.
object upper
float. Upper bound for the random saturation factor.
object seed
An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns
object
Adjusted image(s), same shape and DType as `image`.
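
Usage Example (a minimal sketch; the bounds are illustrative):
```python
import tensorflow as tf
x = tf.random.uniform(shape=(256, 256, 3))  # RGB values in [0, 1)
# Equivalent to adjust_saturation() with a factor drawn from [0.5, 1.5].
tf.image.random_saturation(x, lower=0.5, upper=1.5, seed=1)
```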

object resize(ValueTuple<PythonClassContainer, PythonClassContainer> images, IEnumerable<object> size, ImplicitContainer<T> method, bool align_corners, bool preserve_aspect_ratio, string name)

Resize `images` to `size` using the specified `method`.

Resized images will be distorted if their original aspect ratio is not the same as `size`. To avoid distortions, see `tf.compat.v1.image.resize_image_with_pad`.

`method` can be one of:

* `ResizeMethod.BILINEAR`: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* `ResizeMethod.BICUBIC`: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
* `ResizeMethod.AREA`: Area interpolation.

The return value has the same type as `images` if `method` is `ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type as `images` if the size of `images` can be statically determined to be the same as `size`, because `images` is returned in this case. Otherwise, the return value has type `float32`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> size
A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
ImplicitContainer<T> method
ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
bool preserve_aspect_ratio
Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the `image`. Defaults to False.
string name
A name for this operation (optional).
Returns
object
If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.
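
Usage Example (a minimal sketch of the Python call; assumes a TensorFlow version where `tf.image.resize` accepts a `method` argument):
```python
import tensorflow as tf
x = tf.random.normal(shape=(256, 256, 3))
y = tf.image.resize(x, size=[128, 128], method=tf.image.ResizeMethod.BILINEAR)
# `y` is a 128x128x3 float32 tensor.
```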

object resize(ValueTuple<PythonClassContainer, PythonClassContainer> images, IEnumerable<object> size, string method, bool align_corners, bool preserve_aspect_ratio, string name)

Resize `images` to `size` using the specified `method`.

Resized images will be distorted if their original aspect ratio is not the same as `size`. To avoid distortions, see `tf.compat.v1.image.resize_image_with_pad`.

`method` can be one of:

* `ResizeMethod.BILINEAR`: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* `ResizeMethod.BICUBIC`: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
* `ResizeMethod.AREA`: Area interpolation.

The return value has the same type as `images` if `method` is `ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type as `images` if the size of `images` can be statically determined to be the same as `size`, because `images` is returned in this case. Otherwise, the return value has type `float32`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> size
A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
string method
ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
bool preserve_aspect_ratio
Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the `image`. Defaults to False.
string name
A name for this operation (optional).
Returns
object
If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize(ValueTuple<PythonClassContainer, PythonClassContainer> images, IGraphNodeBase size, ImplicitContainer<T> method, bool align_corners, bool preserve_aspect_ratio, string name)

Resize `images` to `size` using the specified `method`.

Resized images will be distorted if their original aspect ratio is not the same as `size`. To avoid distortions, see `tf.compat.v1.image.resize_image_with_pad`.

`method` can be one of:

* `ResizeMethod.BILINEAR`: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* `ResizeMethod.BICUBIC`: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
* `ResizeMethod.AREA`: Area interpolation.

The return value has the same type as `images` if `method` is `ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type as `images` if the size of `images` can be statically determined to be the same as `size`, because `images` is returned in this case. Otherwise, the return value has type `float32`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase size
A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
ImplicitContainer<T> method
ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
bool preserve_aspect_ratio
Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the `image`. Defaults to False.
string name
A name for this operation (optional).
Returns
object
If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize(IGraphNodeBase images, IEnumerable<object> size, string method, bool align_corners, bool preserve_aspect_ratio, string name)

Resize `images` to `size` using the specified `method`.

Resized images will be distorted if their original aspect ratio is not the same as `size`. To avoid distortions see `tf.compat.v1.image.resize_image_with_pad`.

`method` can be one of:

* `ResizeMethod.BILINEAR`: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* `ResizeMethod.BICUBIC`: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
* `ResizeMethod.AREA`: Area interpolation.

The return value has the same type as `images` if `method` is `ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type as `images` if the size of `images` can be statically determined to be the same as `size`, because `images` is returned in this case. Otherwise, the return value has type `float32`.
Parameters
IGraphNodeBase images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> size
A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
string method
ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
bool preserve_aspect_ratio
Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the image. Defaults to `False`.
string name
A name for this operation (optional).
Returns
object
If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize(IGraphNodeBase images, IGraphNodeBase size, string method, bool align_corners, bool preserve_aspect_ratio, string name)

Resize `images` to `size` using the specified `method`.

Resized images will be distorted if their original aspect ratio is not the same as `size`. To avoid distortions see `tf.compat.v1.image.resize_image_with_pad`.

`method` can be one of:

* `ResizeMethod.BILINEAR`: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* `ResizeMethod.BICUBIC`: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
* `ResizeMethod.AREA`: Area interpolation.

The return value has the same type as `images` if `method` is `ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type as `images` if the size of `images` can be statically determined to be the same as `size`, because `images` is returned in this case. Otherwise, the return value has type `float32`.
Parameters
IGraphNodeBase images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase size
A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
string method
ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
bool preserve_aspect_ratio
Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the image. Defaults to `False`.
string name
A name for this operation (optional).
Returns
object
If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize(ValueTuple<PythonClassContainer, PythonClassContainer> images, IGraphNodeBase size, string method, bool align_corners, bool preserve_aspect_ratio, string name)

Resize `images` to `size` using the specified `method`.

Resized images will be distorted if their original aspect ratio is not the same as `size`. To avoid distortions see `tf.compat.v1.image.resize_image_with_pad`.

`method` can be one of:

* `ResizeMethod.BILINEAR`: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* `ResizeMethod.BICUBIC`: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
* `ResizeMethod.AREA`: Area interpolation.

The return value has the same type as `images` if `method` is `ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type as `images` if the size of `images` can be statically determined to be the same as `size`, because `images` is returned in this case. Otherwise, the return value has type `float32`.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase size
A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
string method
ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
bool preserve_aspect_ratio
Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the image. Defaults to `False`.
string name
A name for this operation (optional).
Returns
object
If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_image_with_pad(IGraphNodeBase image, IGraphNodeBase target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
IGraphNodeBase target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.
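Usage Example (a sketch with the Python API; the input shape is illustrative):
```python
import tensorflow as tf

image = tf.random.normal(shape=(100, 80, 3))
# Resize to fit within 128x128 while keeping the 100:80 aspect ratio,
# then pad with zeros evenly to exactly 128x128.
padded = tf.compat.v1.image.resize_image_with_pad(
    image, target_height=128, target_width=128)
```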

Tensor resize_image_with_pad(IGraphNodeBase image, int target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
IGraphNodeBase target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_image_with_pad(IGraphNodeBase image, int target_height, int target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
int target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_image_with_pad(IEnumerable<int> image, IGraphNodeBase target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
IGraphNodeBase target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_image_with_pad(IEnumerable<int> image, IGraphNodeBase target_height, int target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
int target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_image_with_pad(IEnumerable<int> image, int target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
IGraphNodeBase target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_image_with_pad(IEnumerable<int> image, int target_height, int target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IEnumerable<int> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
int target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_image_with_pad(IGraphNodeBase image, IGraphNodeBase target_height, int target_width, ImplicitContainer<T> method, bool align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
int target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
bool align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
Tensor
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_image_with_pad_dyn(object image, object target_height, object target_width, ImplicitContainer<T> method, ImplicitContainer<T> align_corners)

Resizes and pads an image to a target width and height.

Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object target_height
Target height.
object target_width
Target width.
ImplicitContainer<T> method
Method to use for resizing the image. See `resize_images()`.
ImplicitContainer<T> align_corners
bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to `False`.
Returns
object
Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(object image, IEnumerable<object> target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.
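Usage Example (a sketch with the Python API; the input shape is illustrative):
```python
import tensorflow as tf

image = tf.random.normal(shape=(100, 80, 3))
# Height 100 -> 64: centrally cropped; width 80 -> 96: evenly zero-padded.
out = tf.image.resize_with_crop_or_pad(
    image, target_height=64, target_width=96)
```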

object resize_with_crop_or_pad(CompositeTensor image, IEnumerable<object> target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IGraphNodeBase image, IGraphNodeBase target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IGraphNodeBase image, PythonClassContainer target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
PythonClassContainer target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IGraphNodeBase image, object target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(PythonClassContainer image, IEnumerable<object> target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(PythonClassContainer image, CompositeTensor target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
CompositeTensor target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(PythonClassContainer image, int target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(PythonClassContainer image, IGraphNodeBase target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(PythonClassContainer image, PythonClassContainer target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
PythonClassContainer target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(PythonClassContainer image, object target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
PythonClassContainer image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(object image, CompositeTensor target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
CompositeTensor target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(object image, int target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(object image, IGraphNodeBase target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(object image, PythonClassContainer target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
PythonClassContainer target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(object image, object target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IGraphNodeBase image, IEnumerable<object> target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(CompositeTensor image, object target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(CompositeTensor image, PythonClassContainer target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
PythonClassContainer target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(CompositeTensor image, IGraphNodeBase target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(CompositeTensor image, int target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(CompositeTensor image, CompositeTensor target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
CompositeTensor image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
CompositeTensor target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IEnumerable<object> image, object target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IEnumerable<object> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IEnumerable<PythonClassContainer> image, PythonClassContainer target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
PythonClassContainer target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IEnumerable<PythonClassContainer> image, IGraphNodeBase target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IEnumerable<PythonClassContainer> image, int target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IEnumerable<PythonClassContainer> image, CompositeTensor target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
CompositeTensor target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IEnumerable<PythonClassContainer> image, IEnumerable<object> target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IEnumerable<PythonClassContainer> image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IEnumerable<object> target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IGraphNodeBase image, int target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad(IGraphNodeBase image, CompositeTensor target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
CompositeTensor target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

object resize_with_crop_or_pad_dyn(object image, object target_height, object target_width)

Crops and/or pads an image to a target width and height.

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension. If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object target_height
Target height.
object target_width
Target width.
Returns
object
Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`.

Tensor resize_with_pad(IGraphNodeBase image, IGraphNodeBase target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool antialias)

Tensor resize_with_pad(IGraphNodeBase image, int target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool antialias)

Tensor resize_with_pad(IGraphNodeBase image, int target_height, int target_width, ImplicitContainer<T> method, bool antialias)

Tensor resize_with_pad(IEnumerable<int> image, IGraphNodeBase target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool antialias)

Tensor resize_with_pad(IEnumerable<int> image, IGraphNodeBase target_height, int target_width, ImplicitContainer<T> method, bool antialias)

Tensor resize_with_pad(IEnumerable<int> image, int target_height, IGraphNodeBase target_width, ImplicitContainer<T> method, bool antialias)

Tensor resize_with_pad(IEnumerable<int> image, int target_height, int target_width, ImplicitContainer<T> method, bool antialias)

Tensor resize_with_pad(IGraphNodeBase image, IGraphNodeBase target_height, int target_width, ImplicitContainer<T> method, bool antialias)

object resize_with_pad_dyn(object image, object target_height, object target_width, ImplicitContainer<T> method, ImplicitContainer<T> antialias)
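These overloads behave like `resize_image_with_pad` above, but expose an `antialias` flag in place of `align_corners`. Usage Example (a sketch with the TF 2.x-style Python API; the shape and flag values are illustrative):
```python
import tensorflow as tf

image = tf.random.normal(shape=(100, 80, 3))
# Resize to fit within 64x64 keeping the aspect ratio, antialiasing the
# downsampling step, then pad with zeros to exactly 64x64.
padded = tf.image.resize_with_pad(
    image, target_height=64, target_width=64, antialias=True)
```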

object rgb_to_grayscale(IGraphNodeBase images, string name)

Converts one or more images from RGB to Grayscale.

Outputs a tensor of the same `DType` and rank as `images`. The size of the last dimension of the output is 1, containing the Grayscale value of the pixels.
Parameters
IGraphNodeBase images
The RGB tensor to convert. Last dimension must have size 3 and should contain RGB values.
string name
A name for the operation (optional).
Returns
object
The converted grayscale image(s).
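Usage Example (a sketch with the Python API):
```python
import tensorflow as tf

rgb = tf.random.uniform(shape=(64, 64, 3))  # RGB image
gray = tf.image.rgb_to_grayscale(rgb)       # shape (64, 64, 1)
```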

object rgb_to_grayscale_dyn(object images, object name)

Converts one or more images from RGB to Grayscale.

Outputs a tensor of the same `DType` and rank as `images`. The size of the last dimension of the output is 1, containing the Grayscale value of the pixels.
Parameters
object images
The RGB tensor to convert. Last dimension must have size 3 and should contain RGB values.
object name
A name for the operation (optional).
Returns
object
The converted grayscale image(s).

Tensor rgb_to_hsv(IGraphNodeBase images, string name)

Converts one or more images from RGB to HSV.

Outputs a tensor of the same shape as the `images` tensor, containing the HSV value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.

`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.
Parameters
IGraphNodeBase images
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 1-D or higher rank. RGB data to convert. Last dimension must be size 3.
string name
A name for the operation (optional).
Returns
Tensor
A `Tensor`. Has the same type as `images`.
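Usage Example (a sketch with the Python API; input values must already be in `[0,1]`):
```python
import tensorflow as tf

rgb = tf.random.uniform(shape=(64, 64, 3))  # values in [0, 1]
hsv = tf.image.rgb_to_hsv(rgb)
hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]  # each in [0, 1]
```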

object rgb_to_hsv_dyn(object images, object name)

Converts one or more images from RGB to HSV.

Outputs a tensor of the same shape as the `images` tensor, containing the HSV value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.

`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.
Parameters
object images
A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 1-D or higher rank. RGB data to convert. Last dimension must be size 3.
object name
A name for the operation (optional).
Returns
object
A `Tensor`. Has the same type as `images`.

Tensor rgb_to_yiq(IGraphNodeBase images)

Converts one or more images from RGB to YIQ.

Outputs a tensor of the same shape as the `images` tensor, containing the YIQ value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.
Parameters
IGraphNodeBase images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
Tensor
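Usage Example (a sketch with the Python API; input values must already be in `[0,1]`):
```python
import tensorflow as tf

rgb = tf.random.uniform(shape=(64, 64, 3))
yiq = tf.image.rgb_to_yiq(rgb)  # [..., 0] is luma (Y); [..., 1:] is chroma (I, Q)
```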

object rgb_to_yiq_dyn(object images)

Converts one or more images from RGB to YIQ.

Outputs a tensor of the same shape as the `images` tensor, containing the YIQ value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.
Parameters
object images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
object

Tensor rgb_to_yuv(IGraphNodeBase images)

Converts one or more images from RGB to YUV.

Outputs a tensor of the same shape as the `images` tensor, containing the YUV value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.
Parameters
IGraphNodeBase images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
Tensor
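Usage Example (a sketch with the Python API; the batch dimension is illustrative):
```python
import tensorflow as tf

batch = tf.random.uniform(shape=(8, 64, 64, 3))  # batch of RGB images in [0, 1]
yuv = tf.image.rgb_to_yuv(batch)  # same shape; [..., 0] luma, [..., 1:] chroma
```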

object rgb_to_yuv_dyn(object images)

Converts one or more images from RGB to YUV.

Outputs a tensor of the same shape as the `images` tensor, containing the YUV value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.
Parameters
object images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
object

object rot90(IGraphNodeBase image, IGraphNodeBase k, string name)

Rotate image(s) counter-clockwise by 90 degrees.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
IGraphNodeBase k
A scalar integer. The number of times the image is rotated by 90 degrees.
string name
A name for this operation (optional).
Returns
object
A rotated tensor of the same type and shape as `image`.
Show Example
a = tf.constant([[[1], [2]], [[3], [4]]])
# Rotate `a` counter-clockwise by 90 degrees.
a_rot = tf.image.rot90(a, k=1)
print(a_rot)  # [[[2], [4]], [[1], [3]]]

object rot90(IGraphNodeBase image, int k, string name)

Rotate image(s) counter-clockwise by 90 degrees.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
int k
A scalar integer. The number of times the image is rotated by 90 degrees.
string name
A name for this operation (optional).
Returns
object
A rotated tensor of the same type and shape as `image`.
Show Example
a = tf.constant([[[1], [2]], [[3], [4]]])
# Rotate `a` counter-clockwise by 90 degrees.
a_rot = tf.image.rot90(a, k=1)
print(a_rot)  # [[[2], [4]], [[1], [3]]]

object rot90_dyn(object image, ImplicitContainer<T> k, object name)

Rotate image(s) counter-clockwise by 90 degrees.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
ImplicitContainer<T> k
A scalar integer. The number of times the image is rotated by 90 degrees.
object name
A name for this operation (optional).
Returns
object
A rotated tensor of the same type and shape as `image`.
Show Example
a = tf.constant([[[1], [2]], [[3], [4]]])
# Rotate `a` counter-clockwise by 90 degrees.
a_rot = tf.image.rot90(a, k=1)
print(a_rot)  # [[[2], [4]], [[1], [3]]]

object sample_distorted_bounding_box(IGraphNodeBase image_size, IGraphNodeBase bounding_boxes, Nullable<int> seed, Nullable<int> seed2, double min_object_covered, Nullable<ValueTuple<double, object>> aspect_ratio_range, Nullable<ValueTuple<double, object>> area_range, object max_attempts, object use_image_if_no_bounding_boxes, string name)

Generate a single randomly distorted bounding box for an image. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: the `seed2` arg is deprecated. Use `sample_distorted_bounding_box_v2` instead.

Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. *data augmentation*. This Op outputs a randomly distorted localization of an object, i.e. a bounding box, given an `image_size`, `bounding_boxes`, and a series of constraints.

The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: `begin`, `size` and `bboxes`. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like.

Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

Note that if no bounding box information is available, setting `use_image_if_no_bounding_boxes = True` will assume there is a single implicit bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is false and no bounding boxes are supplied, an error is raised.
Parameters
IGraphNodeBase image_size
A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`.
IGraphNodeBase bounding_boxes
A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]` describing the N bounding boxes associated with the image.
Nullable<int> seed
An optional `int`. Defaults to `0`. If either `seed` or `seed2` is set to non-zero, the random number generator is seeded by the given `seed`. Otherwise, it is seeded by a random seed.
Nullable<int> seed2
An optional `int`. Defaults to `0`. A second seed to avoid seed collision.
double min_object_covered
A Tensor of type `float32`. Defaults to `0.1`. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
Nullable<ValueTuple<double, object>> aspect_ratio_range
An optional list of `floats`. Defaults to `[0.75, 1.33]`. The cropped area of the image must have an aspect ratio = width / height within this range.
Nullable<ValueTuple<double, object>> area_range
An optional list of `floats`. Defaults to `[0.05, 1]`. The cropped area of the image must contain a fraction of the supplied image within this range.
object max_attempts
An optional `int`. Defaults to `100`. Number of attempts at generating a cropped region of the image that satisfies the specified constraints. After `max_attempts` failures, return the entire image.
object use_image_if_no_bounding_boxes
An optional `bool`. Defaults to `False`. Controls behavior if no bounding boxes are supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
string name
A name for the operation (optional).
Returns
object
A tuple of `Tensor` objects (begin, size, bboxes).
Show Example
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    min_object_covered=0.1)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
                                              bbox_for_draw)
tf.compat.v1.summary.image('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)

object sample_distorted_bounding_box(IGraphNodeBase image_size, IGraphNodeBase bounding_boxes, Nullable<int> seed, Nullable<int> seed2, IGraphNodeBase min_object_covered, Nullable<ValueTuple<double, object>> aspect_ratio_range, Nullable<ValueTuple<double, object>> area_range, object max_attempts, object use_image_if_no_bounding_boxes, string name)

Generate a single randomly distorted bounding box for an image. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: the `seed2` argument is deprecated. Use `sample_distorted_bounding_box_v2` instead.

Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. *data augmentation*. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an `image_size`, `bounding_boxes` and a series of constraints.

The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: `begin`, `size` and `bboxes`. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like.

Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

Note that if no bounding box information is available, setting `use_image_if_no_bounding_boxes = True` will assume there is a single implicit bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is false and no bounding boxes are supplied, an error is raised.
Parameters
IGraphNodeBase image_size
A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`.
IGraphNodeBase bounding_boxes
A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]` describing the N bounding boxes associated with the image.
Nullable<int> seed
An optional `int`. Defaults to `0`. If either `seed` or `seed2` is set to non-zero, the random number generator is seeded by the given `seed`. Otherwise, it is seeded by a random seed.
Nullable<int> seed2
An optional `int`. Defaults to `0`. A second seed to avoid seed collision.
IGraphNodeBase min_object_covered
A Tensor of type `float32`. Defaults to `0.1`. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
Nullable<ValueTuple<double, object>> aspect_ratio_range
An optional list of `floats`. Defaults to `[0.75, 1.33]`. The cropped area of the image must have an aspect ratio = width / height within this range.
Nullable<ValueTuple<double, object>> area_range
An optional list of `floats`. Defaults to `[0.05, 1]`. The cropped area of the image must contain a fraction of the supplied image within this range.
object max_attempts
An optional `int`. Defaults to `100`. Number of attempts at generating a cropped region of the image that satisfies the specified constraints. After `max_attempts` failures, return the entire image.
object use_image_if_no_bounding_boxes
An optional `bool`. Defaults to `False`. Controls behavior if no bounding boxes are supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
string name
A name for the operation (optional).
Returns
object
A tuple of `Tensor` objects (begin, size, bboxes).
Show Example
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    min_object_covered=0.1)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
                                              bbox_for_draw)
tf.compat.v1.summary.image('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)

object sample_distorted_bounding_box_dyn(object image_size, object bounding_boxes, object seed, object seed2, ImplicitContainer<T> min_object_covered, object aspect_ratio_range, object area_range, object max_attempts, object use_image_if_no_bounding_boxes, object name)

Generate a single randomly distorted bounding box for an image. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: the `seed2` argument is deprecated. Use `sample_distorted_bounding_box_v2` instead.

Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. *data augmentation*. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an `image_size`, `bounding_boxes` and a series of constraints.

The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: `begin`, `size` and `bboxes`. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like.

Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

Note that if no bounding box information is available, setting `use_image_if_no_bounding_boxes = True` will assume there is a single implicit bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is false and no bounding boxes are supplied, an error is raised.
Parameters
object image_size
A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`.
object bounding_boxes
A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]` describing the N bounding boxes associated with the image.
object seed
An optional `int`. Defaults to `0`. If either `seed` or `seed2` is set to non-zero, the random number generator is seeded by the given `seed`. Otherwise, it is seeded by a random seed.
object seed2
An optional `int`. Defaults to `0`. A second seed to avoid seed collision.
ImplicitContainer<T> min_object_covered
A Tensor of type `float32`. Defaults to `0.1`. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
object aspect_ratio_range
An optional list of `floats`. Defaults to `[0.75, 1.33]`. The cropped area of the image must have an aspect ratio = width / height within this range.
object area_range
An optional list of `floats`. Defaults to `[0.05, 1]`. The cropped area of the image must contain a fraction of the supplied image within this range.
object max_attempts
An optional `int`. Defaults to `100`. Number of attempts at generating a cropped region of the image that satisfies the specified constraints. After `max_attempts` failures, return the entire image.
object use_image_if_no_bounding_boxes
An optional `bool`. Defaults to `False`. Controls behavior if no bounding boxes are supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
object name
A name for the operation (optional).
Returns
object
A tuple of `Tensor` objects (begin, size, bboxes).
Show Example
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    min_object_covered=0.1)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
                                              bbox_for_draw)
tf.compat.v1.summary.image('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)

Tensor sobel_edges(IGraphNodeBase image)

Returns a tensor holding Sobel edge maps.
Parameters
IGraphNodeBase image
Image tensor with shape [batch_size, h, w, d] and type float32 or float64. The image(s) must be 2x2 or larger.
Returns
Tensor
Tensor holding edge maps for each channel. Returns a tensor with shape [batch_size, h, w, d, 2] where the last two dimensions hold [[dy[0], dx[0]], [dy[1], dx[1]],..., [dy[d-1], dx[d-1]]] calculated using the Sobel filter.
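Usage Example (a minimal sketch; the 4-D float input shape described above is the only requirement, and `tf.random.uniform` stands in for real image data):
```python
import tensorflow as tf
# One 64x64 grayscale image; sobel_edges expects [batch_size, h, w, d] floats.
img = tf.random.uniform(shape=(1, 64, 64, 1), dtype=tf.float32)
edges = tf.image.sobel_edges(img)  # shape (1, 64, 64, 1, 2): [dy, dx] per channel
```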

object sobel_edges_dyn(object image)

Returns a tensor holding Sobel edge maps.
Parameters
object image
Image tensor with shape [batch_size, h, w, d] and type float32 or float64. The image(s) must be 2x2 or larger.
Returns
object
Tensor holding edge maps for each channel. Returns a tensor with shape [batch_size, h, w, d, 2] where the last two dimensions hold [[dy[0], dx[0]], [dy[1], dx[1]],..., [dy[d-1], dx[d-1]]] calculated using the Sobel filter.

Tensor ssim(IGraphNodeBase img1, IGraphNodeBase img2, int max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
IGraphNodeBase img1
First image batch.
IGraphNodeBase img2
Second image batch.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim(IGraphNodeBase img1, IGraphNodeBase img2, double max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
IGraphNodeBase img1
First image batch.
IGraphNodeBase img2
Second image batch.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim(IGraphNodeBase img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, double max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
IGraphNodeBase img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim(IGraphNodeBase img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, int max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
IGraphNodeBase img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim(ValueTuple<PythonClassContainer, PythonClassContainer> img1, IGraphNodeBase img2, double max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
IGraphNodeBase img2
Second image batch.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim(ValueTuple<PythonClassContainer, PythonClassContainer> img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, int max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim(ValueTuple<PythonClassContainer, PythonClassContainer> img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, double max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch.
double max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim(ValueTuple<PythonClassContainer, PythonClassContainer> img1, IGraphNodeBase img2, int max_val, int filter_size, double filter_sigma, double k1, double k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
IGraphNodeBase img2
Second image batch.
int max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

object ssim_dyn(object img1, object img2, object max_val, ImplicitContainer<T> filter_size, ImplicitContainer<T> filter_sigma, ImplicitContainer<T> k1, ImplicitContainer<T> k2)

Computes SSIM index between img1 and img2.

This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing.

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Details: an 11x11 Gaussian filter of width 1.5 is used, with k1 = 0.01 and k2 = 0.03 as in the original paper.

The image sizes must be at least 11x11 because of the filter size.
Parameters
object img1
First image batch.
object img2
Second image batch.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> filter_size
Default value 11 (size of the Gaussian filter).
ImplicitContainer<T> filter_sigma
Default value 1.5 (width of the Gaussian filter).
ImplicitContainer<T> k1
Default value 0.01.
ImplicitContainer<T> k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
object
A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
Show Example
# Read images from file.
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
# Compute SSIM over tf.uint8 Tensors.
ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)

# Compute SSIM over tf.float32 Tensors.
im1 = tf.image.convert_image_dtype(im1, tf.float32)
im2 = tf.image.convert_image_dtype(im2, tf.float32)
ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,
                      filter_sigma=1.5, k1=0.01, k2=0.03)
# ssim1 and ssim2 both have type tf.float32 and are almost equal.

Tensor ssim_multiscale(ValueTuple<PythonClassContainer, PythonClassContainer> img1, IndexedSlices img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
IndexedSlices img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
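Usage Example (a minimal sketch; 256x256 random inputs are used so that the smallest of the default five scales is still at least `filter_size` pixels wide, and `tf.random.uniform` stands in for real image batches):
```python
import tensorflow as tf
# Two batches of four RGB images with values in [0, 1]; last three dims are [h, w, c].
im1 = tf.random.uniform(shape=(4, 256, 256, 3))
im2 = tf.random.uniform(shape=(4, 256, 256, 3))
msssim = tf.image.ssim_multiscale(im1, im2, max_val=1.0)  # shape (4,), values in [0, 1]
```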

Tensor ssim_multiscale(ValueTuple<PythonClassContainer, PythonClassContainer> img1, IGraphNodeBase img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
IGraphNodeBase img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(ValueTuple<PythonClassContainer, PythonClassContainer> img1, object img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
object img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(IndexedSlices img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IndexedSlices img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(IndexedSlices img1, IndexedSlices img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IndexedSlices img1
First image batch.
IndexedSlices img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(IndexedSlices img1, IGraphNodeBase img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IndexedSlices img1
First image batch.
IGraphNodeBase img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(IndexedSlices img1, object img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IndexedSlices img1
First image batch.
object img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(IGraphNodeBase img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IGraphNodeBase img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(IGraphNodeBase img1, IndexedSlices img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IGraphNodeBase img1
First image batch.
IndexedSlices img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(IGraphNodeBase img1, object img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IGraphNodeBase img1
First image batch.
object img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(object img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
object img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(object img1, IndexedSlices img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
object img1
First image batch.
IndexedSlices img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(object img1, IGraphNodeBase img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
object img1
First image batch.
IGraphNodeBase img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is recommended to keep it in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).

Tensor ssim_multiscale(object img1, object img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If input is already YUV, then it will compute YUV SSIM average.)

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
object img1
First image batch.
object img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is best kept in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in the batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
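Usage Example (a minimal sketch against the underlying Python `tf.image.ssim_multiscale`; the shapes and the [0, 1] value range are illustrative): ```python
import tensorflow as tf
# Two illustrative batches of 256x256 RGB images with values in [0, 1];
# 256x256 leaves enough resolution for the default 5 scales at filter_size=11.
img1 = tf.random.uniform(shape=(2, 256, 256, 3))
img2 = tf.random.uniform(shape=(2, 256, 256, 3))
# max_val=1.0 matches the [0, 1] dynamic range; the result has shape (2,).
scores = tf.image.ssim_multiscale(img1, img2, max_val=1.0)
```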

Tensor ssim_multiscale(IGraphNodeBase img1, IGraphNodeBase img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale images. This function does not perform any colorspace transform; if the input is already YUV, it will compute the average SSIM over the YUV channels.

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
IGraphNodeBase img1
First image batch.
IGraphNodeBase img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is best kept in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in the batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
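Usage Example (a minimal sketch against the underlying Python `tf.image.ssim_multiscale`; the shapes and the [0, 1] value range are illustrative): ```python
import tensorflow as tf
# Two illustrative batches of 256x256 RGB images with values in [0, 1];
# 256x256 leaves enough resolution for the default 5 scales at filter_size=11.
img1 = tf.random.uniform(shape=(2, 256, 256, 3))
img2 = tf.random.uniform(shape=(2, 256, 256, 3))
# max_val=1.0 matches the [0, 1] dynamic range; the result has shape (2,).
scores = tf.image.ssim_multiscale(img1, img2, max_val=1.0)
```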

Tensor ssim_multiscale(ValueTuple<PythonClassContainer, PythonClassContainer> img1, ValueTuple<PythonClassContainer, PythonClassContainer> img2, object max_val, ImplicitContainer<T> power_factors, int filter_size, double filter_sigma, double k1, double k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale images. This function does not perform any colorspace transform; if the input is already YUV, it will compute the average SSIM over the YUV channels.

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> img1
First image batch.
ValueTuple<PythonClassContainer, PythonClassContainer> img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
int filter_size
Default value 11 (size of the Gaussian filter).
double filter_sigma
Default value 1.5 (width of the Gaussian filter).
double k1
Default value 0.01.
double k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is best kept in the range 0 < k2 < 0.4).
Returns
Tensor
A tensor containing an MS-SSIM value for each image in the batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
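Usage Example (a minimal sketch against the underlying Python `tf.image.ssim_multiscale`; the shapes and the [0, 1] value range are illustrative): ```python
import tensorflow as tf
# Two illustrative batches of 256x256 RGB images with values in [0, 1];
# 256x256 leaves enough resolution for the default 5 scales at filter_size=11.
img1 = tf.random.uniform(shape=(2, 256, 256, 3))
img2 = tf.random.uniform(shape=(2, 256, 256, 3))
# max_val=1.0 matches the [0, 1] dynamic range; the result has shape (2,).
scores = tf.image.ssim_multiscale(img1, img2, max_val=1.0)
```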

object ssim_multiscale_dyn(object img1, object img2, object max_val, ImplicitContainer<T> power_factors, ImplicitContainer<T> filter_size, ImplicitContainer<T> filter_sigma, ImplicitContainer<T> k1, ImplicitContainer<T> k2)

Computes the MS-SSIM between img1 and img2.

This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels].

Note: The true SSIM is only defined on grayscale images. This function does not perform any colorspace transform; if the input is already YUV, it will compute the average SSIM over the YUV channels.

Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004.
Parameters
object img1
First image batch.
object img2
Second image batch. Must have the same rank as img1.
object max_val
The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values).
ImplicitContainer<T> power_factors
Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
ImplicitContainer<T> filter_size
Default value 11 (size of the Gaussian filter).
ImplicitContainer<T> filter_sigma
Default value 1.5 (width of the Gaussian filter).
ImplicitContainer<T> k1
Default value 0.01.
ImplicitContainer<T> k2
Default value 0.03 (SSIM is less sensitive to k2 for lower values, so it is best kept in the range 0 < k2 < 0.4).
Returns
object
A tensor containing an MS-SSIM value for each image in the batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
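Usage Example (a minimal sketch against the underlying Python `tf.image.ssim_multiscale`; the shapes and the [0, 1] value range are illustrative): ```python
import tensorflow as tf
# Two illustrative batches of 256x256 RGB images with values in [0, 1];
# 256x256 leaves enough resolution for the default 5 scales at filter_size=11.
img1 = tf.random.uniform(shape=(2, 256, 256, 3))
img2 = tf.random.uniform(shape=(2, 256, 256, 3))
# max_val=1.0 matches the [0, 1] dynamic range; the result has shape (2,).
scores = tf.image.ssim_multiscale(img1, img2, max_val=1.0)
```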

object total_variation(IGraphNodeBase images, string name)

Calculate and return the total variation for one or more images.

The total variation is the sum of the absolute differences of neighboring pixel values in the input images. This measures how much noise is in the images.

This can be used as a loss function during optimization to suppress noise in images. If you have a batch of images, calculate the scalar loss value as the sum: `loss = tf.reduce_sum(tf.image.total_variation(images))`

This implements the anisotropic 2-D version of the formula described here:

https://en.wikipedia.org/wiki/Total_variation_denoising
Parameters
IGraphNodeBase images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
string name
A name for the operation (optional).
Returns
object
The total variation of `images`.

If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the total variation for each image in the batch. If `images` was 3-D, return a scalar float with the total variation for that image.
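Usage Example (a minimal sketch using the underlying Python `tf.image.total_variation`; shapes are illustrative): ```python
import tensorflow as tf
# Illustrative batch of four noisy RGB images.
images = tf.random.uniform(shape=(4, 128, 128, 3))
tv = tf.image.total_variation(images)  # 1-D tensor of shape (4,)
# Sum over the batch to get a scalar regularization loss.
loss = tf.reduce_sum(tv)
```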

object total_variation_dyn(object images, object name)

Calculate and return the total variation for one or more images.

The total variation is the sum of the absolute differences of neighboring pixel values in the input images. This measures how much noise is in the images.

This can be used as a loss function during optimization to suppress noise in images. If you have a batch of images, calculate the scalar loss value as the sum: `loss = tf.reduce_sum(tf.image.total_variation(images))`

This implements the anisotropic 2-D version of the formula described here:

https://en.wikipedia.org/wiki/Total_variation_denoising
Parameters
object images
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object name
A name for the operation (optional).
Returns
object
The total variation of `images`.

If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the total variation for each image in the batch. If `images` was 3-D, return a scalar float with the total variation for that image.
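Usage Example (a minimal sketch using the underlying Python `tf.image.total_variation`; shapes are illustrative): ```python
import tensorflow as tf
# Illustrative batch of four noisy RGB images.
images = tf.random.uniform(shape=(4, 128, 128, 3))
tv = tf.image.total_variation(images)  # 1-D tensor of shape (4,)
# Sum over the batch to get a scalar regularization loss.
loss = tf.reduce_sum(tv)
```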

Tensor transpose(IGraphNodeBase image, string name)

Transpose image(s) by swapping the height and width dimension.
Parameters
IGraphNodeBase image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
string name
A name for this operation (optional).
Returns
Tensor
If `image` was 4-D, a 4-D float Tensor of shape `[batch, width, height, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[width, height, channels]`.
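Usage Example (a minimal sketch using the underlying Python `tf.image.transpose`; shapes are illustrative): ```python
import tensorflow as tf
# Distinct height and width make the swap visible in the output shape.
x = tf.random.normal(shape=(2, 100, 150, 3))  # [batch, height, width, channels]
y = tf.image.transpose(x)                     # shape: (2, 150, 100, 3)
```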

object transpose_dyn(object image, object name)

Transpose image(s) by swapping the height and width dimension.
Parameters
object image
4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
object name
A name for this operation (optional).
Returns
object
If `image` was 4-D, a 4-D float Tensor of shape `[batch, width, height, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[width, height, channels]`.
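Usage Example (a minimal sketch using the underlying Python `tf.image.transpose`; shapes are illustrative): ```python
import tensorflow as tf
# Distinct height and width make the swap visible in the output shape.
x = tf.random.normal(shape=(2, 100, 150, 3))  # [batch, height, width, channels]
y = tf.image.transpose(x)                     # shape: (2, 150, 100, 3)
```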

Tensor yiq_to_rgb(IGraphNodeBase images)

Converts one or more images from YIQ to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the Y values in `images` are in [0, 1], the I values are in [-0.5957, 0.5957], and the Q values are in [-0.5226, 0.5226].
Parameters
IGraphNodeBase images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
Tensor
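Usage Example (a minimal sketch using the underlying Python `tf.image.yiq_to_rgb`; the round trip through `tf.image.rgb_to_yiq` and the shape are illustrative): ```python
import tensorflow as tf
# Illustrative RGB image round-tripped through YIQ.
rgb = tf.random.uniform(shape=(64, 64, 3))  # values in [0, 1]
yiq = tf.image.rgb_to_yiq(rgb)
rgb_again = tf.image.yiq_to_rgb(yiq)        # same shape as the input
```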

object yiq_to_rgb_dyn(object images)

Converts one or more images from YIQ to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the Y values in `images` are in [0, 1], the I values are in [-0.5957, 0.5957], and the Q values are in [-0.5226, 0.5226].
Parameters
object images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
object
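Usage Example (a minimal sketch using the underlying Python `tf.image.yiq_to_rgb`; the round trip through `tf.image.rgb_to_yiq` and the shape are illustrative): ```python
import tensorflow as tf
# Illustrative RGB image round-tripped through YIQ.
rgb = tf.random.uniform(shape=(64, 64, 3))  # values in [0, 1]
yiq = tf.image.rgb_to_yiq(rgb)
rgb_again = tf.image.yiq_to_rgb(yiq)        # same shape as the input
```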

Tensor yuv_to_rgb(IGraphNodeBase images)

Converts one or more images from YUV to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the Y values in `images` are in [0, 1] and the U and V values are in [-0.5, 0.5].
Parameters
IGraphNodeBase images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
Tensor
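Usage Example (a minimal sketch using the underlying Python `tf.image.yuv_to_rgb`; the round trip through `tf.image.rgb_to_yuv` and the shape are illustrative): ```python
import tensorflow as tf
# Illustrative RGB image round-tripped through YUV.
rgb = tf.random.uniform(shape=(64, 64, 3))  # values in [0, 1]
yuv = tf.image.rgb_to_yuv(rgb)
rgb_again = tf.image.yuv_to_rgb(yuv)        # same shape as the input
```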

object yuv_to_rgb_dyn(object images)

Converts one or more images from YUV to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the Y values in `images` are in [0, 1] and the U and V values are in [-0.5, 0.5].
Parameters
object images
2-D or higher rank. Image data to convert. Last dimension must be size 3.
Returns
object
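Usage Example (a minimal sketch using the underlying Python `tf.image.yuv_to_rgb`; the round trip through `tf.image.rgb_to_yuv` and the shape are illustrative): ```python
import tensorflow as tf
# Illustrative RGB image round-tripped through YUV.
rgb = tf.random.uniform(shape=(64, 64, 3))  # values in [0, 1]
yuv = tf.image.rgb_to_yuv(rgb)
rgb_again = tf.image.yuv_to_rgb(yuv)        # same shape as the input
```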

Public properties

PythonFunctionContainer adjust_brightness_fn get;

PythonFunctionContainer adjust_contrast_fn get;

PythonFunctionContainer adjust_gamma_fn get;

PythonFunctionContainer adjust_hue_fn get;

PythonFunctionContainer adjust_jpeg_quality_fn get;

PythonFunctionContainer adjust_saturation_fn get;

PythonFunctionContainer central_crop_fn get;

PythonFunctionContainer combined_non_max_suppression_fn get;

PythonFunctionContainer convert_image_dtype_fn get;

PythonFunctionContainer crop_and_resize_fn get;

PythonFunctionContainer crop_to_bounding_box_fn get;

PythonFunctionContainer draw_bounding_boxes_fn get;

PythonFunctionContainer encode_png_fn get;

PythonFunctionContainer extract_glimpse_fn get;

PythonFunctionContainer extract_patches_fn get;

PythonFunctionContainer flip_left_right_fn get;

PythonFunctionContainer flip_up_down_fn get;

PythonFunctionContainer grayscale_to_rgb_fn get;

PythonFunctionContainer hsv_to_rgb_fn get;

PythonFunctionContainer image_gradients_fn get;

PythonFunctionContainer non_max_suppression_fn get;

PythonFunctionContainer non_max_suppression_overlaps_fn get;

PythonFunctionContainer non_max_suppression_padded_fn get;

PythonFunctionContainer non_max_suppression_with_scores_fn get;

PythonFunctionContainer pad_to_bounding_box_fn get;

PythonFunctionContainer per_image_standardization_fn get;

PythonFunctionContainer random_brightness_fn get;

PythonFunctionContainer random_contrast_fn get;

PythonFunctionContainer random_flip_left_right_fn get;

PythonFunctionContainer random_flip_up_down_fn get;

PythonFunctionContainer random_hue_fn get;

PythonFunctionContainer random_jpeg_quality_fn get;

PythonFunctionContainer random_saturation_fn get;

PythonFunctionContainer resize_fn get;

PythonFunctionContainer resize_image_with_pad_fn get;

PythonFunctionContainer resize_with_crop_or_pad_fn get;

PythonFunctionContainer resize_with_pad_fn get;

PythonFunctionContainer rgb_to_grayscale_fn get;

PythonFunctionContainer rgb_to_hsv_fn get;

PythonFunctionContainer rgb_to_yiq_fn get;

PythonFunctionContainer rgb_to_yuv_fn get;

PythonFunctionContainer rot90_fn get;

PythonFunctionContainer sample_distorted_bounding_box_fn get;

PythonFunctionContainer sobel_edges_fn get;

PythonFunctionContainer ssim_multiscale_fn get;

PythonFunctionContainer total_variation_fn get;

PythonFunctionContainer transpose_fn get;

PythonFunctionContainer yiq_to_rgb_fn get;

PythonFunctionContainer yuv_to_rgb_fn get;