LostTech.TensorFlow : API Documentation

Type tf.metrics

Namespace tensorflow

Public static methods

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.
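The streaming behaviour described above can be sketched in plain Python. This is an illustrative model of the `total`/`count` semantics only, not the LostTech.TensorFlow API; the class and method names are invented for the example:

```python
class StreamingAccuracy:
    """Models the two local variables `total` and `count`."""

    def __init__(self):
        self.total = 0.0   # weighted sum of correct predictions
        self.count = 0.0   # sum of weights seen so far

    def update_op(self, labels, predictions, weights=None):
        # If `weights` is None, every element gets weight 1.
        if weights is None:
            weights = [1.0] * len(labels)
        # `is_correct`: 1.0 where prediction matches label, 0.0 otherwise.
        is_correct = [1.0 if p == l else 0.0
                      for p, l in zip(predictions, labels)]
        # Increment `total` by the reduced sum of weights * is_correct,
        # and `count` by the reduced sum of weights.
        self.total += sum(w * c for w, c in zip(weights, is_correct))
        self.count += sum(weights)
        return self.accuracy()

    def accuracy(self):
        # Idempotent read: simply divides `total` by `count`.
        return self.total / self.count if self.count else 0.0

m = StreamingAccuracy()
m.update_op([1, 0, 1, 1], [1, 1, 1, 0])   # 2 of 4 correct -> 0.5
m.update_op([0, 0], [0, 0])               # 2 more correct
print(m.accuracy())                       # 4 correct out of 6
```

Calling `update_op` repeatedly accumulates statistics over a stream of batches, while `accuracy` can be read at any point without changing state; this mirrors the (value, update op) pair the real function returns.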

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.
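The masking rule ("use weights of 0 to mask values") follows directly from the update formula: a zero weight removes an element from both `total` and `count`. A minimal plain-Python sketch of a single weighted update (illustrative only, not the library API):

```python
def weighted_accuracy(labels, predictions, weights=None):
    # If `weights` is None, weights default to 1 for every element.
    if weights is None:
        weights = [1.0] * len(labels)
    is_correct = [1.0 if p == l else 0.0
                  for p, l in zip(predictions, labels)]
    total = sum(w * c for w, c in zip(weights, is_correct))  # weighted correct
    count = sum(weights)                                     # total weight
    return total / count

labels      = [1, 0, 1, 1]
predictions = [1, 1, 1, 0]
# Unweighted: 2 of 4 correct.
print(weighted_accuracy(labels, predictions))                # 0.5
# Zero weight masks the first element: 1 of 3 remaining correct.
print(weighted_accuracy(labels, predictions, [0, 1, 1, 1]))  # 1/3
```

Because a masked element contributes nothing to either sum, it neither helps nor hurts the metric, which is exactly what the weights parameter is for.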

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IGraphNodeBase labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
A tuple of (`accuracy`, `update_op`): `accuracy` is the idempotent `total / count` value, and `update_op` updates the underlying variables and returns the new accuracy.

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(IndexedSlices labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IndexedSlices predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IndexedSlices predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(IDictionary<object, object> labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IDictionary<object, object> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

object accuracy_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Calculates how often `predictions` matches `labels`.

The `accuracy` function creates two local variables, `total` and `count`, that are used to compute the frequency with which `predictions` matches `labels`. This frequency is ultimately returned as `accuracy`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `accuracy`. Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 where the corresponding elements of `predictions` and `labels` match and 0.0 otherwise. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `is_correct`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose shape matches `predictions`.
object predictions
The predicted values, a `Tensor` of any shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `accuracy` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, object> auc(IGraphNodeBase labels, IndexedSlices predictions, PythonClassContainer weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IndexedSlices predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
PythonClassContainer weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>
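The Riemann-sum approximation described above can be sketched in plain Python. This is an illustrative sketch only, not this library's API: the function name and helpers are hypothetical, and it computes the 'ROC' variant with the trapezoidal rule over linearly spaced thresholds, including the epsilon endpoints mentioned for `thresholds`.

```python
# Illustrative sketch of AUC via a Riemann sum over linearly spaced
# thresholds (trapezoidal rule over the ROC curve). Not the library API.

def approx_roc_auc(labels, predictions, num_thresholds=200):
    """Approximate ROC AUC from boolean labels and scores in [0, 1]."""
    eps = 1e-7
    # Linearly spaced thresholds plus endpoints just outside [0, 1],
    # so predictions of exactly 0 or 1 are handled correctly.
    step = 1.0 / (num_thresholds - 1)
    thresholds = [-eps] + [i * step for i in range(num_thresholds)] + [1.0 + eps]

    pos = sum(1 for y in labels if y)
    neg = len(labels) - pos
    points = []  # (false_positive_rate, true_positive_rate) per threshold
    for t in thresholds:
        tp = sum(1 for y, p in zip(labels, predictions) if y and p > t)
        fp = sum(1 for y, p in zip(labels, predictions) if not y and p > t)
        tpr = tp / pos if pos else 0.0
        fpr = fp / neg if neg else 0.0
        points.append((fpr, tpr))

    # Trapezoidal rule; sort so the curve runs from (0, 0) toward (1, 1).
    points.sort()
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2.0
    return auc
```

A larger `num_thresholds` discretizes the curve more finely and brings the estimate closer to the true AUC, matching the behaviour described for the `num_thresholds` parameter.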

ValueTuple<object, object> auc(IGraphNodeBase labels, IndexedSlices predictions, IndexedSlices weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IndexedSlices predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IndexedSlices weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>
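The streaming behaviour described above, where `update_op` folds each batch into local variables and the metric value is an idempotent read over them, can be mimicked in plain Python. The class and method names below are hypothetical, not part of this library; the sketch tracks confusion counts at a single fixed threshold rather than the full threshold grid.

```python
# Hypothetical sketch of the streaming-metric pattern: an update step
# accumulates local state per batch, and the metric value is an
# idempotent read over that state (the total/count idea, applied here
# to confusion counts at one threshold).

class StreamingConfusion:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.tp = self.fp = self.tn = self.fn = 0

    def update(self, labels, predictions):
        """Analogue of `update_op`: fold one batch into local state."""
        for y, p in zip(labels, predictions):
            pred_pos = p > self.threshold
            if y and pred_pos:
                self.tp += 1
            elif y:
                self.fn += 1
            elif pred_pos:
                self.fp += 1
            else:
                self.tn += 1

    def accuracy(self):
        """Idempotent read over accumulated counts; calling it twice
        in a row returns the same value."""
        total = self.tp + self.fp + self.tn + self.fn
        return (self.tp + self.tn) / total if total else 0.0
```

Reading the metric never mutates state, which is what "idempotent operation" means in the docstrings above: only the update step advances the accumulators.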

ValueTuple<object, object> auc(IGraphNodeBase labels, IEnumerable<PythonClassContainer> predictions, IEnumerable<PythonClassContainer> weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IEnumerable<PythonClassContainer> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<PythonClassContainer> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>
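The weighting rule stated above ("If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.") can be illustrated with a tiny Python sketch. The function name is hypothetical and this is not the library's API; it applies the rule to plain accuracy, where `total` accumulates weight times correctness and `count` accumulates weight.

```python
# Sketch of the weighting rule: `total` accumulates weight * is_correct,
# `count` accumulates weight, so a weight of 0 masks an example entirely.
# Illustrative only; not this library's API.

def weighted_accuracy(labels, predictions, weights=None):
    if weights is None:               # weights default to 1
        weights = [1.0] * len(labels)
    total = sum(w for y, p, w in zip(labels, predictions, weights) if y == p)
    count = sum(weights)
    return total / count if count else 0.0
```

An example masked by a zero weight contributes to neither `total` nor `count`, so it has no effect on the metric at all.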

ValueTuple<object, object> auc(IGraphNodeBase labels, IEnumerable<PythonClassContainer> predictions, IndexedSlices weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IEnumerable<PythonClassContainer> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IndexedSlices weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IndexedSlices predictions, string weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IndexedSlices predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
string weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IEnumerable<PythonClassContainer> predictions, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IEnumerable<PythonClassContainer> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IEnumerable<PythonClassContainer> predictions, PythonClassContainer weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IEnumerable<PythonClassContainer> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
PythonClassContainer weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IndexedSlices predictions, IEnumerable<PythonClassContainer> weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IndexedSlices predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<PythonClassContainer> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IEnumerable<PythonClassContainer> predictions, string weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IEnumerable<PythonClassContainer> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
string weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC, interpolates (true/false) positives but not the ratio that is precision; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon), as it applies the same method for ROC and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>
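The Riemann-sum approximation described above can be sketched in plain Python. This is an illustrative sketch of the algorithm only, not the library's implementation; the function name and the exact threshold spacing are assumptions.

```python
def approximate_auc(labels, predictions, num_thresholds=200):
    """Approximate ROC AUC by discretizing predictions at linearly spaced
    thresholds and applying the trapezoidal rule."""
    # Small epsilon guards on the endpoint thresholds handle predictions
    # equal to exactly 0 or 1, as the parameter docs describe.
    eps = 1e-7
    thresholds = [(i + 1) / (num_thresholds - 1) for i in range(num_thresholds - 2)]
    thresholds = [-eps] + thresholds + [1.0 + eps]

    # At each threshold, count true/false positives/negatives to get a
    # (false positive rate, true positive rate) pair on the ROC curve.
    fprs, tprs = [], []
    for t in thresholds:
        tp = sum(1 for l, p in zip(labels, predictions) if l and p > t)
        fp = sum(1 for l, p in zip(labels, predictions) if not l and p > t)
        fn = sum(1 for l, p in zip(labels, predictions) if l and p <= t)
        tn = sum(1 for l, p in zip(labels, predictions) if not l and p <= t)
        tprs.append(tp / (tp + fn) if tp + fn else 0.0)
        fprs.append(fp / (fp + tn) if fp + tn else 0.0)

    # Trapezoidal rule over the (FPR, TPR) pairs; thresholds increase, so
    # FPR decreases and the signed area comes out non-negative.
    area = 0.0
    for i in range(len(thresholds) - 1):
        area += (fprs[i] - fprs[i + 1]) * (tprs[i] + tprs[i + 1]) / 2.0
    return area

# Perfectly separated predictions give an AUC of about 1.0.
print(approximate_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))
```

The quality of the result depends on `num_thresholds` in exactly the way the description warns: predictions bunched between two adjacent thresholds are indistinguishable to the discretized curve.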

ValueTuple<object, object> auc(IGraphNodeBase labels, IndexedSlices predictions, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IndexedSlices predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>
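The streaming pattern the description mentions (local counter variables plus an update operation that folds in one batch at a time and returns the current metric value) can be sketched as follows. The class and method names here are illustrative, not part of the library API.

```python
class StreamingMetric:
    """Minimal streaming accumulator: `update` folds in a batch (like TF's
    `update_op`), while `value` is idempotent and leaves the state alone."""

    def __init__(self):
        self.total = 0.0   # weighted sum of correct predictions
        self.count = 0.0   # sum of weights seen so far

    def update(self, is_correct, weights=None):
        # If `weights` is None, weights default to 1; a weight of 0
        # masks a value, exactly as the parameter docs describe.
        if weights is None:
            weights = [1.0] * len(is_correct)
        for c, w in zip(is_correct, weights):
            self.total += c * w
            self.count += w
        # Like the returned update_op, report the metric after updating.
        return self.value()

    def value(self):
        # Idempotent: dividing total by count never mutates state.
        return self.total / self.count if self.count else 0.0
```

Accumulating counts this way means evaluating `value` twice (or over two batches versus one combined batch) yields the same result, which is what makes the returned metric tensor safe to read repeatedly.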

ValueTuple<object, object> auc(IGraphNodeBase labels, IEnumerable<PythonClassContainer> predictions, object weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IEnumerable<PythonClassContainer> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IGraphNodeBase predictions, IndexedSlices weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IndexedSlices weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IndexedSlices predictions, object weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IndexedSlices predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, PythonClassContainer predictions, string weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
string weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<PythonClassContainer> weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<PythonClassContainer> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, PythonClassContainer predictions, PythonClassContainer weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
PythonClassContainer weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, PythonClassContainer predictions, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing lower or upper bound estimates of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed: 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] applies the trapezoidal rule; 'careful_interpolation' is a variant that differs only in using a more correct interpolation scheme for PR-AUC, interpolating (true/false) positive counts but not the precision ratio; 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (which is to be deprecated soon), as it applies the same method for the ROC curve and a better one for the PR curve (see Davis & Goadrich 2006 for details).
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>
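The discretization described above can be sketched in plain Python. This is an illustrative helper for the ROC case with 'trapezoidal' summation, not part of this API; the function name and signature are invented for the example:

```python
def approximate_roc_auc(labels, predictions, num_thresholds=200):
    """Riemann-sum ROC AUC. labels: 0/1 values; predictions: floats in [0, 1]."""
    eps = 1e-7
    step = 1.0 / (num_thresholds - 1)
    # Linearly spaced interior thresholds, plus endpoints just outside [0, 1]
    # so predictions equal to exactly 0 or 1 fall cleanly on one side.
    thresholds = [-eps] + [step * i for i in range(1, num_thresholds - 1)] + [1.0 + eps]
    points = []  # one (false_positive_rate, true_positive_rate) pair per threshold
    for t in thresholds:
        tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p > t)
        fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p > t)
        fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p <= t)
        tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p <= t)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        points.append((fpr, tpr))
    # Thresholds ascend, so the (FPR, TPR) points descend along the curve;
    # the 'trapezoidal' summation method integrates TPR over FPR.
    return sum((x0 - x1) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(points, points[1:]))
```

A perfectly separating classifier yields an AUC of 1.0 under this scheme, and a perfectly inverted one yields 0.0; the real implementation maintains the four counters as local variables updated by `update_op` rather than recomputing them per call.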

ValueTuple<object, object> auc(IGraphNodeBase labels, PythonClassContainer predictions, IndexedSlices weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IndexedSlices weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, PythonClassContainer predictions, object weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IGraphNodeBase predictions, string weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
string weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, PythonClassContainer predictions, IEnumerable<PythonClassContainer> weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<PythonClassContainer> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IGraphNodeBase predictions, object weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

ValueTuple<object, object> auc(IGraphNodeBase labels, IGraphNodeBase predictions, PythonClassContainer weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string curve, string name, string summation_method, IEnumerable<double> thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
PythonClassContainer weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use when discretizing the ROC curve.
IEnumerable<string> metrics_collections
An optional list of collections that `auc` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
string name
An optional variable_scope name.
string summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
IEnumerable<double> thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
ValueTuple<object, object>

object auc_dyn(object labels, object predictions, object weights, ImplicitContainer<T> num_thresholds, object metrics_collections, object updates_collections, ImplicitContainer<T> curve, object name, ImplicitContainer<T> summation_method, object thresholds)

Computes the approximate AUC via a Riemann sum.

The `auc` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall.

This value is ultimately returned as `auc`, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The `num_thresholds` variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on `num_thresholds`.

For best results, `predictions` should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting `summation_method` to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The `thresholds` parameter can be used to manually specify thresholds that split the predictions more evenly.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `auc`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
object predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
ImplicitContainer<T> num_thresholds
The number of thresholds to use when discretizing the ROC curve.
object metrics_collections
An optional list of collections that `auc` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
ImplicitContainer<T> curve
Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall curve.
object name
An optional variable_scope name.
ImplicitContainer<T> summation_method
Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve.
object thresholds
An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the `num_thresholds` parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1.
Returns
object
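The thresholded Riemann-sum approximation described above can be sketched in plain Python. This is an illustration of the semantics (discretize [0, 1] into `num_thresholds` thresholds, trace the ROC curve, sum the area), not the LostTech.TensorFlow implementation; the function and variable names here are hypothetical:

```python
def confusion_at(labels, predictions, t):
    """Confusion counts when classifying as positive iff prediction > t."""
    tp = sum(1 for y, p in zip(labels, predictions) if p > t and y)
    fp = sum(1 for y, p in zip(labels, predictions) if p > t and not y)
    fn = sum(1 for y, p in zip(labels, predictions) if p <= t and y)
    tn = sum(1 for y, p in zip(labels, predictions) if p <= t and not y)
    return tp, fp, fn, tn

def auc_approx(labels, predictions, num_thresholds=200,
               summation_method="trapezoidal"):
    """Approximate ROC-AUC by discretizing the threshold range."""
    eps = 1e-7
    # Interior thresholds are evenly spaced; the endpoint thresholds
    # {-eps, 1+eps} correctly handle predictions equal to exactly 0 or 1.
    inner = [(i + 1) / (num_thresholds - 1) for i in range(num_thresholds - 2)]
    thresholds = [-eps] + inner + [1.0 + eps]
    xs, ys = [], []  # (fpr, tpr) points, from (1, 1) down to (0, 0)
    for t in thresholds:
        tp, fp, fn, tn = confusion_at(labels, predictions, t)
        ys.append(tp / (tp + fn) if tp + fn else 0.0)
        xs.append(fp / (fp + tn) if fp + tn else 0.0)
    area = 0.0
    for i in range(len(thresholds) - 1):
        dx = xs[i] - xs[i + 1]  # fpr decreases as the threshold rises
        lo, hi = sorted((ys[i], ys[i + 1]))
        if summation_method == "trapezoidal":
            area += dx * (lo + hi) / 2.0
        elif summation_method == "minoring":   # lower bound on the true AUC
            area += dx * lo
        elif summation_method == "majoring":   # upper bound on the true AUC
            area += dx * hi
    return area
```

The minoring and majoring variants bracket the trapezoidal estimate, which is how they quantify the discretization error.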

ValueTuple<object, Tensor> average_precision_at_k(ndarray labels, IEnumerable<object> predictions, int k, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IEnumerable<object> predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>
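As a rough illustration of the per-example computation behind this metric, the following plain-Python sketch uses a common formulation of average precision at k (rank the classes by score, take precision@i at each relevant hit, normalize). The helper name and the `min(k, num_relevant)` denominator convention are assumptions for illustration, not the library's internals:

```python
def average_precision_at_k(label_ids, scores, k):
    """AP@k for one example.

    label_ids: set of relevant class ids for this example.
    scores: list where scores[c] is the logit/score for class c.
    """
    # top_k: class ids ranked by descending score, truncated to k.
    ranked = sorted(range(len(scores)), key=lambda c: -scores[c])[:k]
    hits, precision_sum = 0, 0.0
    for i, cls in enumerate(ranked, start=1):
        if cls in label_ids:              # rel(i) == 1: a true positive at rank i
            hits += 1
            precision_sum += hits / i     # precision@i, accumulated at each hit
    denom = min(k, len(label_ids))
    return precision_sum / denom if denom else 0.0
```

The streaming metric then averages these per-example values (weighted by `weights`) across all examples seen so far.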

ValueTuple<object, Tensor> average_precision_at_k(object labels, IGraphNodeBase predictions, int k, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> average_precision_at_k(object labels, IGraphNodeBase predictions, int k, ValueTuple<double, object> weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
ValueTuple<double, object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> average_precision_at_k(object labels, IEnumerable<object> predictions, int k, ValueTuple<double, object> weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IEnumerable<object> predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
ValueTuple<double, object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> average_precision_at_k(ndarray labels, IGraphNodeBase predictions, int k, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> average_precision_at_k(ndarray labels, IGraphNodeBase predictions, int k, ValueTuple<double, object> weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
ValueTuple<double, object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> average_precision_at_k(ndarray labels, IEnumerable<object> predictions, int k, ValueTuple<double, object> weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IEnumerable<object> predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
ValueTuple<double, object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> average_precision_at_k(object labels, IEnumerable<object> predictions, int k, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IEnumerable<object> predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

object average_precision_at_k_dyn(object labels, object predictions, object k, object weights, object metrics_collections, object updates_collections, object name)

Computes average precision@k of predictions with respect to sparse labels.

`average_precision_at_k` creates two local variables, `average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that are used to compute the frequency. This frequency is ultimately returned as `average_precision_at_<k>`: an idempotent operation that simply divides `average_precision_at_<k>/total` by `average_precision_at_<k>/max`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
object predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
object k
Integer, k for @k metric. This will calculate an average precision for range `[1,k]`, as documented above.
object weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
object name
Name of new update operation, and namespace for other dependent ops.
Returns
object

ValueTuple<object, Tensor> false_negatives(IGraphNodeBase labels, object predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
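The streaming behaviour shared by these overloads (a local count that the update op increments, with zero weights masking entries) can be sketched as follows. This is a plain-Python stand-in with hypothetical names, not the library's implementation:

```python
class FalseNegatives:
    """Streaming count of false negatives (label true, prediction false)."""

    def __init__(self):
        self.count = 0.0  # local variable accumulated across batches

    def update(self, labels, predictions, weights=None):
        """Add this batch's weighted false negatives; mirrors `update_op`."""
        n = len(labels)
        w = weights if weights is not None else [1.0] * n  # default weight 1
        for y, p, wi in zip(labels, predictions, w):
            if bool(y) and not bool(p):  # a weight of 0 masks the entry
                self.count += wi
        return self.count  # running total after the update

    def result(self):
        return self.count  # idempotent read, like the metric value tensor
```

Usage: call `update` once per batch and read `result` at any point; masked entries (weight 0) contribute nothing to the total.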

ValueTuple<object, Tensor> false_negatives(object labels, PythonClassContainer predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(object labels, PythonClassContainer predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(object labels, object predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(object labels, object predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IGraphNodeBase labels, PythonClassContainer predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(double labels, PythonClassContainer predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IGraphNodeBase labels, object predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(double labels, object predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IEnumerable<object> labels, PythonClassContainer predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ndarray labels, object predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IEnumerable<object> labels, object predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IEnumerable<object> labels, object predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(double labels, PythonClassContainer predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ndarray labels, object predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ndarray labels, PythonClassContainer predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ndarray labels, PythonClassContainer predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, PythonClassContainer predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, PythonClassContainer predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IGraphNodeBase labels, PythonClassContainer predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IndexedSlices labels, object predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IndexedSlices labels, object predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(double labels, object predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IndexedSlices labels, PythonClassContainer predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IndexedSlices labels, PythonClassContainer predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(IEnumerable<object> labels, PythonClassContainer predictions, ValueTuple<object, object, object> weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
ValueTuple<object, object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object predictions, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_negatives_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, Nullable<ValueTuple<object, object, object>> weights, object metrics_collections, object updates_collections, string name)

Computes false negatives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
Nullable<ValueTuple<object, object, object>> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `false_negatives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
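
The per-threshold variant can be sketched the same way. This is an illustrative NumPy function, not the binding's API; it assumes a prediction counts as positive when its score is strictly greater than the threshold, which matches TensorFlow's confusion-matrix-at-thresholds behavior.

```python
import numpy as np

def false_negatives_at_thresholds(labels, predictions, thresholds, weights=None):
    """Single-batch sketch: `predictions` holds scores in [0, 1]; for each
    threshold t, an element is a false negative when its label is true
    and its score is not above t. Returns one weighted count per threshold."""
    labels = np.asarray(labels).astype(bool)
    scores = np.asarray(predictions, dtype=float)
    if weights is None:
        weights = np.ones(labels.shape, dtype=float)  # default weight 1
    weights = np.broadcast_to(np.asarray(weights, dtype=float), labels.shape)
    return [float(np.sum(weights * (labels & ~(scores > t)))) for t in thresholds]

print(false_negatives_at_thresholds([1, 1, 0], [0.9, 0.4, 0.8], [0.5]))  # -> [1.0]
```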

object false_negatives_at_thresholds_dyn(object labels, object predictions, object thresholds, object weights, object metrics_collections, object updates_collections, object name)

Computes false negatives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
object predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `false_negatives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

object false_negatives_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Computes the total number of false negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> false_positives(ndarray labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
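
The false-positives count mirrors the false-negatives sketch with the roles reversed: an element contributes its weight when the label is false but the prediction is true. Again, this is a hypothetical single-batch NumPy illustration of the semantics, not the streaming op itself.

```python
import numpy as np

def false_positives(labels, predictions, weights=None):
    """Single-batch sketch: both inputs are cast to bool, and the weights
    of elements with label False and prediction True are summed."""
    labels = np.asarray(labels).astype(bool)
    predictions = np.asarray(predictions).astype(bool)
    if weights is None:
        weights = np.ones(labels.shape, dtype=float)  # default weight 1
    weights = np.broadcast_to(np.asarray(weights, dtype=float), labels.shape)
    return float(np.sum(weights * (~labels & predictions)))

print(false_positives([0, 0, 1, 0], [1, 0, 1, 1]))  # -> 2.0
```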

ValueTuple<object, Tensor> false_positives(ndarray labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(double labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(IndexedSlices labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(IndexedSlices labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(IGraphNodeBase labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(IEnumerable<object> labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(IEnumerable<object> labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(IGraphNodeBase labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(object labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(object labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives(double labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> false_positives_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, Nullable<ValueTuple<object, object, object>> weights, object metrics_collections, object updates_collections, string name)

Computes false positives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
Nullable<ValueTuple<object, object, object>> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `false_positives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

object false_positives_at_thresholds_dyn(object labels, object predictions, object thresholds, object weights, object metrics_collections, object updates_collections, object name)

Computes false positives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
object predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `false_positives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
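The per-threshold counting described above can be sketched in plain Python. This is illustrative only: the function name and signature below are hypothetical, not part of this API, and the sketch assumes a prediction counts as positive when it is strictly greater than the threshold.

```python
def false_positives_at_thresholds(labels, predictions, thresholds, weights=None):
    """For each threshold t, sum the weights of elements where the
    prediction exceeds t but the label is False."""
    if weights is None:
        weights = [1.0] * len(labels)  # weights default to 1
    return [
        sum(w for y, p, w in zip(labels, predictions, weights)
            if p > t and not bool(y))
        for t in thresholds
    ]

# One false positive survives threshold 0.5; none survive 0.95.
print(false_positives_at_thresholds([0, 0, 1], [0.2, 0.9, 0.8],
                                    [0.1, 0.5, 0.95]))  # [2.0, 1.0, 0.0]
```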

object false_positives_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Sum the weights of false positives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
object name
An optional variable_scope name.
Returns
object
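The accumulation performed by `false_positives` can be sketched in plain Python: sum the weights of elements where the prediction is `True` but the label is `False`. The function name and signature below are hypothetical, for illustration only; they are not part of this API.

```python
def false_positives_total(labels, predictions, weights=None):
    """Sum the weights of false positives (prediction True, label False).

    Both labels and predictions are treated as booleans; if weights is
    None, every element gets weight 1, and a weight of 0 masks a value.
    """
    if weights is None:
        weights = [1.0] * len(labels)
    return sum(
        w for y, p, w in zip(labels, predictions, weights)
        if bool(p) and not bool(y)
    )

print(false_positives_total([0, 0, 1, 1], [1, 0, 1, 0]))  # 1.0
# Weighted: the two false positives carry weights 0.5 and 2.0.
print(false_positives_total([0, 0, 1, 1], [1, 1, 1, 0],
                            [0.5, 2.0, 1.0, 1.0]))        # 2.5
```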

ValueTuple<object, Tensor> mean(IGraphNodeBase values, PythonClassContainer weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes the (weighted) mean of the given values.

The `mean` function creates two local variables, `total` and `count` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
PythonClassContainer weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean(IGraphNodeBase values, PythonClassContainer weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the (weighted) mean of the given values.

The `mean` function creates two local variables, `total` and `count` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
PythonClassContainer weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean(IGraphNodeBase values, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the (weighted) mean of the given values.

The `mean` function creates two local variables, `total` and `count` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean(IGraphNodeBase values, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes the (weighted) mean of the given values.

The `mean` function creates two local variables, `total` and `count` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
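The `total`/`count` mechanism described above can be sketched in plain Python. This is an illustrative sketch, not part of this API: `update` plays the role of `update_op` (accumulating the weighted sum and weight total), and `result` is the idempotent division of `total` by `count`.

```python
class StreamingMean:
    """Streaming weighted mean built from two accumulators, total and count."""

    def __init__(self):
        self.total = 0.0
        self.count = 0.0

    def update(self, values, weights=None):
        """Fold a batch into the accumulators and return the running mean."""
        if weights is None:
            weights = [1.0] * len(values)  # weights default to 1
        self.total += sum(v * w for v, w in zip(values, weights))
        self.count += sum(weights)
        return self.result()

    def result(self):
        """Idempotent: simply divides total by count."""
        return self.total / self.count if self.count else 0.0

m = StreamingMean()
m.update([1.0, 2.0, 3.0])
print(m.update([4.0], weights=[2.0]))  # (1 + 2 + 3 + 4*2) / (3 + 2) = 2.8
```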

ValueTuple<object, Tensor> mean_absolute_error(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the mean absolute error between the labels and predictions.

The `mean_absolute_error` function creates two local variables, `total` and `count` that are used to compute the mean absolute error. This average is weighted by `weights`, and it is ultimately returned as `mean_absolute_error`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_absolute_error`. Internally, an `absolute_errors` operation computes the absolute value of the differences between `predictions` and `labels`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `absolute_errors`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of the same shape as `predictions`.
IGraphNodeBase predictions
A `Tensor` of arbitrary shape.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_absolute_error` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

object mean_absolute_error_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Computes the mean absolute error between the labels and predictions.

The `mean_absolute_error` function creates two local variables, `total` and `count` that are used to compute the mean absolute error. This average is weighted by `weights`, and it is ultimately returned as `mean_absolute_error`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_absolute_error`. Internally, an `absolute_errors` operation computes the absolute value of the differences between `predictions` and `labels`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `absolute_errors`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` of the same shape as `predictions`.
object predictions
A `Tensor` of arbitrary shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `mean_absolute_error` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
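The `absolute_errors` accumulation described above reduces, for a single batch, to the following plain-Python sketch (illustrative only; the function name is hypothetical and not part of this API):

```python
def mean_absolute_error(labels, predictions, weights=None):
    """Weighted mean of |prediction - label|: total accumulates
    sum(w * |p - y|), count accumulates sum(w); result is total / count."""
    if weights is None:
        weights = [1.0] * len(labels)  # weights default to 1
    total = sum(w * abs(p - y) for y, p, w in zip(labels, predictions, weights))
    count = sum(weights)
    return total / count

# Errors are 0.5, 0.0, 1.0, so the mean absolute error is 0.5.
print(mean_absolute_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # 0.5
```

In the streaming form, `total` and `count` persist across calls to `update_op` rather than being recomputed per batch.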

object mean_cosine_distance(IGraphNodeBase labels, IGraphNodeBase predictions, int dim, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the cosine distance between the labels and predictions.

The `mean_cosine_distance` function creates two local variables, `total` and `count` that are used to compute the average cosine distance between `predictions` and `labels`. This average is weighted by `weights`, and it is ultimately returned as `mean_distance`, which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_distance`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of arbitrary shape.
IGraphNodeBase predictions
A `Tensor` of the same shape as `labels`.
int dim
The dimension along which the cosine distance is computed.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension). Also, dimension `dim` must be `1`.
IEnumerable<string> metrics_collections
An optional list of collections that the metric value variable should be added to.
IEnumerable<string> updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
object

object mean_cosine_distance_dyn(object labels, object predictions, object dim, object weights, object metrics_collections, object updates_collections, object name)

Computes the cosine distance between the labels and predictions.

The `mean_cosine_distance` function creates two local variables, `total` and `count` that are used to compute the average cosine distance between `predictions` and `labels`. This average is weighted by `weights`, and it is ultimately returned as `mean_distance`, which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_distance`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` of arbitrary shape.
object predictions
A `Tensor` of the same shape as `labels`.
object dim
The dimension along which the cosine distance is computed.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension). Also, dimension `dim` must be `1`.
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
object name
An optional variable_scope name.
Returns
object
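The per-example distance that `total` and `count` average can be sketched in plain Python. This is an illustrative sketch, not this API: it normalizes each vector pair explicitly, whereas the metric above reduces along dimension `dim` of batched tensors.

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def mean_cosine_distance(labels, predictions):
    """Average the per-example cosine distances (total / count)."""
    distances = [cosine_distance(u, v) for u, v in zip(labels, predictions)]
    return sum(distances) / len(distances)

# Orthogonal vectors have cosine similarity 0, hence distance 1.
print(mean_cosine_distance([[1.0, 0.0]], [[0.0, 1.0]]))  # 1.0
```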

object mean_dyn(object values, object weights, object metrics_collections, object updates_collections, object name)

Computes the (weighted) mean of the given values.

The `mean` function creates two local variables, `total` and `count` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object values
A `Tensor` of arbitrary dimensions.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
object metrics_collections
An optional list of collections that `mean` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> mean_iou(IGraphNodeBase labels, IGraphNodeBase predictions, int num_classes, IDictionary<object, object> weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IDictionary<object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_iou(IDictionary<object, object> labels, IDictionary<object, object> predictions, int num_classes, IDictionary<object, object> weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IDictionary<object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_iou(IGraphNodeBase labels, IDictionary<object, object> predictions, int num_classes, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_iou(IGraphNodeBase labels, IDictionary<object, object> predictions, int num_classes, IDictionary<object, object> weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IDictionary<object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
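The confusion-matrix computation described above can be sketched in plain NumPy. This is an illustrative, single-pass version of the metric's math, not the LostTech.TensorFlow implementation: the function name, signature, and the exclusion of classes absent from the data are assumptions based on the description (IOU = true_positive / (true_positive + false_positive + false_negative), accumulated in a weighted [num_classes, num_classes] confusion matrix).

```python
import numpy as np

def mean_iou(labels, predictions, num_classes, weights=None):
    """Illustrative mean IoU via a weighted confusion matrix
    (a sketch of the semantics above, not the library API)."""
    labels = np.ravel(labels)
    predictions = np.ravel(predictions)
    if weights is None:
        # Weights default to 1; weights of 0 mask values.
        weights = np.ones_like(labels, dtype=np.float64)
    else:
        weights = np.ravel(weights).astype(np.float64)
    # Accumulate the weighted [num_classes, num_classes] confusion matrix.
    cm = np.zeros((num_classes, num_classes), dtype=np.float64)
    for t, p, w in zip(labels, predictions, weights):
        cm[t, p] += w
    # Per-class IoU = TP / (TP + FP + FN); row sum = TP + FN,
    # column sum = TP + FP. Classes with a zero denominator are
    # excluded from the average.
    tp = np.diag(cm)
    denom = cm.sum(axis=0) + cm.sum(axis=1) - tp
    valid = denom > 0
    return (tp[valid] / denom[valid]).mean()
```

For streaming estimation, the same confusion matrix would simply be accumulated across batches before the final division, which is what the `update_op` / `mean_iou` pair models.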

ValueTuple<object, Tensor> mean_iou(IDictionary<object, object> labels, IGraphNodeBase predictions, int num_classes, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_iou(IDictionary<object, object> labels, IDictionary<object, object> predictions, int num_classes, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_iou(IGraphNodeBase labels, IGraphNodeBase predictions, int num_classes, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_iou(IDictionary<object, object> labels, IGraphNodeBase predictions, int num_classes, IDictionary<object, object> weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
IDictionary<object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_iou` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

object mean_iou_dyn(object labels, object predictions, object num_classes, object weights, object metrics_collections, object updates_collections, object name)

Calculate per-step mean Intersection-Over-Union (mIOU).

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by `weights`, and mIOU is then calculated from it.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_iou`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
object predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
object num_classes
The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `mean_iou` should be added to.
object updates_collections
An optional list of collections `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> mean_per_class_accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
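The two `[num_classes]`-shaped accumulators mentioned in the `num_classes` description (weighted totals and weighted correct counts per true class) can be sketched in NumPy. This is a hedged illustration of the metric's math, not the LostTech.TensorFlow implementation; the function name, signature, and the restriction of the mean to classes present in `labels` are assumptions.

```python
import numpy as np

def mean_per_class_accuracy(labels, predictions, num_classes, weights=None):
    """Illustrative per-class accuracy averaged over classes
    (a sketch of the semantics above, not the library API)."""
    labels = np.ravel(labels)
    predictions = np.ravel(predictions)
    if weights is None:
        # Weights default to 1; weights of 0 mask values.
        weights = np.ones_like(labels, dtype=np.float64)
    else:
        weights = np.ravel(weights).astype(np.float64)
    # Two [num_classes] accumulators: weighted example count per true
    # class, and weighted correct predictions per true class.
    total = np.zeros(num_classes, dtype=np.float64)
    correct = np.zeros(num_classes, dtype=np.float64)
    for t, p, w in zip(labels, predictions, weights):
        total[t] += w
        correct[t] += w * (t == p)
    # Average the per-class accuracies over classes actually seen.
    seen = total > 0
    return (correct[seen] / total[seen]).mean()
```

Streaming estimation follows the same pattern: `update_op` adds each batch's contributions into the two accumulators, and the metric value is recomputed from them.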

ValueTuple<object, Tensor> mean_per_class_accuracy(IndexedSlices labels, IGraphNodeBase predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IndexedSlices labels, IDictionary<object, object> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IGraphNodeBase labels, IDictionary<object, object> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IGraphNodeBase labels, IndexedSlices predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IndexedSlices predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IGraphNodeBase labels, IGraphNodeBase predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IndexedSlices labels, IndexedSlices predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IndexedSlices predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IndexedSlices predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IndexedSlices predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IGraphNodeBase labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IndexedSlices labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IDictionary<object, object> labels, IndexedSlices predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of that.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IndexedSlices predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The number of possible labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IDictionary<object, object> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of those per-class accuracies.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The number of possible labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IDictionary<object, object> labels, IDictionary<object, object> predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of those per-class accuracies.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IDictionary<object, object> predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The number of possible labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_per_class_accuracy(IDictionary<object, object> labels, IGraphNodeBase predictions, int num_classes, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of those per-class accuracies.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
IGraphNodeBase predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
int num_classes
The number of possible labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

object mean_per_class_accuracy_dyn(object labels, object predictions, object num_classes, object weights, object metrics_collections, object updates_collections, object name)

Calculates the mean of the per-class accuracies.

Calculates the accuracy for each class, then takes the mean of those per-class accuracies.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates the accuracy of each class and returns them.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` of ground truth labels with shape [batch size] and of type `int32` or `int64`. The tensor will be flattened if its rank > 1.
object predictions
A `Tensor` of prediction results for semantic labels, whose shape is [batch size] and type `int32` or `int64`. The tensor will be flattened if its rank > 1.
object num_classes
The number of possible labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `mean_per_class_accuracy` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> mean_relative_error(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase normalizer, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the mean relative error by normalizing with the given values.

The `mean_relative_error` function creates two local variables, `total` and `count` that are used to compute the mean relative absolute error. This average is weighted by `weights`, and it is ultimately returned as `mean_relative_error`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_relative_error`. Internally, a `relative_errors` operation divides the absolute value of the differences between `predictions` and `labels` by the `normalizer`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `relative_errors`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of the same shape as `predictions`.
IGraphNodeBase predictions
A `Tensor` of arbitrary shape.
IGraphNodeBase normalizer
A `Tensor` of the same shape as `predictions`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_relative_error` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
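
The total/count bookkeeping described above reduces to the following plain-Python sketch (illustrative only, not the graph-op implementation):

```python
def mean_relative_error(labels, predictions, normalizer, weights=None):
    if weights is None:
        weights = [1.0] * len(labels)
    # relative_errors = |predictions - labels| / normalizer
    errs = [abs(p - l) / n for l, p, n in zip(labels, predictions, normalizer)]
    total = sum(w * e for w, e in zip(weights, errs))  # weighted error sum
    count = sum(weights)                               # weight sum
    return total / count

# (|3-2|/2 + |2-4|/4) / 2 = (0.5 + 0.5) / 2 = 0.5
print(mean_relative_error([2.0, 4.0], [3.0, 2.0], [2.0, 4.0]))
```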

object mean_relative_error_dyn(object labels, object predictions, object normalizer, object weights, object metrics_collections, object updates_collections, object name)

Computes the mean relative error by normalizing with the given values.

The `mean_relative_error` function creates two local variables, `total` and `count` that are used to compute the mean relative absolute error. This average is weighted by `weights`, and it is ultimately returned as `mean_relative_error`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_relative_error`. Internally, a `relative_errors` operation divides the absolute value of the differences between `predictions` and `labels` by the `normalizer`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `relative_errors`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` of the same shape as `predictions`.
object predictions
A `Tensor` of arbitrary shape.
object normalizer
A `Tensor` of the same shape as `predictions`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `mean_relative_error` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> mean_squared_error(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the mean squared error between the labels and predictions.

The `mean_squared_error` function creates two local variables, `total` and `count` that are used to compute the mean squared error. This average is weighted by `weights`, and it is ultimately returned as `mean_squared_error`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_squared_error`. Internally, a `squared_error` operation computes the element-wise square of the difference between `predictions` and `labels`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `squared_error`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of the same shape as `predictions`.
IGraphNodeBase predictions
A `Tensor` of arbitrary shape.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean_squared_error` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
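
A minimal plain-Python sketch of the same total/count arithmetic (illustrative only, not the graph-op implementation):

```python
def mean_squared_error(labels, predictions, weights=None):
    if weights is None:
        weights = [1.0] * len(labels)
    # squared_error, element-wise
    sq = [(p - l) ** 2 for l, p in zip(labels, predictions)]
    # weighted sum of squared errors divided by weight sum
    return sum(w * e for w, e in zip(weights, sq)) / sum(weights)

# (0 + 1 + 4) / 3 = 1.666...
print(mean_squared_error([1.0, 2.0, 3.0], [1.0, 3.0, 5.0]))
```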

object mean_squared_error_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Computes the mean squared error between the labels and predictions.

The `mean_squared_error` function creates two local variables, `total` and `count` that are used to compute the mean squared error. This average is weighted by `weights`, and it is ultimately returned as `mean_squared_error`: an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean_squared_error`. Internally, a `squared_error` operation computes the element-wise square of the difference between `predictions` and `labels`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `squared_error`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` of the same shape as `predictions`.
object predictions
A `Tensor` of arbitrary shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `mean_squared_error` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> mean_tensor(IGraphNodeBase values, IDictionary<object, object> weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the element-wise (weighted) mean of the given tensors.

In contrast to the `mean` function, which returns a scalar with the mean, this function returns an average tensor with the same shape as the input tensors.

The `mean_tensor` function creates two local variables, `total_tensor` and `count_tensor` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
IDictionary<object, object> weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
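
The streaming element-wise behavior can be sketched with a closure over `total`/`count` accumulators, one per element (illustrative only; the hypothetical `make_mean_tensor` helper below stands in for the graph-variable machinery and works on flat lists):

```python
def make_mean_tensor(num_elements):
    total = [0.0] * num_elements  # per-element running weighted sum
    count = [0.0] * num_elements  # per-element running weight sum
    def update(values, weights=None):
        w = weights if weights is not None else [1.0] * num_elements
        for i in range(num_elements):
            total[i] += values[i] * w[i]
            count[i] += w[i]
        # current element-wise mean, same shape as the inputs
        return [t / c if c else 0.0 for t, c in zip(total, count)]
    return update

update = make_mean_tensor(2)
update([1.0, 2.0])
print(update([3.0, 6.0]))  # element-wise mean over both batches: [2.0, 4.0]
```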

ValueTuple<object, Tensor> mean_tensor(IGraphNodeBase values, ValueTuple<PythonClassContainer, PythonClassContainer> weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the element-wise (weighted) mean of the given tensors.

In contrast to the `mean` function, which returns a scalar with the mean, this function returns an average tensor with the same shape as the input tensors.

The `mean_tensor` function creates two local variables, `total_tensor` and `count_tensor` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
ValueTuple<PythonClassContainer, PythonClassContainer> weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_tensor(IGraphNodeBase values, IndexedSlices weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the element-wise (weighted) mean of the given tensors.

In contrast to the `mean` function, which returns a scalar with the mean, this function returns an average tensor with the same shape as the input tensors.

The `mean_tensor` function creates two local variables, `total_tensor` and `count_tensor` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
IndexedSlices weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> mean_tensor(IGraphNodeBase values, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the element-wise (weighted) mean of the given tensors.

In contrast to the `mean` function, which returns a scalar with the mean, this function returns an average tensor with the same shape as the input tensors.

The `mean_tensor` function creates two local variables, `total_tensor` and `count_tensor` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A `Tensor` of arbitrary dimensions.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `mean` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

object mean_tensor_dyn(object values, object weights, object metrics_collections, object updates_collections, object name)

Computes the element-wise (weighted) mean of the given tensors.

In contrast to the `mean` function, which returns a scalar with the mean, this function returns an average tensor with the same shape as the input tensors.

The `mean_tensor` function creates two local variables, `total_tensor` and `count_tensor` that are used to compute the average of `values`. This average is ultimately returned as `mean` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `mean`. `update_op` increments `total` with the reduced sum of the product of `values` and `weights`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object values
A `Tensor` of arbitrary dimensions.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
object metrics_collections
An optional list of collections that `mean` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> percentage_below(IGraphNodeBase values, int threshold, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the percentage of values less than the given threshold.

The `percentage_below` function creates two local variables, `total` and `count` that are used to compute the percentage of `values` that fall below `threshold`. This rate is weighted by `weights`, and it is ultimately returned as `percentage` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `percentage`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase values
A numeric `Tensor` of arbitrary size.
int threshold
A scalar threshold.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that the metric value variable should be added to.
IEnumerable<string> updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
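
The weighted fraction described above amounts to the following plain-Python sketch (illustrative only, not the graph-op implementation):

```python
def percentage_below(values, threshold, weights=None):
    if weights is None:
        weights = [1.0] * len(values)
    # total: weight mass of values strictly below the threshold
    total = sum(w for v, w in zip(values, weights) if v < threshold)
    count = sum(weights)  # total weight mass
    return total / count

print(percentage_below([1, 2, 3, 4], 3))  # 2 of 4 values below 3 -> 0.5
```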

object percentage_below_dyn(object values, object threshold, object weights, object metrics_collections, object updates_collections, object name)

Computes the percentage of values less than the given threshold.

The `percentage_below` function creates two local variables, `total` and `count` that are used to compute the percentage of `values` that fall below `threshold`. This rate is weighted by `weights`, and it is ultimately returned as `percentage` which is an idempotent operation that simply divides `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `percentage`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object values
A numeric `Tensor` of arbitrary size.
object threshold
A scalar threshold.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `values`, and must be broadcastable to `values` (i.e., all dimensions must be either `1`, or the same as the corresponding `values` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, object> precision(IEnumerable<object> labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>
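
The true-positive/false-positive accounting reduces to this plain-Python sketch over boolean inputs (illustrative only, not the graph-op implementation):

```python
def precision(labels, predictions, weights=None):
    if weights is None:
        weights = [1.0] * len(labels)
    # true positives: predicted True and actually True (weighted)
    tp = sum(w for l, p, w in zip(labels, predictions, weights) if p and l)
    # false positives: predicted True but actually False (weighted)
    fp = sum(w for l, p, w in zip(labels, predictions, weights) if p and not l)
    return tp / (tp + fp) if tp + fp else 0.0

# tp = 2, fp = 1 -> 2/3
print(precision([True, False, True, False], [True, True, True, False]))
```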

ValueTuple<object, object> precision(IndexedSlices labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IDictionary<object, object> labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>
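The two-variable streaming design described above (local `true_positives` and `false_positives` accumulators, an `update_op` that folds in each batch, and an idempotent read that divides one by their sum) can be sketched as follows. This is a hand-rolled Python illustration of the semantics, assuming nothing beyond the docstring; the class and method names are hypothetical and not part of this library's API.

```python
class StreamingPrecision:
    """Mimics the streaming contract of tf.metrics.precision."""

    def __init__(self):
        self.true_positives = 0.0   # local variable `true_positives`
        self.false_positives = 0.0  # local variable `false_positives`

    def update_op(self, labels, predictions, weights=None):
        """Fold one batch into the accumulators, then return the metric."""
        if weights is None:
            weights = [1.0] * len(labels)
        for l, p, w in zip(labels, predictions, weights):
            if p:  # only predicted positives affect precision
                if l:
                    self.true_positives += w
                else:
                    self.false_positives += w
        return self.precision()

    def precision(self):
        """Idempotent read: does not change the accumulators."""
        denom = self.true_positives + self.false_positives
        return self.true_positives / denom if denom else 0.0

m = StreamingPrecision()
m.update_op([True, False], [True, True])   # batch 1: TP=1, FP=1 -> 0.5
m.update_op([True, True],  [True, False])  # batch 2: TP=2, FP=1 -> 2/3
print(m.precision())
```

Reading `precision` any number of times yields the same value until the next `update_op`, which is what "idempotent operation" means in the description above.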

ValueTuple<object, object> precision(object labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IDictionary<object, object> labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IndexedSlices labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IDictionary<object, object> labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(object labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(object labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(object labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IGraphNodeBase labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ValueTuple<PythonClassContainer, PythonClassContainer> labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IEnumerable<object> labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IGraphNodeBase labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ValueTuple<PythonClassContainer, PythonClassContainer> labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IEnumerable<object> labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IGraphNodeBase labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IEnumerable<object> labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IGraphNodeBase labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IDictionary<object, object> labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ndarray labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>
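The streaming accumulation described above can be illustrated with a small hand computation. This is a plain-Python sketch of the `true_positives` / `false_positives` bookkeeping (hypothetical helper names, not this library's API):

```python
# Streaming precision: accumulate weighted true/false positives across
# batches, mirroring the two local variables described above.
def update(state, labels, predictions, weights=None):
    tp, fp = state
    if weights is None:
        weights = [1.0] * len(labels)  # weights default to 1
    for y, p, w in zip(labels, predictions, weights):
        if bool(p):            # only positive predictions contribute
            if bool(y):
                tp += w        # correct positive prediction
            else:
                fp += w        # incorrect positive prediction
    return tp, fp

def precision(state):
    tp, fp = state
    return tp / (tp + fp) if tp + fp else 0.0

state = (0.0, 0.0)
state = update(state, labels=[1, 0, 1, 1], predictions=[1, 1, 0, 1])
state = update(state, labels=[0, 1], predictions=[1, 1])
# across both batches: tp = 3, fp = 2, so precision = 3 / 5 = 0.6
```

Calling `update` per batch and `precision` at the end matches the idempotent-value / `update_op` split: the returned metric only ever divides the two accumulators.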

ValueTuple<object, object> precision(double labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IndexedSlices labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(double labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(double labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(double labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ndarray labels, PythonClassContainer predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(IndexedSlices labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ndarray labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision(ndarray labels, object predictions, int weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
int weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, Tensor> precision_at_k(object labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate precision by considering only the entries in the batch for which `class_id` is in the top-k highest `predictions`, and computing the fraction of them for which `class_id` is indeed a correct label. If `class_id` is not specified, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.

`precision_at_k` creates two local variables, `true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute the precision@k frequency. This frequency is ultimately returned as `precision_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by total (`true_positive_at_<k>` + `false_positive_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>
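The top-k matching described above (without a `class_id`) can be sketched in plain Python: take the k highest-scoring classes per example and count how many fall in that example's label set. This illustrates the metric's math only, with hypothetical names and unit weights:

```python
# Precision@k: among the top-k scoring classes of each example, what
# fraction appear in that example's set of true labels?
def precision_at_k(labels, scores, k):
    tp = fp = 0
    for label_set, row in zip(labels, scores):
        # indices of the k highest scores (the `top_k` step)
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        for cls in top_k:
            if cls in label_set:
                tp += 1    # top-k class found among the true labels
            else:
                fp += 1    # top-k class not a true label
    return tp / (tp + fp)

labels = [{0, 2}, {1}]                       # sparse label sets per example
scores = [[0.9, 0.1, 0.8], [0.3, 0.6, 0.1]]  # per-class logits/scores
# k=2: example 0 hits classes 0 and 2; example 1 hits class 1, misses 0
# => tp = 3, fp = 1, precision@2 = 0.75
```

Note that `tp + fp` is always `k * num_examples` in the unweighted case, matching the "how often on average a top-k class is correct" reading above.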

ValueTuple<object, Tensor> precision_at_k(object labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate precision by considering only the entries in the batch for which `class_id` is in the top-k highest `predictions`, and computing the fraction of them for which `class_id` is indeed a correct label. If `class_id` is not specified, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.

`precision_at_k` creates two local variables, `true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute the precision@k frequency. This frequency is ultimately returned as `precision_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by total (`true_positive_at_<k>` + `false_positive_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> precision_at_k(ndarray labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate precision by considering only the entries in the batch for which `class_id` is in the top-k highest `predictions`, and computing the fraction of them for which `class_id` is indeed a correct label. If `class_id` is not specified, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.

`precision_at_k` creates two local variables, `true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute the precision@k frequency. This frequency is ultimately returned as `precision_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by total (`true_positive_at_<k>` + `false_positive_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> precision_at_k(ndarray labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate precision by considering only the entries in the batch for which `class_id` is in the top-k highest `predictions`, and computing the fraction of them for which `class_id` is indeed a correct label. If `class_id` is not specified, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.

`precision_at_k` creates two local variables, `true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute the precision@k frequency. This frequency is ultimately returned as `precision_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by total (`true_positive_at_<k>` + `false_positive_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

object precision_at_k_dyn(object labels, object predictions, object k, object class_id, object weights, object metrics_collections, object updates_collections, object name)

Computes precision@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate precision by considering only the entries in the batch for which `class_id` is in the top-k highest `predictions`, and computing the fraction of them for which `class_id` is indeed a correct label. If `class_id` is not specified, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.

`precision_at_k` creates two local variables, `true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute the precision@k frequency. This frequency is ultimately returned as `precision_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by total (`true_positive_at_<k>` + `false_positive_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
object predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
object k
Integer, k for @k metric.
object class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN.
object weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
object name
Name of new update operation, and namespace for other dependent ops.
Returns
object

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>
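The formula above (`true_positives[i] / (true_positives[i] + false_positives[i])`, weighted, per threshold) can be sketched in plain Python. This is an illustrative model of the semantics only, not a call into the LostTech.TensorFlow API; the function name and list-based inputs are invented for the example:

```python
def precision_at_thresholds(labels, predictions, thresholds, weights=None):
    """Per-threshold weighted precision, mirroring the documented formula."""
    # weights default to 1; a weight of 0 masks that element out
    if weights is None:
        weights = [1.0] * len(predictions)
    precisions = []
    for t in thresholds:
        # total weight of predictions above t whose label is True
        tp = sum(w for y, p, w in zip(labels, predictions, weights) if p > t and y)
        # total weight of predictions above t whose label is False
        fp = sum(w for y, p, w in zip(labels, predictions, weights) if p > t and not y)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
    return precisions

print(precision_at_thresholds([1, 0, 1, 1], [0.9, 0.6, 0.4, 0.2], [0.5, 0.3]))
# → [0.5, 0.6666666666666666]
```

At threshold 0.5 only 0.9 (a true positive) and 0.6 (a false positive) exceed it, giving 1/2; lowering the threshold to 0.3 admits 0.4 as another true positive, giving 2/3.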

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>
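The streaming behavior of `update_op` described above can be modeled with a small accumulator class: local counters are incremented batch by batch, and the metric value is an idempotent read of those counters. This is a hedged Python sketch of the semantics; the class and method names are invented for illustration:

```python
class StreamingPrecisionAtThresholds:
    """Models the local-variable accumulation behind `update_op`."""

    def __init__(self, thresholds):
        self.thresholds = thresholds
        # local variables, one accumulator per threshold
        self.tp = [0.0] * len(thresholds)
        self.fp = [0.0] * len(thresholds)

    def update(self, labels, predictions, weights=None):
        """Analogue of `update_op`: fold one batch into the accumulators."""
        if weights is None:
            weights = [1.0] * len(predictions)
        for i, t in enumerate(self.thresholds):
            for y, p, w in zip(labels, predictions, weights):
                if p > t:
                    if y:
                        self.tp[i] += w
                    else:
                        self.fp[i] += w
        return self.result()

    def result(self):
        """Analogue of the returned `precision` value: a pure read."""
        return [tp / (tp + fp) if tp + fp else 0.0
                for tp, fp in zip(self.tp, self.fp)]
```

After two batches the accumulators reflect all data seen so far, so `result()` returns the precision over the whole stream, and calling it repeatedly does not change the state.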

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> precision_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `precision` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

object precision_at_thresholds_dyn(object labels, object predictions, object thresholds, object weights, object metrics_collections, object updates_collections, object name)

Computes precision values for different `thresholds` on `predictions`.

The `precision_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `precision[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of values in `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + false_positives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `precision` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
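All of these overloads share the same streaming contract: an `update_op` that accumulates the local variables across batches, and an idempotent read that divides the accumulated counts. A minimal plain-Python sketch of that split (not the library's graph-based implementation, which stores these counts in TensorFlow local variables):

```python
class StreamingPrecisionAtThresholds:
    """Accumulates weighted true/false positives per threshold across batches."""

    def __init__(self, thresholds):
        self.thresholds = thresholds
        self.true_positives = [0.0] * len(thresholds)
        self.false_positives = [0.0] * len(thresholds)

    def update(self, labels, predictions, weights=None):
        """Plays the role of `update_op`: folds one batch into the counters."""
        if weights is None:
            weights = [1.0] * len(predictions)
        for i, t in enumerate(self.thresholds):
            for label, pred, w in zip(labels, predictions, weights):
                if pred > t:
                    if label:
                        self.true_positives[i] += w
                    else:
                        self.false_positives[i] += w

    def result(self):
        """The idempotent read: dividing the counters changes no state."""
        return [tp / (tp + fp) if tp + fp > 0 else 0.0
                for tp, fp in zip(self.true_positives, self.false_positives)]
```

Calling `result()` repeatedly returns the same value until the next `update`, mirroring why the returned `precision` tensor is safe to evaluate any number of times.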

ValueTuple<object, Tensor> precision_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and predictions has shape [batch size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>
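Since `precision_at_top_k` consumes top-`k` class indices directly (rather than logits), the metric reduces to counting how many predicted indices appear among the true labels, optionally restricted to one `class_id`. A plain-Python sketch of that counting, with `labels` simplified to one set of true class IDs per example (an illustration only, not the library's sparse-tensor implementation):

```python
def precision_at_top_k(labels, predictions_idx, class_id=None):
    """labels: list of sets of true class IDs, one per example.
    predictions_idx: list of top-k predicted class-ID lists, one per example."""
    hits = total = 0.0
    for true_classes, top_k in zip(labels, predictions_idx):
        for cls in top_k:
            if class_id is not None and cls != class_id:
                continue          # binary metric: count only the class of interest
            total += 1.0
            if cls in true_classes:
                hits += 1.0       # predicted class appears among the true labels
    # Matches the documented behavior: NaN when class_id never occurs.
    return float('nan') if total == 0 else hits / total
```

With `class_id` set, the denominator is the number of times that class was predicted in the top `k`, which is why an out-of-range `class_id` yields NaN.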

ValueTuple<object, Tensor> precision_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>
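All `precision_at_top_k` overloads share the same semantics: precision@k is the fraction of the top-`k` predicted class indices that actually appear in the example's label set, accumulated over the batch. As an illustrative sketch only (plain Python, not this binding's API; the real op builds graph variables and an `update_op`), the core computation looks like:

```python
def precision_at_top_k(labels, predictions_idx, class_id=None):
    """Sketch of precision@k semantics.

    labels: one set of true class ids per example.
    predictions_idx: one list of top-k predicted class ids per example.
    Returns true_positives / (true_positives + false_positives),
    or NaN when nothing was counted (mirroring the documented behavior).
    """
    tp = fp = 0
    for label_set, top_k in zip(labels, predictions_idx):
        for c in top_k:
            if class_id is not None and c != class_id:
                continue  # binary metric restricted to a single class id
            if c in label_set:
                tp += 1
            else:
                fp += 1
    return tp / (tp + fp) if (tp + fp) else float("nan")

# Two examples, top-2 predictions each:
labels = [{0, 2}, {1}]
predictions_idx = [[2, 1], [1, 3]]
print(precision_at_top_k(labels, predictions_idx))              # 2/4 -> 0.5
print(precision_at_top_k(labels, predictions_idx, class_id=2))  # 1/1 -> 1.0
```

Note how passing `class_id` turns the metric into a binary precision for that one class, which is why an out-of-range `class_id` counts nothing and yields NaN.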

ValueTuple<object, Tensor> precision_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> precision_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> precision_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> precision_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> precision_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> precision_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

object precision_at_top_k_dyn(object labels, object predictions_idx, object k, object class_id, object weights, object metrics_collections, object updates_collections, object name)

Computes precision@k of the predictions with respect to sparse labels.

Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored.
object predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
object k
Integer, k for @k metric. Only used for the default op name.
object class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
object weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
object name
Name of new update operation, and namespace for other dependent ops.
Returns
object

object precision_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Computes the precision of the predictions with respect to the labels.

The `precision` function creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `precision` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
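The two local variables and the streaming `update_op` described above can be sketched as a plain accumulator (illustrative Python only; the class name and shape are hypothetical, not this binding's API):

```python
class StreamingPrecision:
    """Sketch of streaming precision: maintains the two local variables
    (true_positives, false_positives) and returns their ratio."""

    def __init__(self):
        self.true_positives = 0.0
        self.false_positives = 0.0

    def update(self, labels, predictions, weights=None):
        """Analogue of update_op: fold one batch into the accumulators."""
        if weights is None:
            weights = [1.0] * len(labels)  # weights default to 1
        for y, p, w in zip(labels, predictions, weights):
            if bool(p):  # precision only looks at predicted positives
                if bool(y):
                    self.true_positives += w
                else:
                    self.false_positives += w
        return self.result()

    def result(self):
        denom = self.true_positives + self.false_positives
        return self.true_positives / denom if denom else 0.0

m = StreamingPrecision()
print(m.update([1, 0, 1], [1, 1, 0]))  # batch 1: tp=1, fp=1 -> 0.5
print(m.update([1], [1]))              # batch 2: tp=2, fp=1 -> 2/3
```

Because the accumulators persist across calls, the second result reflects both batches, which is exactly the "estimation of the metric over a stream of data" behavior of the returned `update_op`.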

ValueTuple<object, object> recall(IDictionary<object, object> labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>
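Every `recall` overload computes the same quantity: `true_positives / (true_positives + false_negatives)`, with `weights` scaling each example's contribution and a weight of 0 masking it. A minimal single-batch sketch in plain Python (illustrative only, not this binding's API):

```python
def recall(labels, predictions, weights=None):
    """Sketch of weighted recall: tp / (tp + fn).

    labels/predictions are treated as booleans, matching the
    "will be cast to bool" behavior documented above.
    """
    tp = fn = 0.0
    if weights is None:
        weights = [1.0] * len(labels)  # weights default to 1
    for y, p, w in zip(labels, predictions, weights):
        if bool(y):  # recall only looks at actual positives
            if bool(p):
                tp += w
            else:
                fn += w  # a weight of 0 masks this value entirely
    return tp / (tp + fn) if (tp + fn) else 0.0

labels = [1, 1, 0, 1]
preds = [1, 0, 1, 1]
print(recall(labels, preds))                # 2 of 3 positives found -> 2/3
print(recall(labels, preds, [1, 0, 1, 1]))  # masking the miss -> 1.0
```

The second call shows why weights of 0 are the documented masking mechanism: the missed positive at index 1 simply drops out of both numerator and denominator.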

ValueTuple<object, object> recall(IEnumerable<object> labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(IGraphNodeBase labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(ValueTuple<PythonClassContainer, PythonClassContainer> labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(IndexedSlices labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(IEnumerable<object> labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(IDictionary<object, object> labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(IndexedSlices labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(double labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(double labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(ndarray labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(ndarray labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(IGraphNodeBase labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(object labels, PythonClassContainer predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall(object labels, object predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, Tensor> recall_at_k(object labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, ValueTuple<double> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
ValueTuple<double> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>
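The recall@k semantics described above can be sketched in plain Python for the common single-label case (num_labels=1). This is a hedged illustration of the top-k and `class_id` logic only, not the LostTech.TensorFlow API; `recall_at_k` is a hypothetical standalone function here, whereas the real metric is streaming and weighted.

```python
# Sketch of recall@k for single-label rows: each row of `logits` scores
# num_classes classes, and each entry of `labels` is that row's true class.
# With class_id set, only rows whose label equals class_id are counted.

def recall_at_k(labels, logits, k, class_id=None):
    tp = fn = 0
    for true_class, row in zip(labels, logits):
        if class_id is not None and true_class != class_id:
            continue  # row's label does not contain class_id: skip it
        # indices of the k highest-scoring classes for this row
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        if true_class in top_k:
            tp += 1   # true_positive_at_<k>
        else:
            fn += 1   # false_negative_at_<k>
    return tp / (tp + fn) if (tp + fn) else float("nan")

labels = [0, 1, 2]
logits = [[0.9, 0.1, 0.0],   # top-2: classes 0, 1 -> hit
          [0.2, 0.3, 0.5],   # top-2: classes 2, 1 -> hit
          [0.6, 0.3, 0.1]]   # top-2: classes 0, 1 -> miss
print(recall_at_k(labels, logits, k=2))              # 2/3
print(recall_at_k(labels, logits, k=2, class_id=2))  # 0.0
```

With `class_id=2`, only the third row is considered, and class 2 is not among its top-2 predictions, so the result is 0.0.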

ValueTuple<object, Tensor> recall_at_k(object labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_k(ndarray labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, ValueTuple<double> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
ValueTuple<double> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_k(ndarray labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_k(ndarray labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_k(object labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_k(IGraphNodeBase labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, ValueTuple<double> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If class_id is outside this range, the method returns NAN.
ValueTuple<double> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>
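The accumulation above is built from graph ops, but the quantity being computed can be sketched in plain Python. The `recall_at_k` helper below is hypothetical (in-memory lists instead of tensors, no streaming `update_op`); it is shown only to make the top-k and `class_id` semantics concrete:

```python
def recall_at_k(labels, logits, k, class_id=None):
    """Hypothetical in-memory sketch of the recall@k math described above.

    labels: one set of true class IDs per batch entry.
    logits: one list of per-class scores per batch entry.
    """
    # Indices of the k highest-scoring classes for each entry.
    top_k = [sorted(range(len(scores)), key=lambda c: -scores[c])[:k]
             for scores in logits]
    if class_id is not None:
        # Only entries whose labels contain class_id participate.
        relevant = [(lab, tk) for lab, tk in zip(labels, top_k)
                    if class_id in lab]
        if not relevant:
            return float("nan")
        hits = sum(1 for lab, tk in relevant if class_id in tk)
        return hits / len(relevant)
    # Otherwise: true positives over (true positives + false negatives),
    # pooled across all labels of all entries.
    tp = sum(len(lab & set(tk)) for lab, tk in zip(labels, top_k))
    fn = sum(len(lab - set(tk)) for lab, tk in zip(labels, top_k))
    return tp / (tp + fn)
```

With labels `[{0}, {1}]` and logits `[[0.9, 0.1, 0.0], [0.2, 0.3, 0.5]]`, recall@1 is 0.5 (only the first entry's label lands in its top-1) and recall@2 is 1.0.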

ValueTuple<object, Tensor> recall_at_k(IGraphNodeBase labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, recall is calculated by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, recall is calculated as how often, on average, a class among the labels of a batch entry appears in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If class_id is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_k(IGraphNodeBase labels, IGraphNodeBase predictions, int k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, recall is calculated by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, recall is calculated as how often, on average, a class among the labels of a batch entry appears in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
int k
Integer, k for @k metric.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If class_id is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

object recall_at_k_dyn(object labels, object predictions, object k, object class_id, object weights, object metrics_collections, object updates_collections, object name)

Computes recall@k of the predictions with respect to sparse labels.

If `class_id` is specified, recall is calculated by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`. If `class_id` is not specified, recall is calculated as how often, on average, a class among the labels of a batch entry appears in the top-k `predictions`.

`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_negative_at_<k>`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
object predictions
Float `Tensor` with shape [D1,... DN, num_classes] where N >= 1. Commonly, N=1 and `predictions` has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1,... DN] must match `labels`.
object k
Integer, k for @k metric.
object class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If class_id is outside this range, the method returns NaN.
object weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
object name
Name of new update operation, and namespace for other dependent ops.
Returns
object

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>
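Setting the graph mechanics aside, the per-threshold formula `true_positives[i] / (true_positives[i] + false_negatives[i])` and the zero-weight masking can be sketched in plain Python (a hypothetical helper over in-memory lists, not the streaming op this API builds):

```python
def recall_at_thresholds(labels, predictions, thresholds, weights=None):
    """Hypothetical sketch of per-threshold recall with weight masking.

    labels: one bool per entry; predictions: one float in [0, 1] per entry;
    weights: optional per-entry floats (None means every weight is 1).
    """
    if weights is None:
        weights = [1.0] * len(labels)
    recalls = []
    for t in thresholds:
        # Weighted true positives: positive labels predicted above t.
        tp = sum(w for y, p, w in zip(labels, predictions, weights)
                 if y and p > t)
        # Weighted false negatives: positive labels predicted at or below t.
        fn = sum(w for y, p, w in zip(labels, predictions, weights)
                 if y and p <= t)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return recalls
```

An entry with weight 0 drops out of both `tp` and `fn`, which is the masking behaviour described above.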

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IDictionary<object, object> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IDictionary<object, object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>
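The note "Use weights of 0 to mask values" follows directly from the weighted sums: a zero-weight entry contributes to neither `true_positives` nor `false_negatives`. A minimal NumPy sketch of that masking effect (illustrative values, not library code):

```python
import numpy as np

# A zero weight removes an example from both accumulators, so a masked
# false negative no longer lowers recall.
labels      = np.array([True, True, True, False])
predictions = np.array([0.9, 0.2, 0.8, 0.7])
weights     = np.array([1.0, 0.0, 1.0, 1.0])  # mask out the second example

positive = predictions > 0.5
tp = np.sum(weights * (positive & labels))   # examples 0 and 2 -> 2.0
fn = np.sum(weights * (~positive & labels))  # example 1 is masked -> 0.0
print(tp / (tp + fn))
```

With the second example unmasked (weight 1), `fn` would be 1.0 and recall would drop to 2/3.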

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IGraphNodeBase labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>
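The "estimation of the metric over a stream of data" paragraph describes an accumulator pattern: the local variables persist between calls, `update_op` folds each new batch into them, and the metric value always reflects everything seen so far. A sketch of that pattern for a single threshold (assumed class and method names, not the library API):

```python
import numpy as np

# Streaming-recall sketch: accumulators persist across batches, so the
# final value covers the whole stream, not just the last batch.
class StreamingRecall:
    def __init__(self, threshold):
        self.threshold = threshold
        self.true_positives = 0.0    # analogue of the local variables
        self.false_negatives = 0.0

    def update(self, labels, predictions):
        """Analogue of update_op: accumulate, then return the running recall."""
        labels = np.asarray(labels, dtype=bool)
        positive = np.asarray(predictions, dtype=float) > self.threshold
        self.true_positives += np.sum(positive & labels)
        self.false_negatives += np.sum(~positive & labels)
        return self.result()

    def result(self):
        denom = self.true_positives + self.false_negatives
        return self.true_positives / denom if denom else 0.0

m = StreamingRecall(threshold=0.5)
m.update([True, False], [0.8, 0.9])  # batch 1: tp=1, fn=0
m.update([True, True], [0.2, 0.7])   # batch 2: tp=1, fn=1
print(m.result())                    # 2 true positives / (2 + 1)
```

Evaluating `result()` alone is idempotent, as the documentation says of the returned metric: it only divides the accumulators and changes no state.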

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, object thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, PythonFunctionContainer name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
PythonFunctionContainer name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IDictionary<object, object> predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IDictionary<object, object> predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> recall_at_thresholds(IndexedSlices labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, object weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `recall` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

object recall_at_thresholds_dyn(object labels, object predictions, object thresholds, object weights, object metrics_collections, object updates_collections, object name)

Computes various recall values for different `thresholds` on `predictions`.

The `recall_at_thresholds` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` for various values of thresholds. `recall[i]` is defined as the total weight of values in `predictions` above `thresholds[i]` whose corresponding entry in `labels` is `True`, divided by the total weight of `True` values in `labels` (`true_positives[i] / (true_positives[i] + false_negatives[i])`).

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `recall` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
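The streaming behaviour described above (local accumulator variables plus an `update_op` that folds in each batch and returns the running metric) can be sketched in plain Python; the class and method names here are illustrative stand-ins, not part of this API:

```python
# Streaming-recall sketch: accumulate true_positives and false_negatives
# across batches, with an update step analogous to update_op and an
# idempotent read analogous to the returned `recall` tensor.
class StreamingRecall:
    def __init__(self, threshold):
        self.threshold = threshold
        self.true_positives = 0.0   # local variable, as in the docs
        self.false_negatives = 0.0  # local variable, as in the docs

    def update(self, labels, predictions, weights=None):
        """Fold one batch into the accumulators, then return recall so far."""
        if weights is None:
            weights = [1.0] * len(labels)  # weights default to 1
        for y, p, w in zip(labels, predictions, weights):
            if y:
                if p > self.threshold:
                    self.true_positives += w
                else:
                    self.false_negatives += w
        return self.result()

    def result(self):
        """Idempotent read: simply divides the accumulated counts."""
        denom = self.true_positives + self.false_negatives
        return self.true_positives / denom if denom else float('nan')

m = StreamingRecall(threshold=0.5)
m.update([True, False], [0.9, 0.6])        # batch 1: 1 TP, 0 FN
print(m.update([True, True], [0.2, 0.8]))  # batch 2 adds 1 TP, 1 FN -> 2/3
```

Calling `result` repeatedly never changes the value; only `update` mutates the accumulators, which is what makes the metric safe to evaluate over a stream of data.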

ValueTuple<object, Tensor> recall_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, ValueTuple<double> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
ValueTuple<double> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>
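The recall@k computation over top-k indices can be sketched as follows; this is a hedged, plain-Python illustration (the function name is hypothetical), ignoring weights and the streaming machinery:

```python
# Sketch of recall@k over top-k class indices: a target label counts as a
# true positive when its class id appears among the top-k predicted indices
# for that example, and as a false negative otherwise. With class_id set,
# only labels equal to that class contribute (the binary per-class metric).
def recall_at_top_k(labels, predictions_idx, class_id=None):
    tp = fn = 0
    for label_set, topk in zip(labels, predictions_idx):
        for y in label_set:
            if class_id is not None and y != class_id:
                continue  # restrict to one class when class_id is given
            if y in topk:
                tp += 1
            else:
                fn += 1
    return tp / (tp + fn) if tp + fn else float('nan')

# Two examples with top-2 predictions each; 3 of the 4 target labels
# appear among the predicted indices.
print(recall_at_top_k(
    labels=[[0, 2], [1, 3]],
    predictions_idx=[[2, 5], [1, 3]]))  # -> 0.75
```

This mirrors the difference from `recall_at_k` noted above: the inputs are already class indices (here `[2, 5]` and `[1, 3]`), so no logits or top-k selection is involved.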

ValueTuple<object, Tensor> recall_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, ValueTuple<double> weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
ValueTuple<double> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IEnumerable<object> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IEnumerable<object> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, ValueTuple<double> weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
ValueTuple<double> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, ValueTuple<double> weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
ValueTuple<double> weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(object labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, PythonFunctionContainer name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which binary metrics are computed. This should be in the range [0, num_classes), where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
PythonFunctionContainer name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> recall_at_top_k(IEnumerable<object> labels, IGraphNodeBase predictions_idx, Nullable<int> k, Nullable<int> class_id, IGraphNodeBase weights, object metrics_collections, object updates_collections, string name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
IEnumerable<object> labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
IGraphNodeBase predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and predictions has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
Nullable<int> k
Integer, k for @k metric. Only used for the default op name.
Nullable<int> class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If class_id is outside this range, the method returns NaN.
IGraphNodeBase weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
string name
Name of new update operation, and namespace for other dependent ops.
Returns
ValueTuple<object, Tensor>

object recall_at_top_k_dyn(object labels, object predictions_idx, object k, object class_id, object weights, object metrics_collections, object updates_collections, object name)

Computes recall@k of top-k predictions with respect to sparse labels.

Differs from `recall_at_k` in that predictions must be in the form of top `k` class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` for more details.
Parameters
object labels
`int64` `Tensor` or `SparseTensor` with shape [D1,... DN, num_labels] or [D1,... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1,... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range always count towards `false_negative_at_<k>`.
object predictions_idx
Integer `Tensor` with shape [D1,... DN, k] where N >= 1. Commonly, N=1 and predictions has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1,... DN] must match `labels`.
object k
Integer, k for @k metric. Only used for the default op name.
object class_id
Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. If class_id is outside this range, the method returns NaN.
object weights
`Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that values should be added to.
object updates_collections
An optional list of collections that updates should be added to.
object name
Name of new update operation, and namespace for other dependent ops.
Returns
object

object recall_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Computes the recall of the predictions with respect to the labels.

The `recall` function creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.

For estimation of the metric over a stream of data, the function creates an `update_op` that updates these variables and returns the `recall`. `update_op` weights each prediction by the corresponding value in `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `recall` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
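The streaming update described above can be sketched in plain Python. This is a hedged illustration of the documented behavior (weighted true positives and false negatives accumulated batch by batch, values cast to `bool`), not the binding's implementation; the function name `streaming_recall` and the batch layout are invented for the example:

```python
def streaming_recall(batches):
    """Sketch of streaming recall: TP / (TP + FN), updated per batch.

    Each batch is (labels, predictions, weights). Labels and
    predictions are cast to bool, and each prediction contributes
    its weight; a weight of 0 masks the value.
    """
    true_positives = 0.0
    false_negatives = 0.0
    for labels, predictions, weights in batches:
        for y, p, w in zip(labels, predictions, weights):
            y, p = bool(y), bool(p)
            if y and p:
                true_positives += w
            elif y and not p:
                false_negatives += w
    denom = true_positives + false_negatives
    return true_positives / denom if denom else 0.0
```

For example, batches `([1, 1, 0], [1, 0, 1], [1, 1, 1])` and `([1], [1], [2])` accumulate TP = 3 and FN = 1, giving a recall of 0.75.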

ValueTuple<object, object> root_mean_squared_error(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase weights, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the root mean squared error between the labels and predictions.

The `root_mean_squared_error` function creates two local variables, `total` and `count` that are used to compute the root mean squared error. This average is weighted by `weights`, and it is ultimately returned as `root_mean_squared_error`: an idempotent operation that takes the square root of the division of `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `root_mean_squared_error`. Internally, a `squared_error` operation computes the element-wise square of the difference between `predictions` and `labels`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `squared_error`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` of the same shape as `predictions`.
IGraphNodeBase predictions
A `Tensor` of arbitrary shape.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
IEnumerable<string> metrics_collections
An optional list of collections that `root_mean_squared_error` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

object root_mean_squared_error_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Computes the root mean squared error between the labels and predictions.

The `root_mean_squared_error` function creates two local variables, `total` and `count` that are used to compute the root mean squared error. This average is weighted by `weights`, and it is ultimately returned as `root_mean_squared_error`: an idempotent operation that takes the square root of the division of `total` by `count`.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `root_mean_squared_error`. Internally, a `squared_error` operation computes the element-wise square of the difference between `predictions` and `labels`. Then `update_op` increments `total` with the reduced sum of the product of `weights` and `squared_error`, and it increments `count` with the reduced sum of `weights`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` of the same shape as `predictions`.
object predictions
A `Tensor` of arbitrary shape.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `root_mean_squared_error` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
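The `total`/`count` bookkeeping described above can be sketched in plain Python. This is an illustrative sketch of the documented semantics, not the binding's implementation; the function name `streaming_rmse` and the batch layout are invented for the example:

```python
import math

def streaming_rmse(batches):
    """Sketch of streaming RMSE: each update step adds the weighted
    squared errors to `total` and the weights to `count`; the final
    value is sqrt(total / count)."""
    total = 0.0  # weighted sum of squared errors
    count = 0.0  # sum of weights
    for labels, predictions, weights in batches:
        for y, p, w in zip(labels, predictions, weights):
            total += w * (p - y) ** 2
            count += w
    return math.sqrt(total / count) if count else 0.0
```

A weight of 0 masks a value entirely: it contributes neither to `total` nor to `count`.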

ValueTuple<object, object> sensitivity_at_specificity(IndexedSlices labels, IGraphNodeBase predictions, double specificity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the sensitivity at a given specificity.

The `sensitivity_at_specificity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `sensitivity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double specificity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given specificity.
IEnumerable<string> metrics_collections
An optional list of collections that `sensitivity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> sensitivity_at_specificity(IGraphNodeBase labels, IGraphNodeBase predictions, double specificity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the sensitivity at a given specificity.

The `sensitivity_at_specificity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `sensitivity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double specificity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given specificity.
IEnumerable<string> metrics_collections
An optional list of collections that `sensitivity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> sensitivity_at_specificity(IEnumerable<int> labels, IGraphNodeBase predictions, double specificity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the sensitivity at a given specificity.

The `sensitivity_at_specificity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `sensitivity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
IEnumerable<int> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double specificity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given specificity.
IEnumerable<string> metrics_collections
An optional list of collections that `sensitivity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> sensitivity_at_specificity(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, double specificity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the sensitivity at a given specificity.

The `sensitivity_at_specificity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `sensitivity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double specificity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given specificity.
IEnumerable<string> metrics_collections
An optional list of collections that `sensitivity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

object sensitivity_at_specificity_dyn(object labels, object predictions, object specificity, object weights, ImplicitContainer<T> num_thresholds, object metrics_collections, object updates_collections, object name)

Computes the sensitivity at a given specificity.

The `sensitivity_at_specificity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `sensitivity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object specificity
A scalar value in range `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
ImplicitContainer<T> num_thresholds
The number of thresholds to use for matching the given specificity.
object metrics_collections
An optional list of collections that `sensitivity` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
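The threshold search described above can be sketched in plain Python. This is a hedged sketch of the idea (evaluate candidate thresholds, pick the one whose specificity is closest to the target, report the sensitivity there), not the binding's implementation; the function name, unweighted counts, and evenly spaced threshold grid are assumptions for the example:

```python
def sensitivity_at_specificity(labels, scores, target_specificity,
                               num_thresholds=200):
    """Sketch: scan evenly spaced thresholds in [0, 1], compute the
    confusion counts at each, and return the sensitivity at the
    threshold whose specificity is closest to the target."""
    thresholds = [i / (num_thresholds - 1) for i in range(num_thresholds)]
    best = None  # (gap to target specificity, sensitivity)
    for t in thresholds:
        tp = fn = tn = fp = 0
        for y, s in zip(labels, scores):
            pred = s >= t
            if y and pred:
                tp += 1
            elif y:
                fn += 1
            elif pred:
                fp += 1
            else:
                tn += 1
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        gap = abs(spec - target_specificity)
        if best is None or gap < best[0]:
            best = (gap, sens)
    return best[1]
```

With labels `[0, 0, 1, 1]` and scores `[0.1, 0.4, 0.35, 0.8]`, a specificity of 1.0 first occurs at a threshold just above 0.4, where only the 0.8-scored positive is predicted positive, so the reported sensitivity is 0.5.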

ValueTuple<object, Tensor> sparse_average_precision_at_k(object labels, object predictions, object k, object weights, object metrics_collections, object updates_collections, string name)

Renamed to `average_precision_at_k`, please use that method instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use average_precision_at_k instead

object sparse_average_precision_at_k_dyn(object labels, object predictions, object k, object weights, object metrics_collections, object updates_collections, object name)

Renamed to `average_precision_at_k`, please use that method instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use average_precision_at_k instead

ValueTuple<object, Tensor> sparse_precision_at_k(object labels, object predictions, object k, object class_id, object weights, object metrics_collections, object updates_collections, string name)

Renamed to `precision_at_k`, please use that method instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use precision_at_k instead

object sparse_precision_at_k_dyn(object labels, object predictions, object k, object class_id, object weights, object metrics_collections, object updates_collections, object name)

Renamed to `precision_at_k`, please use that method instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use precision_at_k instead

ValueTuple<object, object> specificity_at_sensitivity(IGraphNodeBase labels, IGraphNodeBase predictions, double sensitivity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the specificity at a given sensitivity.

The `specificity_at_sensitivity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the specificity at the given sensitivity value. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `specificity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double sensitivity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given sensitivity.
IEnumerable<string> metrics_collections
An optional list of collections that `specificity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> specificity_at_sensitivity(IndexedSlices labels, IGraphNodeBase predictions, double sensitivity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the specificity at a given sensitivity.

The `specificity_at_sensitivity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the specificity at the given sensitivity value. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `specificity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double sensitivity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given sensitivity.
IEnumerable<string> metrics_collections
An optional list of collections that `specificity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> specificity_at_sensitivity(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, double sensitivity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the specificity at a given sensitivity.

The `specificity_at_sensitivity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the specificity at the given sensitivity value. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `specificity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double sensitivity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given sensitivity.
IEnumerable<string> metrics_collections
An optional list of collections that `specificity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

ValueTuple<object, object> specificity_at_sensitivity(IEnumerable<int> labels, IGraphNodeBase predictions, double sensitivity, IGraphNodeBase weights, int num_thresholds, IEnumerable<string> metrics_collections, IEnumerable<string> updates_collections, string name)

Computes the specificity at a given sensitivity.

The `specificity_at_sensitivity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the specificity at the given sensitivity value. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `specificity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
IEnumerable<int> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
double sensitivity
A scalar value in range `[0, 1]`.
IGraphNodeBase weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
int num_thresholds
The number of thresholds to use for matching the given sensitivity.
IEnumerable<string> metrics_collections
An optional list of collections that `specificity` should be added to.
IEnumerable<string> updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, object>

object specificity_at_sensitivity_dyn(object labels, object predictions, object sensitivity, object weights, ImplicitContainer<T> num_thresholds, object metrics_collections, object updates_collections, object name)

Computes the specificity at a given sensitivity.

The `specificity_at_sensitivity` function creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives` that are used to compute the specificity at the given sensitivity value. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity.

For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `specificity`. `update_op` increments the `true_positives`, `true_negatives`, `false_positives` and `false_negatives` counts with the weight of each case found in the `predictions` and `labels`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
A floating point `Tensor` of arbitrary shape and whose values are in the range `[0, 1]`.
object sensitivity
A scalar value in range `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
ImplicitContainer<T> num_thresholds
The number of thresholds to use for matching the given sensitivity.
object metrics_collections
An optional list of collections that `specificity` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object
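The mirror-image search can be sketched the same way. This is a hedged sketch of the idea only (find the threshold whose sensitivity is closest to the target and report the specificity there), not the binding's implementation; the function name, the use of observed scores as thresholds, and the tie-break toward higher specificity are assumptions for the example:

```python
def specificity_at_sensitivity(labels, scores, target_sensitivity):
    """Sketch: among thresholds taken at the observed scores, pick the
    one whose sensitivity is closest to the target and return the
    specificity (TN / (TN + FP)) at that threshold."""
    best_gap, best_spec = None, 0.0
    for t in sorted(set(scores)):
        tp = sum(1 for y, s in zip(labels, scores) if y and s >= t)
        fn = sum(1 for y, s in zip(labels, scores) if y and s < t)
        tn = sum(1 for y, s in zip(labels, scores) if not y and s < t)
        fp = sum(1 for y, s in zip(labels, scores) if not y and s >= t)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        gap = abs(sens - target_sensitivity)
        # Tie-break: among equally close thresholds, keep the higher
        # specificity (an assumption of this sketch).
        if best_gap is None or gap < best_gap or (gap == best_gap
                                                  and spec > best_spec):
            best_gap, best_spec = gap, spec
    return best_spec
```

With labels `[0, 0, 1, 1]` and scores `[0.1, 0.4, 0.35, 0.8]`, both positives stay above the 0.35 threshold (sensitivity 1.0) while one of the two negatives falls below it, giving a specificity of 0.5.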

ValueTuple<object, Tensor> true_negatives(IGraphNodeBase labels, ndarray predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ndarray predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
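The semantics shared by all of the `true_negatives` overloads below reduce to a weighted count. A minimal pure-Python sketch (a hypothetical helper, not the LostTech.TensorFlow API):

```python
def true_negatives(labels, predictions, weights=None):
    # Hypothetical sketch of the metric's semantics, NOT the
    # LostTech.TensorFlow API.  Both inputs are cast to bool; the weights
    # of positions where label and prediction are both False are summed.
    # If weights is None, weights default to 1; a weight of 0 masks a value.
    if weights is None:
        weights = [1.0] * len(labels)
    return sum(w for l, p, w in zip(labels, predictions, weights)
               if not bool(l) and not bool(p))
```

For example, `true_negatives([1, 0, 0, 1], [0, 0, 1, 1])` counts the single position where both label and prediction are falsy, and supplying `weights=[1.0, 2.0, 3.0, 4.0]` scales that same position to 2.0.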

ValueTuple<object, Tensor> true_negatives(double labels, double predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
double predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IGraphNodeBase labels, IGraphNodeBase predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IGraphNodeBase labels, IndexedSlices predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IndexedSlices predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IGraphNodeBase labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IGraphNodeBase labels, double predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
double predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IndexedSlices labels, IGraphNodeBase predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IndexedSlices labels, IndexedSlices predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IndexedSlices predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IndexedSlices labels, ndarray predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ndarray predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IndexedSlices labels, double predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
double predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IndexedSlices predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IndexedSlices predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ndarray predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ndarray predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, double predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
double predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IEnumerable<object> labels, IGraphNodeBase predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IndexedSlices labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IEnumerable<object> labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IEnumerable<object> labels, ndarray predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ndarray predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IEnumerable<object> labels, IndexedSlices predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IndexedSlices predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(IEnumerable<object> labels, double predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
double predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(double labels, ndarray predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ndarray predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ndarray labels, IGraphNodeBase predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ndarray labels, IndexedSlices predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IndexedSlices predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ndarray labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(double labels, ValueTuple<PythonClassContainer, PythonClassContainer> predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ValueTuple<PythonClassContainer, PythonClassContainer> predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ndarray labels, ndarray predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
ndarray predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(ndarray labels, double predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
double predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(double labels, IGraphNodeBase predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives(double labels, IndexedSlices predictions, object weights, object metrics_collections, object updates_collections, string name)

Sum the weights of true_negatives.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
IndexedSlices predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_negatives_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, Nullable<ValueTuple<object>> weights, object metrics_collections, object updates_collections, string name)

Computes true negatives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
Nullable<ValueTuple<object>> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `true_negatives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
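Per threshold, the metric reduces to the same weighted count, under TensorFlow's convention that a prediction is classified positive when it is strictly greater than the threshold (an assumption of this sketch, not stated in the text above). A minimal pure-Python version:

```python
def true_negatives_at_thresholds(labels, predictions, thresholds, weights=None):
    """For each threshold t, sum the weights where the label is falsy
    and the prediction is classified negative (p <= t)."""
    if weights is None:
        weights = [1.0] * len(labels)
    return [sum(w for y, p, w in zip(labels, predictions, weights)
                if not bool(y) and p <= t)
            for t in thresholds]

# At t=0.5 only the (label 0, p=0.2) position is a true negative;
# at t=1.0 both label-0 positions are.
print(true_negatives_at_thresholds([0, 0, 1], [0.2, 0.8, 0.4], [0.0, 0.5, 1.0]))
```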

object true_negatives_at_thresholds_dyn(object labels, object predictions, object thresholds, object weights, object metrics_collections, object updates_collections, object name)

Computes true negatives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
object predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `true_negatives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

object true_negatives_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Sums the weights of `true_negatives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
object name
An optional variable_scope name.
Returns
object

ValueTuple<object, Tensor> true_positives(IEnumerable<object> labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
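All of the `true_positives` overloads below share one counting rule: cast labels and predictions to `bool` and sum the weights where both are true. A standalone sketch of that rule (an illustration of the semantics, not the binding):

```python
def true_positives(labels, predictions, weights=None):
    """Sum the weights where both label and prediction are truthy."""
    if weights is None:
        weights = [1.0] * len(labels)  # weights default to 1
    return sum(w for y, p, w in zip(labels, predictions, weights)
               if bool(y) and bool(p))

# Pairs (0,0), (0,1), (1,0), (1,1): only the last is a true positive.
print(true_positives([0, 0, 1, 1], [0, 1, 0, 1]))  # 1.0
```

A weight of 0 masks a position out of the count, matching the "Use weights of 0 to mask values" note.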

ValueTuple<object, Tensor> true_positives(object labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(IGraphNodeBase labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(IndexedSlices labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(IndexedSlices labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IndexedSlices labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(ValueTuple<PythonClassContainer, PythonClassContainer> labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ValueTuple<PythonClassContainer, PythonClassContainer> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(object labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(IEnumerable<object> labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IEnumerable<object> labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(ndarray labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(ndarray labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
ndarray labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(double labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(double labels, PythonClassContainer predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
double labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
PythonClassContainer predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives(IGraphNodeBase labels, object predictions, object weights, object metrics_collections, object updates_collections, string name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>

ValueTuple<object, Tensor> true_positives_at_thresholds(IGraphNodeBase labels, IGraphNodeBase predictions, IEnumerable<double> thresholds, Nullable<double> weights, object metrics_collections, object updates_collections, string name)

Computes true positives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
IGraphNodeBase labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
IGraphNodeBase predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
IEnumerable<double> thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
Nullable<double> weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `true_positives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
string name
An optional variable_scope name.
Returns
ValueTuple<object, Tensor>
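The per-threshold true-positive count mirrors the true-negative case, again assuming TensorFlow's convention that a prediction counts as positive when strictly greater than the threshold (that convention is an assumption of this sketch, not stated above):

```python
def true_positives_at_thresholds(labels, predictions, thresholds, weights=None):
    """For each threshold t, sum the weights where the label is truthy
    and the prediction is classified positive (p > t)."""
    if weights is None:
        weights = [1.0] * len(labels)
    return [sum(w for y, p, w in zip(labels, predictions, weights)
                if bool(y) and p > t)
            for t in thresholds]

# At t=0.2 both label-1 positions clear the threshold; at t=0.5 only p=0.9 does.
print(true_positives_at_thresholds([1, 1, 0], [0.9, 0.3, 0.8], [0.2, 0.5]))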

object true_positives_at_thresholds_dyn(object labels, object predictions, object thresholds, object weights, object metrics_collections, object updates_collections, object name)

Computes true positives at provided threshold values.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
A `Tensor` whose shape matches `predictions`. Will be cast to `bool`.
object predictions
A floating-point `Tensor` of arbitrary shape whose values are in the range `[0, 1]`.
object thresholds
A Python list or tuple of float thresholds in `[0, 1]`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that `true_positives` should be added to.
object updates_collections
An optional list of collections that `update_op` should be added to.
object name
An optional variable_scope name.
Returns
object

object true_positives_dyn(object labels, object predictions, object weights, object metrics_collections, object updates_collections, object name)

Sums the weights of `true_positives`.

If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
Parameters
object labels
The ground truth values, a `Tensor` whose dimensions must match `predictions`. Will be cast to `bool`.
object predictions
The predicted values, a `Tensor` of arbitrary dimensions. Will be cast to `bool`.
object weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
object metrics_collections
An optional list of collections that the metric value variable should be added to.
object updates_collections
An optional list of collections that the metric update ops should be added to.
object name
An optional variable_scope name.
Returns
object
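Every method in this family returns a `(value, update_op)` pair: the update op folds a batch into local accumulator variables, while the value op idempotently reads them, so it can be evaluated repeatedly without changing the running total. A toy pure-Python analogue of that streaming contract (the class name is hypothetical, chosen only for this sketch):

```python
class StreamingTruePositives:
    """update() plays the role of update_op: it accumulates one batch
    and returns the new total.  value() is idempotent and only reads
    the accumulator, like the metric-value tensor."""

    def __init__(self):
        self.total = 0.0  # the metric's local accumulator variable

    def update(self, labels, predictions, weights=None):
        if weights is None:
            weights = [1.0] * len(labels)
        self.total += sum(w for y, p, w in zip(labels, predictions, weights)
                          if bool(y) and bool(p))
        return self.total

    def value(self):
        return self.total

m = StreamingTruePositives()
m.update([1, 0], [1, 1])   # first batch: one true positive
m.update([1], [1])         # second batch: one more
print(m.value())           # 2.0 — and reading it again still gives 2.0
```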

Public properties

PythonFunctionContainer accuracy_fn get;

PythonFunctionContainer average_precision_at_k_fn get;

PythonFunctionContainer false_negatives_at_thresholds_fn get;

PythonFunctionContainer false_negatives_fn get;

PythonFunctionContainer false_positives_at_thresholds_fn get;

PythonFunctionContainer false_positives_fn get;

PythonFunctionContainer mean_absolute_error_fn get;

PythonFunctionContainer mean_cosine_distance_fn get;

PythonFunctionContainer mean_iou_fn get;

PythonFunctionContainer mean_per_class_accuracy_fn get;

PythonFunctionContainer mean_relative_error_fn get;

PythonFunctionContainer mean_squared_error_fn get;

PythonFunctionContainer mean_tensor_fn get;

PythonFunctionContainer percentage_below_fn get;

PythonFunctionContainer precision_at_k_fn get;

PythonFunctionContainer precision_at_thresholds_fn get;

PythonFunctionContainer precision_at_top_k_fn get;

PythonFunctionContainer precision_fn get;

PythonFunctionContainer recall_at_k_fn get;

PythonFunctionContainer recall_at_thresholds_fn get;

PythonFunctionContainer recall_at_top_k_fn get;

PythonFunctionContainer recall_fn get;

PythonFunctionContainer root_mean_squared_error_fn get;

PythonFunctionContainer sensitivity_at_specificity_fn get;

PythonFunctionContainer sparse_average_precision_at_k_fn get;

PythonFunctionContainer sparse_precision_at_k_fn get;

PythonFunctionContainer specificity_at_sensitivity_fn get;

PythonFunctionContainer true_negatives_at_thresholds_fn get;

PythonFunctionContainer true_negatives_fn get;

PythonFunctionContainer true_positives_at_thresholds_fn get;

PythonFunctionContainer true_positives_fn get;