# LostTech.TensorFlow : API Documentation

Type tf.losses

Namespace tensorflow

### Public static methods

#### object absolute_difference(IGraphNodeBase labels, IGraphNodeBase predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds an Absolute Difference loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IGraphNodeBase` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which this loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
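
The weighting rules described above can be sketched in plain Python for a 2-D batch (a hypothetical reference computation to illustrate the semantics; not the library's implementation):

```python
# Hypothetical sketch of the absolute-difference weighting semantics
# described above; not the library's implementation.

def absolute_difference(labels, predictions, weights=1.0):
    """Per-element |labels - predictions|, scaled by `weights`.

    `weights` may be a scalar, a per-sample list of length batch_size,
    or a nested list with the same shape as `predictions`.
    """
    losses = []
    for i, (row_l, row_p) in enumerate(zip(labels, predictions)):
        row = []
        for j, (l, p) in enumerate(zip(row_l, row_p)):
            if isinstance(weights, (int, float)):
                w = weights                # scalar: scales every element
            elif isinstance(weights[0], (int, float)):
                w = weights[i]             # [batch_size]: rescales each sample
            else:
                w = weights[i][j]          # same shape as predictions: element-wise
            row.append(abs(l - p) * w)
        losses.append(row)
    return losses

# A scalar weight simply scales the loss:
print(absolute_difference([[1.0, 2.0]], [[0.0, 4.0]], 0.5))  # [[0.5, 1.0]]
```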

#### object absolute_difference(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds an Absolute Difference loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IGraphNodeBase` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which this loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object absolute_difference_dyn(object labels, object predictions, ImplicitContainer<T> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds an Absolute Difference loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which this loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

Adds an externally defined loss to the collection of losses.
##### Parameters
`string` loss
A loss `Tensor`.
`ImplicitContainer<T>` loss_collection
Optional collection to add the loss to.

Adds an externally defined loss to the collection of losses.
##### Parameters
`object` loss
A loss `Tensor`.
`ImplicitContainer<T>` loss_collection
Optional collection to add the loss to.

#### object compute_weighted_loss(IndexedSlices losses, ndarray weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IndexedSlices` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ndarray` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.
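
For a 1-D `losses` vector, the reduction behavior described in the return value can be sketched as follows (a hypothetical plain-Python sketch whose reduction names mirror the description above; not the library's code):

```python
# Hypothetical sketch of weighted-loss reduction over a [batch_size]
# loss vector; not the library's implementation.

def compute_weighted_loss(losses, weights=1.0, reduction="SUM_BY_NONZERO_WEIGHTS"):
    if isinstance(weights, (int, float)):
        weights = [weights] * len(losses)   # broadcast a scalar weight
    weighted = [l * w for l, w in zip(losses, weights)]
    if reduction == "NONE":
        return weighted                     # same shape as `losses`
    total = sum(weighted)
    if reduction == "SUM":
        return total                        # scalar
    # Default: average over the non-zero weights, so zero-weighted
    # samples do not dilute the mean.
    nonzero = sum(1 for w in weights if w != 0)
    return total / nonzero if nonzero else 0.0

print(compute_weighted_loss([1.0, 3.0], [1.0, 0.0]))  # 1.0
```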

#### object compute_weighted_loss(ndarray losses, ndarray weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ndarray` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ndarray` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(ndarray losses, ValueTuple<double, object, object> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ndarray` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ValueTuple<double, object, object>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(ndarray losses, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ndarray` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(ValueTuple<PythonClassContainer, PythonClassContainer> losses, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(ValueTuple<PythonClassContainer, PythonClassContainer> losses, ndarray weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ndarray` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(ValueTuple<PythonClassContainer, PythonClassContainer> losses, ValueTuple<double, object, object> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ValueTuple<double, object, object>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(ValueTuple<PythonClassContainer, PythonClassContainer> losses, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(IndexedSlices losses, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IndexedSlices` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(IndexedSlices losses, ValueTuple<double, object, object> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IndexedSlices` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ValueTuple<double, object, object>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(IndexedSlices losses, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IndexedSlices` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(ndarray losses, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`ndarray` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(IGraphNodeBase losses, ndarray weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IGraphNodeBase` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ndarray` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(string losses, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`string` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(string losses, ValueTuple<double, object, object> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`string` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ValueTuple<double, object, object>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(string losses, ndarray weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`string` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ndarray` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(IGraphNodeBase losses, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IGraphNodeBase` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(object losses, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`object` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(string losses, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`string` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(object losses, ndarray weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`object` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ndarray` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(object losses, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`object` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(IGraphNodeBase losses, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IGraphNodeBase` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(IGraphNodeBase losses, ValueTuple<double, object, object> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`IGraphNodeBase` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ValueTuple<double, object, object>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss(object losses, ValueTuple<double, object, object> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`object` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ValueTuple<double, object, object>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object compute_weighted_loss_dyn(object losses, ImplicitContainer<T> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Computes the weighted loss.
##### Parameters
`object` losses
`Tensor` of shape `[batch_size, d1,... dN]`.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
the loss will be added to these collections.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.

#### object cosine_distance(IGraphNodeBase labels, IGraphNodeBase predictions, object axis, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction, Nullable<int> dim)

Adds a cosine-distance loss to the training procedure. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: `dim` is deprecated; use `axis` instead.

Note that the function assumes that `predictions` and `labels` are already unit-normalized.
##### Parameters
`IGraphNodeBase` labels
`Tensor` whose shape matches `predictions`.
`IGraphNodeBase` predictions
An arbitrary matrix.
`object` axis
The dimension along which the cosine distance is computed.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which this loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
`Nullable<int>` dim
The old (deprecated) name for `axis`.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
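
For unit-normalized 1-D inputs, the per-pair distance can be sketched as follows (a hypothetical plain-Python sketch of the quantity described above; not the library's code):

```python
# Hypothetical sketch of cosine distance between two unit-normalized
# 1-D vectors; not the library's implementation.

def cosine_distance(labels, predictions):
    # For unit vectors, cosine similarity is just the dot product,
    # so the distance is 1 - dot(labels, predictions).
    dot = sum(l * p for l, p in zip(labels, predictions))
    return 1.0 - dot

print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal vectors)
```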

#### object cosine_distance(IGraphNodeBase labels, IGraphNodeBase predictions, object axis, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction, Nullable<int> dim)

Adds a cosine-distance loss to the training procedure. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: `dim` is deprecated; use `axis` instead.

Note that the function assumes that `predictions` and `labels` are already unit-normalized.
##### Parameters
`IGraphNodeBase` labels
`Tensor` whose shape matches `predictions`.
`IGraphNodeBase` predictions
An arbitrary matrix.
`object` axis
The dimension along which the cosine distance is computed.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which this loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
`Nullable<int>` dim
The old (deprecated) name for `axis`.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object cosine_distance_dyn(object labels, object predictions, object axis, ImplicitContainer<T> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction, object dim)

Adds a cosine-distance loss to the training procedure. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. Instructions for updating: `dim` is deprecated; use `axis` instead.

Note that the function assumes that `predictions` and `labels` are already unit-normalized.
##### Parameters
`object` labels
`Tensor` whose shape matches `predictions`.
`object` predictions
An arbitrary matrix.
`object` axis
The dimension along which the cosine distance is computed.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `labels` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which this loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
`object` dim
The old (deprecated) name for `axis`.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### Tensor get_regularization_loss(string scope, string name)

Gets the total regularization loss.
##### Parameters
`string` scope
An optional scope name for filtering the losses to return.
`string` name
The name of the returned tensor.
##### Returns
`Tensor`
A scalar regularization loss.

#### objectget_regularization_loss_dyn(object scope, ImplicitContainer<T> name)

Gets the total regularization loss.
##### Parameters
`object` scope
An optional scope name for filtering the losses to return.
`ImplicitContainer<T>` name
The name of the returned tensor.
##### Returns
`object`
A scalar regularization loss.

#### objectget_regularization_losses(string scope)

Gets the list of regularization losses.
##### Parameters
`string` scope
An optional scope name for filtering the losses to return.
##### Returns
`object`
A list of regularization losses as Tensors.

#### Tensorget_total_loss(bool add_regularization_losses, string name, object scope)

Returns a tensor whose value represents the total loss.

In particular, this adds any losses you have added with `tf.add_loss()` to any regularization losses that have been added by regularization parameters on layer constructors, e.g. in `tf.layers`. Be sure to use this function if you are constructing a `loss_op` manually; otherwise, regularization arguments on `tf.layers` methods will have no effect.
##### Parameters
`bool` add_regularization_losses
A boolean indicating whether or not to use the regularization losses in the sum.
`string` name
The name of the returned tensor.
`object` scope
An optional scope name for filtering the losses to return. Note that this filters the losses added with `tf.add_loss()` as well as the regularization losses to that scope.
##### Returns
`Tensor`
A `Tensor` whose value represents the total loss.
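
The bookkeeping described above can be modeled with plain lists standing in for TensorFlow's loss collections (a sketch of the summation logic only, not the actual graph-collection mechanism):

```python
# Model the two loss collections as plain lists: values recorded via
# tf.add_loss() on one side, losses added by layer regularizers on the other.
training_losses = [0.5, 0.25]
regularization_losses = [0.125]

def get_total_loss(add_regularization_losses=True):
    """Sum the collected training losses, optionally adding regularization losses."""
    total = sum(training_losses)
    if add_regularization_losses:
        total += sum(regularization_losses)
    return total

print(get_total_loss())                                  # 0.875
print(get_total_loss(add_regularization_losses=False))   # 0.75
```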

#### objectget_total_loss_dyn(ImplicitContainer<T> add_regularization_losses, ImplicitContainer<T> name, object scope)

Returns a tensor whose value represents the total loss.

In particular, this adds any losses you have added with `tf.add_loss()` to any regularization losses that have been added by regularization parameters on layer constructors, e.g. in `tf.layers`. Be sure to use this function if you are constructing a `loss_op` manually; otherwise, regularization arguments on `tf.layers` methods will have no effect.
##### Parameters
`ImplicitContainer<T>` add_regularization_losses
A boolean indicating whether or not to use the regularization losses in the sum.
`ImplicitContainer<T>` name
The name of the returned tensor.
`object` scope
An optional scope name for filtering the losses to return. Note that this filters the losses added with `tf.add_loss()` as well as the regularization losses to that scope.
##### Returns
`object`
A `Tensor` whose value represents the total loss.

#### objecthinge_loss(IDictionary<string, object> labels, IGraphNodeBase logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a hinge loss to the training procedure.
##### Parameters
`IDictionary<string, object>` labels
The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0. Internally the {0,1} labels are converted to {-1,1} when calculating the hinge loss.
`IGraphNodeBase` logits
The logits, a float tensor. Note that logits are assumed to be unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive (resp. negative) binary prediction.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
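
Given the {0,1} to {-1,1} label conversion described above, the per-element hinge loss is `max(0, 1 - signed_label * logit)`. A pure-Python sketch of that computation (a hypothetical helper, not this library's signature):

```python
def hinge_loss(labels, logits):
    """Mean hinge loss; labels in {0, 1} are mapped to signed labels in {-1, 1}."""
    losses = []
    for y, z in zip(labels, logits):
        signed = 2.0 * y - 1.0               # 0 -> -1, 1 -> +1
        losses.append(max(0.0, 1.0 - signed * z))
    return sum(losses) / len(losses)

# Confidently correct predictions (beyond the margin) contribute zero;
# wrong-signed or low-margin predictions are penalized linearly.
print(hinge_loss([1, 0], [2.0, -3.0]))  # 0.0
print(hinge_loss([1], [-0.5]))          # 1.5
```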

#### objecthinge_loss(IDictionary<string, object> labels, IGraphNodeBase logits, double weights, string scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a hinge loss to the training procedure.
##### Parameters
`IDictionary<string, object>` labels
The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0. Internally the {0,1} labels are converted to {-1,1} when calculating the hinge loss.
`IGraphNodeBase` logits
The logits, a float tensor. Note that logits are assumed to be unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive (resp. negative) binary prediction.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`string` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objecthinge_loss(IGraphNodeBase labels, IGraphNodeBase logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a hinge loss to the training procedure.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0. Internally the {0,1} labels are converted to {-1,1} when calculating the hinge loss.
`IGraphNodeBase` logits
The logits, a float tensor. Note that logits are assumed to be unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive (resp. negative) binary prediction.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objecthinge_loss(IGraphNodeBase labels, IGraphNodeBase logits, double weights, string scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a hinge loss to the training procedure.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0. Internally the {0,1} labels are converted to {-1,1} when calculating the hinge loss.
`IGraphNodeBase` logits
The logits, a float tensor. Note that logits are assumed to be unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive (resp. negative) binary prediction.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`string` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objecthinge_loss_dyn(object labels, object logits, ImplicitContainer<T> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a hinge loss to the training procedure.
##### Parameters
`object` labels
The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0. Internally the {0,1} labels are converted to {-1,1} when calculating the hinge loss.
`object` logits
The logits, a float tensor. Note that logits are assumed to be unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive (resp. negative) binary prediction.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objecthuber_loss(IGraphNodeBase labels, IGraphNodeBase predictions, double weights, double delta, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Huber Loss term to the training procedure.

For each value x in `error=labels-predictions`, the following is calculated:

```
0.5 * x^2                  if |x| <= d
0.5 * d^2 + d * (|x| - d)  if |x| > d
```

where d is `delta`.

See: https://en.wikipedia.org/wiki/Huber_loss

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IGraphNodeBase` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` delta
`float`, the point where the Huber loss function changes from quadratic to linear.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
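
The piecewise definition above can be sanity-checked in a few lines. A pure-Python sketch of the per-element Huber penalty (illustrative only, not this library's API):

```python
def huber(x, delta=1.0):
    """Piecewise Huber penalty for a single error value x = label - prediction."""
    ax = abs(x)
    if ax <= delta:
        return 0.5 * x * x                        # quadratic near zero
    return 0.5 * delta * delta + delta * (ax - delta)  # linear in the tails

# Small errors are penalized quadratically, large ones linearly.
print(huber(0.5))   # 0.125
print(huber(3.0))   # 2.5  (= 0.5 + 1.0 * (3.0 - 1.0))
```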

#### objecthuber_loss_dyn(object labels, object predictions, ImplicitContainer<T> weights, ImplicitContainer<T> delta, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Huber Loss term to the training procedure.

For each value x in `error=labels-predictions`, the following is calculated:

```
0.5 * x^2                  if |x| <= d
0.5 * d^2 + d * (|x| - d)  if |x| > d
```

where d is `delta`.

See: https://en.wikipedia.org/wiki/Huber_loss

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`ImplicitContainer<T>` delta
`float`, the point where the Huber loss function changes from quadratic to linear.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
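
The three `weights` shapes described above (a scalar, a `[batch_size]` vector, or the full shape of `predictions`) can be illustrated on a small loss matrix. This sketch uses a plain mean rather than the library's reduction options, and the helper name is hypothetical:

```python
def weighted_mean_loss(per_element_losses, weights):
    """Apply one of the three documented weight shapes to a [batch_size, d] loss matrix."""
    weighted = []
    for i, row in enumerate(per_element_losses):
        for j, loss in enumerate(row):
            if isinstance(weights, (int, float)):
                w = weights          # scalar: the whole loss is uniformly scaled
            elif isinstance(weights[0], (int, float)):
                w = weights[i]       # [batch_size]: each sample's loss is rescaled
            else:
                w = weights[i][j]    # full shape: each element is rescaled
            weighted.append(loss * w)
    return sum(weighted) / len(weighted)

losses = [[1.0, 2.0], [3.0, 4.0]]
print(weighted_mean_loss(losses, 2.0))         # 5.0  (scalar weight)
print(weighted_mean_loss(losses, [1.0, 0.0]))  # 0.75 (second sample masked out)
```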

#### objectlog_loss(IGraphNodeBase labels, IGraphNodeBase predictions, double weights, double epsilon, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Log Loss term to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IGraphNodeBase` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` epsilon
A small increment to add to avoid taking a log of zero.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
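
The role of `epsilon` above is to keep the cross-entropy terms away from `log(0)`. A pure-Python sketch of the per-element log loss (illustrative, not this library's API):

```python
import math

def log_loss(labels, predictions, epsilon=1e-7):
    """Mean cross-entropy log loss; epsilon keeps both logs away from log(0)."""
    total = 0.0
    for y, p in zip(labels, predictions):
        total += -y * math.log(p + epsilon) - (1 - y) * math.log(1 - p + epsilon)
    return total / len(labels)

# A prediction of exactly 0.0 for a positive label stays finite thanks to epsilon.
print(log_loss([1.0], [0.0]))  # large but finite (about 16.12) instead of infinite
```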

#### objectlog_loss(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase weights, double epsilon, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Log Loss term to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IGraphNodeBase` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` epsilon
A small increment to add to avoid taking a log of zero.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectlog_loss_dyn(object labels, object predictions, ImplicitContainer<T> weights, ImplicitContainer<T> epsilon, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Log Loss term to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`ImplicitContainer<T>` epsilon
A small increment to add to avoid taking a log of zero.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### Tensormean_pairwise_squared_error(IGraphNodeBase labels, IGraphNodeBase predictions, double weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`double` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.
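
The three-pair example above can be checked numerically. A pure-Python sketch of the per-sample computation (unweighted and illustrative only, not this library's API):

```python
from itertools import combinations

def mean_pairwise_squared_error(labels, predictions):
    """Average squared mismatch between label deltas and prediction deltas over all index pairs."""
    pairs = list(combinations(range(len(labels)), 2))
    total = 0.0
    for i, j in pairs:
        total += ((labels[i] - labels[j]) - (predictions[i] - predictions[j])) ** 2
    return total / len(pairs)

# With labels [a, b, c] and predictions [x, y, z] this sums the three pair
# differences ((a-b)-(x-y))^2, ((a-c)-(x-z))^2, ((b-c)-(y-z))^2, divided by 3.
print(mean_pairwise_squared_error([1.0, 2.0, 4.0], [1.0, 2.0, 4.0]))  # 0.0
print(mean_pairwise_squared_error([0.0, 1.0], [0.0, 3.0]))            # 4.0
```

Note that predictions shifted by a constant incur no penalty, since only pairwise differences are compared.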

#### Tensormean_pairwise_squared_error(IGraphNodeBase labels, IGraphNodeBase predictions, ndarray weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`ndarray` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensormean_pairwise_squared_error(IGraphNodeBase labels, ndarray predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`IGraphNodeBase` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensormean_pairwise_squared_error(IGraphNodeBase labels, ndarray predictions, int weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`int` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensormean_pairwise_squared_error(IGraphNodeBase labels, ndarray predictions, ndarray weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`ndarray` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensormean_pairwise_squared_error(IGraphNodeBase labels, ndarray predictions, double weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`double` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensormean_pairwise_squared_error(ndarray labels, ndarray predictions, int weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`int` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensormean_pairwise_squared_error(ndarray labels, IGraphNodeBase predictions, int weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: `loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3`

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`int` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(ndarray labels, IGraphNodeBase predictions, ndarray weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`ndarray` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(ndarray labels, IGraphNodeBase predictions, double weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`double` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(ndarray labels, ndarray predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`IGraphNodeBase` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(ndarray labels, ndarray predictions, ndarray weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`ndarray` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(IGraphNodeBase labels, IGraphNodeBase predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`IGraphNodeBase` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(ndarray labels, IGraphNodeBase predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`IGraphNodeBase` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(IGraphNodeBase labels, IGraphNodeBase predictions, int weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`IGraphNodeBase` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`int` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### Tensor mean_pairwise_squared_error(ndarray labels, ndarray predictions, double weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`ndarray` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`ndarray` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`double` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`Tensor`
A scalar `Tensor` that returns the weighted loss.

#### object mean_pairwise_squared_error_dyn(object labels, object predictions, ImplicitContainer<T> weights, object scope, ImplicitContainer<T> loss_collection)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike `mean_squared_error`, which is a measure of the differences between corresponding elements of `predictions` and `labels`, `mean_pairwise_squared_error` is a measure of the differences between pairs of corresponding elements of `predictions` and `labels`.

For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are three pairs of differences, which are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3

Note that since the inputs are of shape `[batch_size, d0,... dN]`, the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if `predictions` represents a batch of 16 grayscale images of shape `[batch_size, 100, 200]`, then the set of pairs is drawn from each image, but not across images.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector.
##### Parameters
`object` labels
The ground truth output tensor, whose shape must match the shape of `predictions`.
`object` predictions
The predicted outputs, a tensor of size `[batch_size, d0,.. dN]` where N+1 is the total number of dimensions in `predictions`.
`ImplicitContainer<T>` weights
Coefficients for the loss: a scalar, a tensor of shape `[batch_size]`, or a tensor whose shape matches `predictions`.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
##### Returns
`object`
A scalar `Tensor` that returns the weighted loss.

#### object mean_squared_error(float32 labels, object predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
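The weighting rules described above (scalar, per-sample `[batch_size]` vector, or full-shape tensor) can be sketched in NumPy. The helper name `weighted_mse` is hypothetical, and the final mean-over-nonzero-weights reduction is an assumption modeled on the library's default `SUM_BY_NONZERO_WEIGHTS` behavior; other `reduction` values aggregate differently.

```python
import numpy as np

def weighted_mse(labels, predictions, weights=1.0):
    """Hedged sketch of the weighting semantics: weights may be a scalar,
    a [batch_size] vector, or the same shape as predictions."""
    labels = np.asarray(labels, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if weights.ndim == 1:
        # [batch_size] vector: rescale each sample's whole loss,
        # so broadcast over the trailing dimensions.
        weights = weights.reshape((-1,) + (1,) * (labels.ndim - 1))
    losses = weights * (labels - predictions) ** 2
    # Mean over the elements that carry a nonzero weight.
    present = np.broadcast_to(weights != 0, losses.shape)
    return losses.sum() / max(present.sum(), 1)
```

With a scalar weight the result is the ordinary mean squared error scaled by that weight; a zero entry in a `[batch_size]` vector both removes that sample's error and excludes its elements from the averaging denominator.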
##### Parameters
`float32` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(float32 labels, object predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`float32` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IDictionary<object, object> labels, IEnumerable<IGraphNodeBase> predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IDictionary<object, object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IDictionary<object, object> labels, IEnumerable<IGraphNodeBase> predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IDictionary<object, object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IDictionary<object, object> labels, object predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IDictionary<object, object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IDictionary<object, object> labels, object predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IDictionary<object, object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IEnumerable<object> labels, IEnumerable<IGraphNodeBase> predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IEnumerable<object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IEnumerable<object> labels, IEnumerable<IGraphNodeBase> predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IEnumerable<object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IEnumerable<object> labels, object predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IEnumerable<object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object mean_squared_error(IEnumerable<object> labels, object predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IEnumerable<object>` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(IGraphNodeBase labels, IEnumerable<IGraphNodeBase> predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(IGraphNodeBase labels, IEnumerable<IGraphNodeBase> predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(float32 labels, IEnumerable<IGraphNodeBase> predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`float32` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(IGraphNodeBase labels, object predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(object labels, IEnumerable<IGraphNodeBase> predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(object labels, IEnumerable<IGraphNodeBase> predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(object labels, object predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(object labels, object predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(IGraphNodeBase labels, object predictions, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`IGraphNodeBase` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error(float32 labels, IEnumerable<IGraphNodeBase> predictions, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`float32` labels
The ground truth output tensor, same dimensions as 'predictions'.
`IEnumerable<IGraphNodeBase>` predictions
The predicted outputs.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectmean_squared_error_dyn(object labels, object predictions, ImplicitContainer<T> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Adds a Sum-of-Squares loss to the training procedure.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
##### Parameters
`object` labels
The ground truth output tensor, same dimensions as 'predictions'.
`object` predictions
The predicted outputs.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IDictionary<object, object> logits, double weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IDictionary<object, object>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.
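The label smoothing and per-sample weighting described above can be sketched in pure Python (illustrative only — the function name mimics, but is not part of, this API; the loss term uses the numerically stable form of `tf.nn.sigmoid_cross_entropy_with_logits`):

```python
import math

def sigmoid_xent(multi_class_labels, logits, weights=1.0, label_smoothing=0.0):
    """Hypothetical sketch of tf.losses.sigmoid_cross_entropy semantics."""
    # Smooth the {0, 1} labels towards 1/2, as in the formula above.
    labels = [z * (1 - label_smoothing) + 0.5 * label_smoothing
              for z in multi_class_labels]
    # Stable sigmoid cross-entropy: max(x, 0) - x*z + log(1 + exp(-|x|))
    losses = [max(x, 0.0) - x * z + math.log1p(math.exp(-abs(x)))
              for x, z in zip(logits, labels)]
    # Scalar weights scale everything; per-sample weights rescale elementwise.
    if isinstance(weights, (int, float)):
        weights = [weights] * len(losses)
    weighted = [l * w for l, w in zip(losses, weights)]
    nonzero = sum(1 for w in weights if w != 0)
    return sum(weighted) / nonzero if nonzero else 0.0
```

For a confidently correct prediction (label 1, large positive logit), smoothing pulls the target off 1 and therefore raises the loss slightly, which is the intended regularizing effect.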

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IDictionary<object, object> logits, double weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IDictionary<object, object>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IDictionary<object, object> logits, IGraphNodeBase weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IDictionary<object, object>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IDictionary<object, object> logits, IGraphNodeBase weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IDictionary<object, object>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, double weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, double weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, IGraphNodeBase weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IndexedSlices logits, IGraphNodeBase weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IndexedSlices` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IndexedSlices logits, double weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IndexedSlices` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### objectsigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IndexedSlices logits, IGraphNodeBase weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IndexedSlices` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### object sigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IGraphNodeBase logits, double weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.sigmoid_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IGraphNodeBase` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### object sigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IGraphNodeBase logits, double weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.sigmoid_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IGraphNodeBase` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### object sigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IGraphNodeBase logits, IGraphNodeBase weights, double label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.sigmoid_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IGraphNodeBase` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`double` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### object sigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IGraphNodeBase logits, IGraphNodeBase weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.sigmoid_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IGraphNodeBase` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### object sigmoid_cross_entropy(IGraphNodeBase multi_class_labels, IndexedSlices logits, double weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.sigmoid_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`IndexedSlices` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`double` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.
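
The interaction of `weights` with the reduction can be sketched in a few lines of plain Python (a hypothetical helper; it assumes the default reduction sums the weighted losses and divides by the number of nonzero weights, as `Reduction.SUM_BY_NONZERO_WEIGHTS` does):

```python
def reduce_weighted_loss(per_sample_losses, weights,
                         reduction="SUM_BY_NONZERO_WEIGHTS"):
    # A scalar weight simply scales every sample; a [batch_size] list
    # rescales each sample's loss by the corresponding element.
    if not isinstance(weights, (list, tuple)):
        weights = [weights] * len(per_sample_losses)
    weighted = [l * w for l, w in zip(per_sample_losses, weights)]
    if reduction == "NONE":
        return weighted  # same shape as the per-sample losses
    # Default: sum, then divide by the number of nonzero weights.
    nonzero = sum(1 for w in weights if w != 0.0)
    return sum(weighted) / max(nonzero, 1)

print(reduce_weighted_loss([1.0, 3.0], 2.0))                 # scalar weight -> 4.0
print(reduce_weighted_loss([1.0, 3.0], [1.0, 0.0]))          # masked sample -> 1.0
print(reduce_weighted_loss([1.0, 3.0], [1.0, 0.0], "NONE"))  # -> [1.0, 0.0]
```

Note that a zero weight both removes a sample's loss and excludes it from the denominator, so masked samples do not dilute the average.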

#### object sigmoid_cross_entropy(IGraphNodeBase multi_class_labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, IGraphNodeBase weights, int label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.sigmoid_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`IGraphNodeBase` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`int` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### object sigmoid_cross_entropy_dyn(object multi_class_labels, object logits, ImplicitContainer<T> weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.sigmoid_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/2:

`new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing`
##### Parameters
`object` multi_class_labels
`[batch_size, num_classes]` target integer labels in `{0, 1}`.
`object` logits
Float `[batch_size, num_classes]` logits outputs of the network.
`ImplicitContainer<T>` weights
Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
`ImplicitContainer<T>` label_smoothing
If greater than `0` then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `logits`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, IEnumerable<IGraphNodeBase> logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`IEnumerable<IGraphNodeBase>` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.
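
The softmax smoothing rule and the per-sample loss can be illustrated with a small pure-Python sketch (hypothetical helper names; the log-sum-exp trick stands in for the stable computation `tf.nn.softmax_cross_entropy_with_logits_v2` performs internally):

```python
import math

def smooth_onehot(onehot, label_smoothing):
    # Smooth towards 1/num_classes, per the rule above;
    # the smoothed entries still sum to 1.
    num_classes = len(onehot)
    return [y * (1.0 - label_smoothing) + label_smoothing / num_classes
            for y in onehot]

def softmax_cross_entropy(onehot, logits):
    # Cross-entropy of one sample against softmax(logits),
    # using log-sum-exp for numerical stability.
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return sum(y * (lse - x) for y, x in zip(onehot, logits))

smoothed = smooth_onehot([0.0, 0.0, 1.0, 0.0], label_smoothing=0.1)
# hot entry ≈ 0.925, the rest ≈ 0.025 each
loss = softmax_cross_entropy(smoothed, [2.0, -1.0, 3.0, 0.5])
```

Applying this per row of a `[batch_size, num_classes]` input yields the `[batch_size]` loss described above, which the reduction then collapses to a scalar unless `NONE` is requested.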

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, IEnumerable<IGraphNodeBase> logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`IEnumerable<IGraphNodeBase>` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, AttentionWrapperState logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`AttentionWrapperState` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, AttentionWrapperState logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`AttentionWrapperState` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, IGraphNodeBase logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`IGraphNodeBase` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, IGraphNodeBase logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`IGraphNodeBase` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, object logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`object` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, string logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`string` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, string logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`string` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, IEnumerable<IGraphNodeBase> logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`IEnumerable<IGraphNodeBase>` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, IEnumerable<IGraphNodeBase> logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`IEnumerable<IGraphNodeBase>` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0 then smooth the labels.
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, AttentionWrapperState logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:

`new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to the loss, whose shape is determined by the shape of `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`AttentionWrapperState` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.
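The label-smoothing step described above can be written out directly. A minimal NumPy sketch (`smooth_labels` is a hypothetical helper, not part of the binding):

```python
import numpy as np

def smooth_labels(onehot_labels, label_smoothing):
    # new_onehot_labels = onehot_labels * (1 - label_smoothing)
    #                     + label_smoothing / num_classes
    num_classes = onehot_labels.shape[-1]
    return onehot_labels * (1.0 - label_smoothing) + label_smoothing / num_classes

onehot = np.array([[0.0, 1.0, 0.0]])
smoothed = smooth_labels(onehot, 0.1)
# Each row still sums to 1; the "hot" entry shrinks toward 1, and every
# other entry rises to label_smoothing / num_classes.
```

With `label_smoothing = 0.1` and three classes, the hot entry becomes `0.9 + 0.1/3` and the others `0.1/3`.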

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, AttentionWrapperState logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`AttentionWrapperState` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, IGraphNodeBase logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`IGraphNodeBase` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.
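The `reduction` parameter decides whether the weighted per-sample losses are returned as-is (`NONE`) or collapsed to a scalar. A NumPy sketch of a sum-divided-by-nonzero-weights style reduction, assuming the per-sample losses are already computed (`reduce_loss` is a hypothetical helper, not the binding's implementation):

```python
import numpy as np

def reduce_loss(per_sample_losses, weights):
    # Weight each sample's loss, then average over samples whose weight
    # is nonzero (zero-weighted samples are excluded from the mean).
    weighted = per_sample_losses * weights
    num_nonzero = np.count_nonzero(np.broadcast_to(weights, weighted.shape))
    return weighted.sum() / max(num_nonzero, 1)

per_sample = np.array([0.4, 1.2, 0.8])
weights = np.array([1.0, 1.0, 0.0])  # third sample masked out
scalar_loss = reduce_loss(per_sample, weights)  # (0.4 + 1.2) / 2 = 0.8
```

Masking a sample with a zero weight removes it from both the numerator and the denominator, so it does not dilute the average.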

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, IGraphNodeBase logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`IGraphNodeBase` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, object logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`object` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, object logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`object` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, string logits, double weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`string` logits
Logits outputs of the network.
`double` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IGraphNodeBase onehot_labels, string logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IGraphNodeBase` onehot_labels
One-hot-encoded labels.
`string` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy(IDictionary<object, object> onehot_labels, object logits, IGraphNodeBase weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`IDictionary<object, object>` onehot_labels
One-hot-encoded labels.
`object` logits
Logits outputs of the network.
`IGraphNodeBase` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object softmax_cross_entropy_dyn(object onehot_labels, object logits, ImplicitContainer<T> weights, ImplicitContainer<T> label_smoothing, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits_v2`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`: `new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes`.

Note that `onehot_labels` and `logits` must have the same shape, e.g. `[batch_size, num_classes]`. The shape of `weights` must be broadcastable to loss, whose shape is decided by the shape of `logits`. In case the shape of `logits` is `[batch_size, num_classes]`, loss is a `Tensor` of shape `[batch_size]`.
##### Parameters
`object` onehot_labels
One-hot-encoded labels.
`object` logits
Logits outputs of the network.
`ImplicitContainer<T>` weights
Optional `Tensor` that is broadcastable to loss.
`ImplicitContainer<T>` label_smoothing
If greater than 0, then smooth the labels.
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IEnumerable<IGraphNodeBase> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IEnumerable<IGraphNodeBase>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
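Unlike the dense variant above, the sparse form takes integer class indices instead of one-hot rows; each index must lie in `[0, num_classes)`. A NumPy sketch of the underlying computation (illustrative only; `sparse_softmax_cross_entropy_loss` is a hypothetical helper, not the binding's API):

```python
import numpy as np

def sparse_softmax_cross_entropy_loss(labels, logits, weights=1.0):
    # Stable log-softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Pick out the log-probability of each sample's true class.
    per_sample = -log_softmax[np.arange(labels.shape[0]), labels]
    return per_sample * weights

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
labels = np.array([0, 1])  # integer class indices, shape [batch_size]
loss = sparse_softmax_cross_entropy_loss(labels, logits)  # shape [batch_size]
```

This is numerically equivalent to the dense form applied to `np.eye(num_classes)[labels]`, but avoids materializing the one-hot matrix.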

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IndexedSlices logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IndexedSlices` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IEnumerable<IGraphNodeBase> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IEnumerable<IGraphNodeBase>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IndexedSlices logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IndexedSlices` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IndexedSlices logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IndexedSlices` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IGraphNodeBase` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IGraphNodeBase logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IGraphNodeBase` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IDictionary<object, object> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IDictionary<object, object>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
The scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
Collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IDictionary<object, object> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IDictionary<object, object>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IEnumerable<IGraphNodeBase> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IEnumerable<IGraphNodeBase>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IEnumerable<IGraphNodeBase> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IEnumerable<IGraphNodeBase>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IDictionary<object, object>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IndexedSlices logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IndexedSlices` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IGraphNodeBase logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IGraphNodeBase` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IndexedSlices labels, IGraphNodeBase logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IndexedSlices` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IGraphNodeBase` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IDictionary<object, object> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IDictionary<object, object>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IDictionary<object, object> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IDictionary<object, object>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IEnumerable<IGraphNodeBase> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IEnumerable<IGraphNodeBase>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IEnumerable<IGraphNodeBase> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IEnumerable<IGraphNodeBase>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, ValueTuple<PythonClassContainer, PythonClassContainer> logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`ValueTuple<PythonClassContainer, PythonClassContainer>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IndexedSlices logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IndexedSlices` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IndexedSlices logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IndexedSlices` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IGraphNodeBase logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IGraphNodeBase` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(IGraphNodeBase labels, IGraphNodeBase logits, IGraphNodeBase weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`IGraphNodeBase` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IGraphNodeBase` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`IGraphNodeBase` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy(ValueTuple<PythonClassContainer, PythonClassContainer> labels, IDictionary<object, object> logits, double weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`ValueTuple<PythonClassContainer, PythonClassContainer>` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`IDictionary<object, object>` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`double` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.

#### object sparse_softmax_cross_entropy_dyn(object labels, object logits, ImplicitContainer<T> weights, object scope, ImplicitContainer<T> loss_collection, ImplicitContainer<T> reduction)

Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the loss weights apply to each corresponding sample.
##### Parameters
`object` labels
`Tensor` of shape `[d_0, d_1,..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU.
`object` logits
Unscaled log probabilities of shape `[d_0, d_1,..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or `float64`.
`ImplicitContainer<T>` weights
Coefficients for the loss. This must be scalar or broadcastable to `labels` (i.e. same rank and each dimension is either 1 or the same).
`object` scope
the scope for the operations performed in computing the loss.
`ImplicitContainer<T>` loss_collection
collection to which the loss will be added.
`ImplicitContainer<T>` reduction
Type of reduction to apply to loss.
##### Returns
`object`
Weighted loss `Tensor` of the same type as `logits`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar.
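The shape contract in the Returns section — per-entry losses under the `NONE` reduction, with `labels` of shape `[d_0, d_1, ..., d_{r-1}]` and `logits` carrying one extra `num_classes` dimension — can be illustrated in plain Python. This is a sketch of the op's semantics, not the C# binding's API; `per_sample_xent` is a hypothetical helper.

```python
import math

def per_sample_xent(logits, labels):
    """Reduction NONE: one loss per `labels` entry, same shape as `labels`."""
    out = []
    for row, label in zip(logits, labels):
        m = max(row)  # stabilize the log-sum-exp
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        out.append(log_z - row[label])  # -log softmax(row)[label]
    return out

# Two samples, three classes: logits have shape [2, 3], labels shape [2],
# so the NONE-reduced result also has shape [2].
losses = per_sample_xent([[2.0, 1.0, 0.1], [0.5, 2.5, 0.3]], [0, 1])
```

Any other reduction collapses this per-entry vector to a scalar, which is why the documented return is "same shape as `labels`" only for `NONE`.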