LostTech.TensorFlow : API Documentation

Type LossScale

Namespace tensorflow.train.experimental

Parent PythonObjectContainer

Interfaces Trackable, ILossScale

Loss scale base class.

Loss scaling is a process that multiplies the loss by a multiplier called the loss scale, and divides each gradient by the same multiplier. The pseudocode for this process is:

```
loss = ...
loss *= loss_scale
grads = gradients(loss, vars)
grads /= loss_scale
```

Mathematically, loss scaling has no effect, but it can help avoid numerical underflow in intermediate gradients when float16 tensors are used for mixed precision training. Because the loss is multiplied before the gradients are computed, each intermediate gradient carries the same multiplier.
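To make the underflow point concrete, the following standalone C# snippet uses `System.Half` (available since .NET 5) to show a gradient value that rounds to zero in float16 but survives when the loss, and therefore every gradient, is multiplied by a loss scale first. The constants are illustrative only; this is not library code.

```
using System;

class LossScaleUnderflowDemo
{
    static void Main()
    {
        const float tinyGradient = 1e-8f;   // below float16's smallest subnormal (~6e-8)
        const float lossScale = 1024f;

        Half unscaled = (Half)tinyGradient;              // underflows to 0 in float16
        Half scaled = (Half)(tinyGradient * lossScale);  // ~1.02e-5, still representable

        Console.WriteLine($"without scaling: {unscaled}");                  // 0
        Console.WriteLine($"with scaling:    {scaled}");                    // ~1.02E-05
        Console.WriteLine($"after unscaling: {(float)scaled / lossScale}"); // ~1e-8 recovered
    }
}
```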

Instances of this class represent a loss scale. Calling an instance returns the current loss scale as a scalar float32 tensor, while the `update()` method adjusts the loss scale depending on the values of the gradients. Optimizers use instances of this class to scale the loss and the gradients.
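How `update()` changes the scale depends on the concrete subclass. As a rough sketch of the dynamic policy, the toy class below shrinks the scale when any gradient is non-finite and grows it after a long streak of finite steps; the initial value, growth period, and multiplier are assumptions, and this stand-in is not the library's implementation.

```
using System;
using System.Collections.Generic;
using System.Linq;

// Toy stand-in for a dynamic loss-scale policy (illustration only).
class ToyDynamicLossScale
{
    public float Scale { get; private set; } = 32768f;  // assumed initial scale
    int finiteSteps;
    const int GrowthPeriod = 2000;                      // assumed growth period
    const float Multiplier = 2f;                        // assumed growth/shrink factor

    public void Update(IEnumerable<float> grads)
    {
        bool allFinite = grads.All(g => !float.IsNaN(g) && !float.IsInfinity(g));
        if (!allFinite)
        {
            // A non-finite gradient usually means the scale caused an overflow: back off.
            Scale = Math.Max(1f, Scale / Multiplier);
            finiteSteps = 0;
        }
        else if (++finiteSteps >= GrowthPeriod)
        {
            // A long finite streak suggests a larger scale is safe to try.
            Scale *= Multiplier;
            finiteSteps = 0;
        }
    }
}
```

In the library, this role is played by the `update()` overloads listed below, which a mixed-precision optimizer calls once per step with the freshly computed gradients.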

Public instance methods

object update(IEnumerable<object> grads)

Updates the loss scale based on whether the gradients are finite in the current step.

object update(IGraphNodeBase grads)

Updates the loss scale based on whether the gradients are finite in the current step.

object update_dyn(object grads)

Updates the loss scale based on whether the gradients are finite in the current step.

Public static methods

object from_config_dyn<TClass>(object config)

Creates the LossScale from its config.

TClass from_config<TClass>(IDictionary<object, object> config)

Creates the LossScale from its config.
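A hedged sketch of rebuilding a loss scale from a config dictionary. It assumes a `DynamicLossScale` subclass in the same namespace and the usual config keys (`initial_loss_scale`, `increment_period`, `multiplier`); the keys actually accepted depend on the concrete subclass, and a real config would normally come from the instance's own serialized configuration.

```
using System.Collections.Generic;
using tensorflow.train.experimental;

var config = new Dictionary<object, object>
{
    ["initial_loss_scale"] = 32768.0,   // assumed keys for DynamicLossScale
    ["increment_period"] = 2000,
    ["multiplier"] = 2.0,
};
var lossScale = LossScale.from_config<DynamicLossScale>(config);
```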

Public properties

object PythonObject get;