LostTech.TensorFlow : API Documentation

Type DynamicLossScale

Namespace tensorflow.train.experimental

Parent LossScale

Interfaces IDynamicLossScale

Loss scale that dynamically adjusts itself.

Dynamic loss scaling works by adjusting the loss scale as training progresses. The goal is to keep the loss scale as high as possible without the gradients overflowing. As long as the gradients do not overflow, raising the loss scale never hurts.

The algorithm starts by setting the loss scale to an initial value. Every N steps in which the gradients are finite, the loss scale is multiplied by a factor (the multiplier). However, if a NaN or Inf gradient is found, the gradients for that step are not applied, and the loss scale is divided by that same factor. This process tends to keep the loss scale as high as possible without gradients overflowing.
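The update rule above can be sketched in plain Python. This is a minimal illustrative sketch, not the library's implementation; the class name `DynamicLossScaleSketch` and the default values (initial scale 2**15, increment period 2000, multiplier 2) are assumptions chosen to mirror the properties listed below.

```python
import math

class DynamicLossScaleSketch:
    """Hypothetical sketch of the dynamic loss-scaling update rule."""

    def __init__(self, initial_loss_scale=2 ** 15,
                 increment_period=2000, multiplier=2.0):
        self.loss_scale = float(initial_loss_scale)
        self.increment_period = increment_period
        self.multiplier = multiplier
        self.num_good_steps = 0  # consecutive steps with finite gradients

    def update(self, grads):
        """Adjust the loss scale given this step's gradients.

        Returns True if the gradients are finite and should be applied,
        False if the step should be skipped."""
        if all(math.isfinite(g) for g in grads):
            self.num_good_steps += 1
            if self.num_good_steps >= self.increment_period:
                # N consecutive finite steps: raise the scale
                self.loss_scale *= self.multiplier
                self.num_good_steps = 0
            return True
        # NaN/Inf found: skip this step's update and shrink the scale
        self.loss_scale = max(1.0, self.loss_scale / self.multiplier)
        self.num_good_steps = 0
        return False
```

For example, with `increment_period=2` and `multiplier=2.0`, two finite steps double the scale, and a single NaN step halves it again while reporting that the step must be skipped.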


Public properties

int increment_period get;

object increment_period_dyn get;

double initial_loss_scale get;

object initial_loss_scale_dyn get;

object multiplier get;

object multiplier_dyn get;

object PythonObject get;