LostTech.TensorFlow : API Documentation

Type LossScaleOptimizer

Namespace tensorflow.contrib.mixed_precision

Parent Optimizer

Interfaces ILossScaleOptimizer

An optimizer that applies loss scaling in backprop.

This class is useful for "mixed precision training" on GPUs (or other potential accelerators), an approach to improve compute throughput without compromising model quality.

The canonical way to perform mixed precision training is the following:

* Model variables are kept in high precision (e.g. float32).
* Computations are done in lower precision (e.g. float16), which enjoys a performance speedup by virtue of hardware support. Variables are cast to lower precision before they are used.
* Final gradients are cast back to the high precision dtype, then used to update the variables.
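As a rough illustration of this recipe (a sketch only; the layer name `dense_fp16` is hypothetical and not part of this API), a forward pass can keep its variable in float32 while running the matmul in float16:

```
import tensorflow as tf

# Hypothetical layer illustrating the recipe above.
def dense_fp16(x, units):
    # The variable is kept in high precision (float32).
    w = tf.get_variable("w", [int(x.shape[-1]), units], dtype=tf.float32)
    # Inputs and variable are cast to float16 so the matmul runs in low precision.
    y = tf.matmul(tf.cast(x, tf.float16), tf.cast(w, tf.float16))
    # Gradients with respect to w are cast back to float32 before the update.
    return y
```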

The side effect of performing computation in lower precision is a smaller numerical range. During backpropagation, small gradients may underflow in the reduced numerical range, causing the model to converge to a suboptimal level.
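For intuition (a NumPy sketch, not tied to this API): a gradient that is perfectly representable in float32 can flush to zero in float16, while scaling it up first keeps it representable:

```
import numpy as np

g = np.float32(1e-8)          # representable in float32
print(np.float16(g))          # 0.0 -- underflows in float16, the update signal is lost
print(np.float16(g * 5000))   # ~5e-05 -- scaling keeps it inside float16's range
```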

To prevent underflow, this optimizer multiplies the loss by a factor before backprop starts. Consequently, the gradients are scaled up by the same factor and thus stay out of the underflow zone. Afterwards, to preserve the correctness of backprop, the gradients are scaled back down by the same factor, cast to the (higher) variable precision, and then applied to the variables.
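Conceptually, the wrapper does something like the following hand-rolled sketch in plain TF 1.x ops (assuming `loss` and `opt` are defined as in the example further below; the real class handles all of this for you):

```
import tensorflow as tf

loss_scale = 5000.0

# Scale the loss so small gradients stay above float16's underflow threshold.
scaled_loss = loss * loss_scale
variables = tf.trainable_variables()
scaled_grads = tf.gradients(scaled_loss, variables)

# Un-scale and cast the gradients back to the variables' (float32) precision
# before applying them.
grads = [tf.cast(g, tf.float32) / loss_scale if g is not None else None
         for g in scaled_grads]
train_op = opt.apply_gradients(list(zip(grads, variables)))
```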

See [Nvidia's manual on mixed precision training]( https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html) for more details.

To use the loss scale optimizer, one only needs to choose a loss scaling strategy and wrap a regular optimizer. See the examples below.

```
loss = loss_fn()
opt = tf.train.AdamOptimizer(learning_rate=...)

# Choose a loss scale manager which decides how to pick the right loss scale
# throughout the training process.
loss_scale_manager = tf.contrib.mixed_precision.FixedLossScaleManager(5000)

# Wrap the original optimizer in a LossScaleOptimizer.
loss_scale_optimizer = tf.contrib.mixed_precision.LossScaleOptimizer(
    opt, loss_scale_manager)

# Call minimize() on the loss scale optimizer.
train_op = loss_scale_optimizer.minimize(loss)
```

If gradient clipping is applied, one can call `optimizer.compute_gradients()` and `optimizer.apply_gradients()` separately.
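For instance (a sketch assuming the `loss_scale_optimizer` from the example above; the clipping norm of 1.0 is an arbitrary illustration):

```
# Compute gradients explicitly; the wrapper scales the loss internally and
# returns gradients that are already scaled back down.
grads_and_vars = loss_scale_optimizer.compute_gradients(loss)

# Apply custom gradient clipping before the update.
clipped = [(tf.clip_by_norm(g, 1.0), v)
           for g, v in grads_and_vars if g is not None]

train_op = loss_scale_optimizer.apply_gradients(clipped)
```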

Note that the following way of using LossScaleOptimizer is incorrect. When doing mixed precision training, always use `loss_scale_optimizer.compute_gradients()` instead of `tf.gradients()` to compute gradients.

```
# The following is a wrong way to use LossScaleOptimizer along with
# tf.gradients().

# Always use loss_scale_optimizer.compute_gradients() to compute grads, or
# the loss scale is not correctly applied.
grads = tf.gradients(loss, ...)

# Do some custom grad clipping.
grads = clip_grads(grads, ...)

loss_scale_optimizer.apply_gradients(grads_and_vars)
```

Properties

Public properties

object PythonObject get;