LostTech.TensorFlow : API Documentation

Type NoisyLinearCosineDecay

Namespace tensorflow.keras.experimental

Parent LearningRateSchedule

Interfaces INoisyLinearCosineDecay

A LearningRateSchedule that uses a noisy linear cosine decay schedule.

Public static methods

NoisyLinearCosineDecay NewDyn(object initial_learning_rate, object decay_steps, ImplicitContainer<T> initial_variance, ImplicitContainer<T> variance_decay, ImplicitContainer<T> num_periods, ImplicitContainer<T> alpha, ImplicitContainer<T> beta, object name)

Applies noisy linear cosine decay to the learning rate.

See [Bello et al., ICML 2017] Neural Optimizer Search with Reinforcement Learning. https://arxiv.org/abs/1709.07417

For the idea of warm starts here, controlled by `num_periods`, see [Loshchilov & Hutter, ICLR 2017] SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983

Note that linear cosine decay is more aggressive than cosine decay, so larger initial learning rates can typically be used.

When training a model, it is often useful to lower the learning rate as training progresses. This schedule applies a noisy linear cosine decay function to an optimizer step, given a provided initial learning rate. It requires a `step` value to compute the decayed learning rate; you can simply pass a TensorFlow variable that you increment at each training step.

The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as shown in the example below, where `eps_t` is 0-centered Gaussian noise with variance `initial_variance / (1 + global_step) ** variance_decay`.

Example usage: you can pass this schedule directly into a `tf.keras.optimizers.Optimizer` as the learning rate, as sketched below. The learning rate schedule is also serializable and deserializable using `tf.keras.optimizers.schedules.serialize` and `tf.keras.optimizers.schedules.deserialize`.
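
A minimal C# sketch of that usage. The `NewDyn` call matches the signature documented below; the parameter values are illustrative (they mirror the usual TensorFlow Python defaults), and the `SGD` constructor wiring and the implicit numeric conversion into `ImplicitContainer<T>` are assumptions about this binding, not verified API:

    // Illustrative values; named ImplicitContainer<T> arguments are assumed
    // to accept plain numeric values via implicit conversion.
    var schedule = NoisyLinearCosineDecay.NewDyn(
        initial_learning_rate: 0.1,
        decay_steps: 1000,
        initial_variance: 1.0,
        variance_decay: 0.55,
        num_periods: 0.5,
        alpha: 0.0,
        beta: 0.001,
        name: "NoisyLinearCosineDecay");

    // Hypothetical optimizer wiring, mirroring the Python pattern
    // tf.keras.optimizers.SGD(learning_rate=schedule):
    var optimizer = new tensorflow.keras.optimizers.SGD(learning_rate: schedule);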
Parameters
object initial_learning_rate
A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
object decay_steps
A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
ImplicitContainer<T> initial_variance
Initial variance for the noise. See the computation above.
ImplicitContainer<T> variance_decay
Decay rate for the noise's variance. See the computation above.
ImplicitContainer<T> num_periods
Number of periods in the cosine part of the decay. See the computation above.
ImplicitContainer<T> alpha
A float added to the linear decay term. See the computation above.
ImplicitContainer<T> beta
A float added after the cosine modulation. See the computation above.
object name
String. Optional name of the operation. Defaults to 'NoisyLinearCosineDecay'.
Returns
NoisyLinearCosineDecay
A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar `Tensor` of the same type as `initial_learning_rate`.
Show Example
def decayed_learning_rate(step):
    step = min(step, decay_steps)
    linear_decay = (decay_steps - step) / decay_steps
    cosine_decay = 0.5 * (
        1 + cos(pi * 2 * num_periods * step / decay_steps))
    decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta
    return initial_learning_rate * decayed
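
For intuition, the same computation can be checked outside TensorFlow. The following self-contained C# sketch mirrors the pseudocode above, drawing `eps_t` with a Box-Muller Gaussian sample of the variance described earlier; it is illustrative only and not part of this API, and all parameter values are assumptions:

    using System;

    static class NoisyLinearCosineDecayDemo
    {
        // Mirrors decayed_learning_rate(step) above; values are illustrative.
        static double DecayedLearningRate(double initialLearningRate, int step,
            int decaySteps, double initialVariance, double varianceDecay,
            double numPeriods, double alpha, double beta, Random rng)
        {
            step = Math.Min(step, decaySteps);
            double linearDecay = (decaySteps - step) / (double)decaySteps;
            double cosineDecay =
                0.5 * (1 + Math.Cos(Math.PI * 2 * numPeriods * step / decaySteps));

            // eps_t: 0-centered Gaussian noise with variance
            // initial_variance / (1 + step) ** variance_decay (Box-Muller draw).
            double variance = initialVariance / Math.Pow(1 + step, varianceDecay);
            double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
            double epsT = Math.Sqrt(variance)
                * Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);

            double decayed = (alpha + linearDecay + epsT) * cosineDecay + beta;
            return initialLearningRate * decayed;
        }

        static void Main()
        {
            var rng = new Random(42);
            for (int step = 0; step <= 1000; step += 250)
                Console.WriteLine($"step {step}: lr = " +
                    DecayedLearningRate(0.1, step, 1000, 1.0, 0.55, 0.5, 0.0, 0.001, rng));
        }
    }

Note how the deterministic part decays to `initial_learning_rate * beta` at `step == decay_steps`, while the noise term shrinks over time as its variance decays.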

Public properties

double alpha get; set;

double beta get; set;

int decay_steps get; set;

double initial_learning_rate get; set;

double initial_variance get; set;

object name get; set;

double num_periods get; set;

object PythonObject get;

double variance_decay get; set;