LostTech.TensorFlow : API Documentation

Type AdamOptimizer

Namespace tensorflow.train

Parent Optimizer

Interfaces IAdamOptimizer

Optimizer that implements the Adam algorithm.

See [Kingma et al., 2014](http://arxiv.org/abs/1412.6980) ([pdf](http://arxiv.org/pdf/1412.6980.pdf)).

Public static methods

AdamOptimizer NewDyn(ImplicitContainer<T> learning_rate, ImplicitContainer<T> beta1, ImplicitContainer<T> beta2, ImplicitContainer<T> epsilon, ImplicitContainer<T> use_locking, ImplicitContainer<T> name)

Construct a new Adam optimizer.

Initialization:

$$m_0 := 0 \quad \text{(Initialize 1st moment vector)}$$ $$v_0 := 0 \quad \text{(Initialize 2nd moment vector)}$$ $$t := 0 \quad \text{(Initialize timestep)}$$

The update rule for `variable` with gradient `g` uses an optimization described at the end of section 2 of the paper:

$$t := t + 1$$ $$lr_t := \text{learning\_rate} * \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$

$$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$ $$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$ $$\text{variable} := \text{variable} - lr_t * m_t / (\sqrt{v_t} + \epsilon)$$
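To make these formulas concrete, the following is a minimal, self-contained C# sketch of one dense Adam step over a small parameter vector. It only illustrates the arithmetic above (with `epsilon` playing the "epsilon hat" role) and is not the library's actual implementation; all names and values are chosen for readability.

```csharp
// Illustrative only: one dense Adam step, following the update rule above.
using System;

static class AdamStepDemo
{
    static void Main()
    {
        double learningRate = 0.001, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8;

        double[] variable = { 1.0, -2.0, 0.5 };   // parameters to update
        double[] g        = { 0.1,  0.3, -0.2 };  // gradient for this step
        double[] m = new double[variable.Length]; // 1st moment vector, m_0 = 0
        double[] v = new double[variable.Length]; // 2nd moment vector, v_0 = 0
        int t = 0;                                // timestep, t = 0

        // One update step:
        t += 1;
        double lrT = learningRate * Math.Sqrt(1 - Math.Pow(beta2, t)) / (1 - Math.Pow(beta1, t));
        for (int i = 0; i < variable.Length; i++)
        {
            m[i] = beta1 * m[i] + (1 - beta1) * g[i];
            v[i] = beta2 * v[i] + (1 - beta2) * g[i] * g[i];
            variable[i] -= lrT * m[i] / (Math.Sqrt(v[i]) + epsilon);
        }

        Console.WriteLine(string.Join(", ", variable));
    }
}
```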

The default value of 1e-8 for `epsilon` may not be a good choice in general. For example, when training an Inception network on ImageNet, a current good choice is 1.0 or 0.1. Note that since AdamOptimizer uses the formulation just before Section 2.1 of the Kingma and Ba paper rather than the formulation in Algorithm 1, the "epsilon" referred to here is "epsilon hat" in the paper.

The sparse implementation of this algorithm (used when the gradient is an `IndexedSlices` object, typically because of `tf.gather` or an embedding lookup in the forward pass) does apply momentum to variable slices even if they were not used in the forward pass (meaning they have a gradient equal to zero). Momentum decay (`beta1`) is also applied to the entire momentum accumulator. This means that the sparse behavior is equivalent to the dense behavior, in contrast to some momentum implementations that ignore momentum unless a variable slice was actually used.
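As a small numeric illustration of this point (plain arithmetic again, not library code), the snippet below updates a single slice whose gradient is zero: its accumulated first moment is decayed by `beta1` rather than skipped, so the variable still moves.

```csharp
// Illustrative only: a slice with zero gradient still receives an update,
// because its accumulated momentum is decayed rather than ignored.
using System;

double beta1 = 0.9, beta2 = 0.999, lrT = 0.001, epsilon = 1e-8;

double m = 0.5;        // 1st moment accumulated from earlier steps
double v = 0.25;       // 2nd moment accumulated from earlier steps
double variable = 1.0; // a slice that was NOT used in the forward pass
double g = 0.0;        // so its gradient slice is zero

m = beta1 * m + (1 - beta1) * g;          // decays to 0.45, not reset or skipped
v = beta2 * v + (1 - beta2) * g * g;
variable -= lrT * m / (Math.Sqrt(v) + epsilon);

Console.WriteLine(variable);              // slightly below 1.0: the slice moved
```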
Parameters
ImplicitContainer<T> learning_rate
A Tensor or a floating point value. The learning rate.
ImplicitContainer<T> beta1
A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
ImplicitContainer<T> beta2
A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.
ImplicitContainer<T> epsilon
A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper.
ImplicitContainer<T> use_locking
If True, use locks for update operations.
ImplicitContainer<T> name
Optional name for the operations created when applying gradients. Defaults to "Adam". Eager compatibility: when eager execution is enabled, `learning_rate`, `beta1`, `beta2`, and `epsilon` can each be a callable that takes no arguments and returns the actual value to use. This can be useful for changing these values across different invocations of optimizer functions.
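Putting the parameters together, the following is a minimal construction sketch. It uses the NewDyn signature documented above and assumes the ImplicitContainer<T> parameters accept plain .NET values through implicit conversion (the <T> in the generated signature is taken here as a documentation placeholder); the rest of the training setup is not shown.

```csharp
// Hedged sketch: construct the optimizer via the documented NewDyn overload.
// Assumes ImplicitContainer<T> implicitly wraps plain numeric, boolean, and
// string values; the namespace follows the "tensorflow.train" listed above.
using tensorflow.train;

var adam = AdamOptimizer.NewDyn(
    learning_rate: 0.001,
    beta1: 0.9,
    beta2: 0.999,
    epsilon: 0.1,          // larger epsilon, per the ImageNet-scale note above
    use_locking: false,
    name: "Adam");
```

The epsilon value of 0.1 follows the suggestion above for ImageNet-scale models; because this epsilon is the "epsilon hat" of the update rule, values far larger than 1e-8 can be reasonable.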

Public properties

object PythonObject { get; }