Type Adamax
Namespace tensorflow.keras.optimizers
Parent Optimizer
Interfaces IAdamax
Optimizer that implements the Adamax algorithm. It is a variant of Adam based on the infinity norm.
Default parameters follow those provided in the paper.
Adamax is sometimes superior to Adam, especially in models with embeddings.

References

See Section 7 of [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)
([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
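A minimal construction sketch, using the underlying Python `tf.keras` API that this type wraps (the hyperparameter values shown are the paper's defaults):

```python
import tensorflow as tf

# Adamax with the default hyperparameters from the paper.
optimizer = tf.keras.optimizers.Adamax(
    learning_rate=0.001,  # step size
    beta_1=0.9,           # decay rate for the 1st moment estimates
    beta_2=0.999,         # decay rate for the infinity norm
    epsilon=1e-7,         # small constant for numerical stability
)

# Typical use: hand the optimizer to model.compile.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=optimizer, loss="mse")
```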
Public static methods
Adamax NewDyn(ImplicitContainer<T> learning_rate, ImplicitContainer<T> beta_1, ImplicitContainer<T> beta_2, ImplicitContainer<T> epsilon, ImplicitContainer<T> name, IDictionary<string, object> kwargs)
Construct a new Adamax optimizer.

Initialization:

```
m_0 <- 0 (Initialize the 1st moment vector)
v_0 <- 0 (Initialize the exponentially weighted infinity norm)
t <- 0 (Initialize the timestep)
```

The update rule for `variable` with gradient `g` uses an optimization
described at the end of Section 7.1 of the paper:

```
t <- t + 1
m_t <- beta1 * m_{t-1} + (1 - beta1) * g
v_t <- max(beta2 * v_{t-1}, abs(g))
variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon)
```

As in AdamOptimizer, epsilon is added for numerical stability (in particular, to avoid division by zero when `v_t = 0`).

In contrast to AdamOptimizer, the sparse implementation of this algorithm (used when the gradient is an IndexedSlices object, typically because of `tf.gather` or an embedding lookup in the forward pass) only updates variable slices and the corresponding `m_t` and `v_t` terms when that part of the variable was used in the forward pass. The sparse behavior thus differs from the dense behavior (similar to some momentum implementations, which ignore momentum unless a variable slice was actually used).
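To make the dense update concrete, here is a minimal NumPy sketch of a single Adamax step; it is a direct transcription of the pseudocode above, not the library's implementation:

```python
import numpy as np

def adamax_step(variable, g, m, v, t,
                learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-7):
    """Apply one dense Adamax update, following the pseudocode above."""
    t = t + 1
    m = beta1 * m + (1 - beta1) * g        # biased 1st moment estimate
    v = np.maximum(beta2 * v, np.abs(g))   # exponentially weighted infinity norm
    variable = variable - learning_rate / (1 - beta1 ** t) * m / (v + epsilon)
    return variable, m, v, t

# Toy example: minimize 0.5 * x^2, whose gradient is x itself.
x = np.array([1.0, -2.0])
m, v, t = np.zeros_like(x), np.zeros_like(x), 0
for _ in range(5):
    x, m, v, t = adamax_step(x, x, m, v, t)
```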
Parameters

- `learning_rate` (ImplicitContainer&lt;T&gt;): A Tensor or a floating point value. The learning rate.
- `beta_1` (ImplicitContainer&lt;T&gt;): A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
- `beta_2` (ImplicitContainer&lt;T&gt;): A float value or a constant float tensor. The exponential decay rate for the exponentially weighted infinity norm.
- `epsilon` (ImplicitContainer&lt;T&gt;): A small constant for numerical stability.
- `name` (ImplicitContainer&lt;T&gt;): Optional name for the operations created when applying gradients. Defaults to "Adamax".
- `kwargs` (IDictionary&lt;string, object&gt;): Keyword arguments. Allowed keys are {`clipnorm`, `clipvalue`, `lr`, `decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients by value; `decay` is included for backward compatibility to allow time-inverse decay of the learning rate; `lr` is included for backward compatibility, and `learning_rate` is recommended instead. (A clipping example follows this list.)
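For illustration, here is how the `clipnorm` and `clipvalue` keyword arguments look in the underlying Python `tf.keras` API (the clipping thresholds are arbitrary example values):

```python
import tensorflow as tf

# Clip each gradient tensor so that its norm does not exceed 1.0.
opt_by_norm = tf.keras.optimizers.Adamax(learning_rate=0.002, clipnorm=1.0)

# Or clip each gradient element to the range [-0.5, 0.5].
opt_by_value = tf.keras.optimizers.Adamax(learning_rate=0.002, clipvalue=0.5)
```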