LostTech.TensorFlow : API Documentation

Type AdamParameters

Namespace tensorflow.tpu.experimental

Parent _OptimizationParameters

Interfaces IAdamParameters

Optimization parameters for Adam with TPU embeddings.

Pass this to tf.estimator.tpu.experimental.EmbeddingConfigSpec via the `optimization_parameters` argument to set the optimizer and its parameters. See the documentation for tf.estimator.tpu.experimental.EmbeddingConfigSpec for more details.

```
estimator = tf.estimator.tpu.TPUEstimator(
    ...
    embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
        ...
        optimization_parameters=tf.tpu.experimental.AdamParameters(0.1),
        ...))
```


Public static methods

AdamParameters NewDyn(object learning_rate, ImplicitContainer<T> beta1, ImplicitContainer<T> beta2, ImplicitContainer<T> epsilon, ImplicitContainer<T> lazy_adam, ImplicitContainer<T> sum_inside_sqrt, ImplicitContainer<T> use_gradient_accumulation, object clip_weight_min, object clip_weight_max)

Optimization parameters for Adam.
Parameters
object learning_rate
A floating point value. The learning rate.
ImplicitContainer<T> beta1
A float value. The exponential decay rate for the 1st moment estimates.
ImplicitContainer<T> beta2
A float value. The exponential decay rate for the 2nd moment estimates.
ImplicitContainer<T> epsilon
A small constant for numerical stability.
ImplicitContainer<T> lazy_adam
Whether to use lazy Adam instead of Adam, updating only the embedding rows touched in a step. Lazy Adam trains faster. Please see `optimization_parameters.proto` for details.
ImplicitContainer<T> sum_inside_sqrt
When `true`, the epsilon term is added inside the square root in the Adam update, which improves training speed on TPU. Please see `optimization_parameters.proto` for details.
ImplicitContainer<T> use_gradient_accumulation
Setting this to `False` makes embedding gradient computation less accurate but faster. Please see `optimization_parameters.proto` for details.
object clip_weight_min
The minimum value to clip the weights by; `None` means -infinity.
object clip_weight_max
The maximum value to clip the weights by; `None` means +infinity.
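Taken together, `beta1`, `beta2`, `epsilon`, `sum_inside_sqrt`, and the clip bounds determine the per-weight update. The pure-Python sketch below illustrates the standard Adam recurrences under these parameters; it is an illustration only, not the actual TPU embedding implementation, and it omits bias correction and gradient accumulation. The `sum_inside_sqrt` branch follows the common reading of that option: the epsilon term moves inside the square root.

```python
import math

def adam_step(w, g, m, v, learning_rate=0.1, beta1=0.9, beta2=0.999,
              epsilon=1e-8, sum_inside_sqrt=True,
              clip_weight_min=None, clip_weight_max=None):
    """One Adam step for a single scalar weight (illustrative sketch)."""
    m = beta1 * m + (1.0 - beta1) * g      # 1st moment: running mean of gradients
    v = beta2 * v + (1.0 - beta2) * g * g  # 2nd moment: running mean of squared gradients
    if sum_inside_sqrt:
        # epsilon folded inside the square root (the faster TPU-friendly form)
        w = w - learning_rate * m / math.sqrt(v + epsilon * epsilon)
    else:
        w = w - learning_rate * m / (math.sqrt(v) + epsilon)
    # None means no clipping in that direction (-infinity / +infinity)
    if clip_weight_min is not None:
        w = max(w, clip_weight_min)
    if clip_weight_max is not None:
        w = min(w, clip_weight_max)
    return w, m, v
```

Starting from zero accumulators, a positive gradient pushes the weight down; the clip bounds then cap the result, so `clip_weight_min=0.9` keeps the weight from falling below 0.9 regardless of step size.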

Public properties

double beta1 get; set;

double beta2 get; set;

object clip_weight_max get; set;

object clip_weight_min get; set;

double epsilon get; set;

bool lazy_adam get; set;

double learning_rate get; set;

object PythonObject get;

bool sum_inside_sqrt get; set;

bool use_gradient_accumulation get; set;