Type SyncReplicasOptimizer
Namespace tensorflow.train
Parent Optimizer
Interfaces ISyncReplicasOptimizer
Class to synchronize, aggregate gradients and pass them to the optimizer. This class is deprecated. For synchronous training, please use [Distribution Strategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute).
In a typical asynchronous training environment, it's common to have some
stale gradients. For example, with an N-replica asynchronous training setup,
gradients will be applied to the variables N times independently. Depending
on each replica's training speed, some gradients might be calculated from
copies of the variable from several steps back (N-1 steps on average). This
optimizer avoids stale gradients by collecting gradients from all replicas,
averaging them, then applying them to the variables in one shot, after
which replicas can fetch the new variables and continue.
The following accumulators/queue are created:
* N `gradient accumulators`, one per variable to train. Gradients are pushed
to them and the chief worker will wait until enough gradients are collected
and then average them before applying to variables. The accumulator will
drop all stale gradients (more details in the accumulator op).
* 1 `token` queue where the optimizer pushes the new global_step value after
all variables are updated.
The following local variable is created:
* `sync_rep_local_step`, one per replica. Compared against the global_step in
each accumulator to check for staleness of the gradients.
The optimizer adds nodes to the graph to collect gradients and pause the
trainers until variables are updated.
For the Parameter Server job:
1. An accumulator is created for each variable, and each replica pushes the
gradients into the accumulators instead of directly applying them to the
gradients into the accumulators instead of directly applying them to the
variables.
2. Each accumulator averages the gradients once enough of them
(replicas_to_aggregate) have been accumulated.
3. Apply the averaged gradients to the variables.
4. Only after all variables have been updated, increment the global step.
5. Only after step 4, `global_step` is pushed into the `token_queue`, once for
each worker replica. The workers can now fetch the global step, use it to
update their local_step variables, and start the next batch. Please note that
some workers can consume multiple minibatches, while some may not consume
even one. This is because each worker fetches minibatches as long as
a token exists. If one worker is stuck for some reason and does not
consume a token, another worker can use it.
For the replicas:
1. Start a step: fetch variables and compute gradients.
2. Once the gradients have been computed, push them into gradient
accumulators. Each accumulator will check the staleness and drop the stale.
3. After pushing all the gradients, dequeue an updated value of global_step
from the token queue and record that step to its local_step variable. Note
that this is effectively a barrier.
4. Start the next batch.
### Usage
In the training program, every worker will run the train_op as if not
synchronized.
To use SyncReplicasOptimizer with an `Estimator`, pass sync_replicas_hook
when calling fit.
Show Example
# Create any optimizer to update the variables, say a simple SGD:
opt = GradientDescentOptimizer(learning_rate=0.1)

# Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each
# step the optimizer collects 50 gradients before applying to variables.
# Note that if you want to have 2 backup replicas, you can change
# total_num_replicas=52 and make sure this number matches how many physical
# replicas you started in your job.
opt = tf.compat.v1.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=50,
                                               total_num_replicas=50)

# Some models have startup_delays to help stabilize the model but when using
# sync_replicas training, set it to 0.

# Now you can call `minimize()` or `compute_gradients()` and
# `apply_gradients()` normally
training_op = opt.minimize(total_loss, global_step=self.global_step)

# You can create the hook which handles initialization and queues.
sync_replicas_hook = opt.make_session_run_hook(is_chief)
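The TensorFlow docstring continues this example with the per-worker training loop. A minimal sketch, assuming `workers`, `worker_id`, `is_chief`, `training_op`, and `sync_replicas_hook` from the snippet above are in scope:
# In the training program, every worker runs the train_op as if it were not
# synchronized; the hook handles initialization and the token queue.
with tf.compat.v1.train.MonitoredTrainingSession(
    master=workers[worker_id].target,
    is_chief=is_chief,
    hooks=[sync_replicas_hook]) as mon_sess:
  while not mon_sess.should_stop():
    mon_sess.run(training_op)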
Methods
- get_chief_queue_runner
- get_chief_queue_runner_dyn
- get_init_tokens_op
- get_init_tokens_op_dyn
- make_session_run_hook
- make_session_run_hook_dyn
- NewDyn
Properties
Public instance methods
QueueRunner get_chief_queue_runner()
Returns the QueueRunner for the chief to execute. This includes the operations to synchronize replicas: aggregate gradients,
apply to variables, increment global step, insert tokens to token queue. Note that this can only be called after calling apply_gradients() which
actually generates this queuerunner.
Returns
-
QueueRunner
- A `QueueRunner` for chief to execute.
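A minimal sketch of the legacy, pre-hook chief setup (assuming `opt` is the SyncReplicasOptimizer, `sv` a `tf.compat.v1.train.Supervisor`, and `sess` the session it created; `make_session_run_hook` covers this automatically):
# apply_gradients()/minimize() must already have been called, otherwise the
# queue runner does not exist yet.
chief_queue_runner = opt.get_chief_queue_runner()
init_token_op = opt.get_init_tokens_op()

if is_chief:
  # The chief starts the synchronization queue runner and seeds the token
  # queue so replicas can begin fetching steps.
  sv.start_queue_runners(sess, [chief_queue_runner])
  sess.run(init_token_op)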
object get_chief_queue_runner_dyn()
Returns the QueueRunner for the chief to execute. This includes the operations to synchronize replicas: aggregate gradients,
apply to variables, increment global step, insert tokens to token queue. Note that this can only be called after calling apply_gradients() which
actually generates this queuerunner.
Returns
-
object
- A `QueueRunner` for chief to execute.
object get_init_tokens_op(int num_tokens)
Returns the op to fill the sync_token_queue with the tokens. This is supposed to be executed in the beginning of the chief/sync thread
so that even if the total_num_replicas is less than replicas_to_aggregate,
the model can still proceed as the replicas can compute multiple steps per
variable update. Make sure:
`num_tokens >= replicas_to_aggregate - total_num_replicas`.
Parameters
-
int
num_tokens - Number of tokens to add to the queue.
Returns
-
object
- An op for the chief/sync replica to fill the token queue.
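For illustration, when total_num_replicas is smaller than replicas_to_aggregate the queue needs enough extra tokens for the faster replicas to compute additional steps per update. A short sketch with a hypothetical configuration (assuming `opt` is the SyncReplicasOptimizer created earlier):
# Hypothetical configuration: 40 workers must together contribute 50 gradients
# per variable update, so at least 10 extra tokens are required.
replicas_to_aggregate = 50
total_num_replicas = 40
init_token_op = opt.get_init_tokens_op(
    num_tokens=replicas_to_aggregate - total_num_replicas)  # >= 10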
object get_init_tokens_op_dyn(ImplicitContainer<T> num_tokens)
Returns the op to fill the sync_token_queue with the tokens. This is supposed to be executed in the beginning of the chief/sync thread
so that even if the total_num_replicas is less than replicas_to_aggregate,
the model can still proceed as the replicas can compute multiple steps per
variable update. Make sure:
`num_tokens >= replicas_to_aggregate - total_num_replicas`.
Parameters
-
ImplicitContainer<T>
num_tokens - Number of tokens to add to the queue.
Returns
-
object
- An op for the chief/sync replica to fill the token queue.
object make_session_run_hook(bool is_chief, int num_tokens)
Creates a hook to handle SyncReplicasHook ops such as initialization.
object make_session_run_hook_dyn(object is_chief, ImplicitContainer<T> num_tokens)
Creates a hook to handle SyncReplicasHook ops such as initialization.
Public static methods
SyncReplicasOptimizer NewDyn(object opt, object replicas_to_aggregate, object total_num_replicas, object variable_averages, object variables_to_average, ImplicitContainer<T> use_locking, ImplicitContainer<T> name)
Construct a sync_replicas optimizer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
The `SyncReplicaOptimizer` class is deprecated. For synchronous training, please use [Distribution Strategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute).
Parameters
-
object
opt - The actual optimizer that will be used to compute and apply the gradients. Must be one of the Optimizer classes.
-
object
replicas_to_aggregate - Number of replicas to aggregate for each variable update.
-
object
total_num_replicas - Total number of tasks/workers/replicas, could be different from replicas_to_aggregate. If total_num_replicas > replicas_to_aggregate: it is backup_replicas + replicas_to_aggregate. If total_num_replicas < replicas_to_aggregate: Replicas compute multiple batches per update to variables.
-
object
variable_averages - Optional `ExponentialMovingAverage` object, used to maintain moving averages for the variables passed in `variables_to_average`.
-
object
variables_to_average - A list of variables that need to be averaged. Only needed if variable_averages is passed in.
-
ImplicitContainer<T>
use_locking - If True use locks for update operation.
-
ImplicitContainer<T>
name - string. Optional name of the returned operation.
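As a sketch of how these parameters map onto the underlying Python constructor (an assumption about the binding; the ExponentialMovingAverage setup here is illustrative):
# Wrap SGD with 2 backup replicas (total_num_replicas > replicas_to_aggregate)
# and maintain moving averages of all trainable variables.
ema = tf.compat.v1.train.ExponentialMovingAverage(decay=0.999)
opt = tf.compat.v1.train.SyncReplicasOptimizer(
    tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1),
    replicas_to_aggregate=50,
    total_num_replicas=52,
    variable_averages=ema,
    variables_to_average=tf.compat.v1.trainable_variables(),
    use_locking=False,
    name="sync_replicas")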