LostTech.TensorFlow : API Documentation

Type Optimizer

Namespace tensorflow.train

Parent PythonObjectContainer

Interfaces Trackable, IOptimizer

Base class for optimizers.

This class defines the API to add Ops to train a model. You never use this class directly, but instead instantiate one of its subclasses such as `GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.

### Usage

Instantiate one of the subclasses and call `minimize()` to add training Ops to the graph (see the example at the bottom of this page). In the training program you will just have to run the returned Op.
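For instance, in graph mode the returned Op can be run directly (a minimal sketch, assuming `opt_op` was returned by `minimize()` as in the example at the bottom of this page):

    # Execute opt_op to do one step of training:
    opt_op.run()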

### Processing gradients before applying them

Calling `minimize()` takes care of both computing the gradients and applying them to the variables. If you want to process the gradients before applying them, you can instead use the optimizer in three steps:

1. Compute the gradients with `compute_gradients()`.
2. Process the gradients as you wish.
3. Apply the processed gradients with `apply_gradients()`.

Example:
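A minimal sketch of these three steps, assuming `loss` is the Tensor being minimized and `MyCapper` is a hypothetical function that processes (e.g. caps) a single gradient:

    # Create an optimizer.
    opt = GradientDescentOptimizer(learning_rate=0.1)

    # Compute the gradients for a list of variables.
    grads_and_vars = opt.compute_gradients(loss, <list of variables>)

    # grads_and_vars is a list of (gradient, variable) pairs. Process the
    # 'gradient' part as needed, for example to cap each gradient.
    capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]

    # Ask the optimizer to apply the processed gradients.
    opt.apply_gradients(capped_grads_and_vars)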

### Gating Gradients

Both `minimize()` and `compute_gradients()` accept a `gate_gradients` argument that controls the degree of parallelism during the application of the gradients.

The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.

`GATE_NONE`: Compute and apply gradients in parallel. This provides the maximum parallelism in execution, at the cost of some non-reproducibility in the results. For example, the two gradients of `matmul` depend on the input values: with `GATE_NONE` one of the gradients could be applied to one of the inputs _before_ the other gradient is computed, resulting in non-reproducible results.

`GATE_OP`: For each Op, make sure all gradients are computed before they are used. This prevents race conditions for Ops that generate gradients for multiple inputs where the gradients depend on the inputs.

`GATE_GRAPH`: Make sure all gradients for all variables are computed before any one of them is used. This provides the least parallelism but can be useful if you want to process all gradients before applying any of them.
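For instance, a minimal sketch that trades parallelism for reproducibility by passing `GATE_GRAPH` to `minimize()` (`loss` is assumed to be the Tensor being minimized):

    # Compute every gradient before applying any of them.
    opt = GradientDescentOptimizer(learning_rate=0.1)
    train_op = opt.minimize(loss, gate_gradients=Optimizer.GATE_GRAPH)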

### Slots

Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`, allocate and manage additional variables associated with the variables to train. These are called Slots. Slots have names and you can ask the optimizer for the names of the slots that it uses. Once you have a slot name you can ask the optimizer for the variable it created to hold the slot value.

This can be useful if you want to debug a training algorithm, report stats about the slots, etc.
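A minimal sketch, assuming `loss` is the Tensor being minimized and `var` is one of the tf.Variable objects being trained:

    # MomentumOptimizer keeps a "momentum" slot for each trained variable.
    opt = MomentumOptimizer(learning_rate=0.1, momentum=0.9)
    train_op = opt.minimize(loss)
    print(opt.get_slot_names())                      # expected: ['momentum']
    momentum_accum = opt.get_slot(var, "momentum")   # the slot's tf.Variable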
Show Example

    # Create an optimizer with the desired parameters.
    opt = GradientDescentOptimizer(learning_rate=0.1)
    # Add Ops to the graph to minimize a cost by updating a list of variables.
    # "cost" is a Tensor, and the list of variables contains tf.Variable
    # objects.
    opt_op = opt.minimize(cost, var_list=<list of variables>)


Public properties

object GATE_GRAPH_dyn get; set;

object GATE_NONE_dyn get; set;

object GATE_OP_dyn get; set;

object PythonObject get;

Public fields

int GATE_NONE

return int

int GATE_OP

return int

int GATE_GRAPH

return int