Type tf.train
Namespace tensorflow
Methods
- add_queue_runner
- add_queue_runner_dyn
- assert_global_step
- basic_train_loop
- basic_train_loop_dyn
- batch
- batch_dyn
- batch_join
- batch_join_dyn
- checkpoint_exists
- checkpoints_iterator
- cosine_decay
- cosine_decay_dyn
- cosine_decay_restarts
- cosine_decay_restarts_dyn
- do_quantize_training_on_graphdef
- do_quantize_training_on_graphdef_dyn
- exponential_decay
- exponential_decay_dyn
- export_meta_graph
- export_meta_graph_dyn
- generate_checkpoint_state_proto
- generate_checkpoint_state_proto_dyn
- get_checkpoint_mtimes
- get_checkpoint_mtimes_dyn
- get_checkpoint_state
- get_checkpoint_state_dyn
- get_global_step
- global_step
- global_step_dyn
- import_meta_graph
- import_meta_graph_dyn
- input_producer
- input_producer_dyn
- inverse_time_decay
- inverse_time_decay_dyn
- latest_checkpoint
- latest_checkpoint_dyn
- limit_epochs
- limit_epochs_dyn
- linear_cosine_decay
- linear_cosine_decay_dyn
- list_variables
- maybe_batch
- maybe_batch_dyn
- maybe_batch_join
- maybe_batch_join_dyn
- maybe_shuffle_batch
- maybe_shuffle_batch_dyn
- maybe_shuffle_batch_join
- maybe_shuffle_batch_join_dyn
- MonitoredTrainingSession
- MonitoredTrainingSession_dyn
- natural_exp_decay
- natural_exp_decay_dyn
- noisy_linear_cosine_decay
- noisy_linear_cosine_decay_dyn
- piecewise_constant
- piecewise_constant_dyn
- polynomial_decay
- polynomial_decay_dyn
- range_input_producer
- range_input_producer_dyn
- remove_checkpoint
- remove_checkpoint_dyn
- replica_device_setter
- replica_device_setter_dyn
- sdca_fprint
- sdca_fprint_dyn
- sdca_optimizer
- sdca_optimizer_dyn
- sdca_shrink_l1
- sdca_shrink_l1_dyn
- shuffle_batch
- shuffle_batch_dyn
- shuffle_batch_join
- shuffle_batch_join_dyn
- slice_input_producer
- slice_input_producer_dyn
- start_queue_runners
- start_queue_runners_dyn
- string_input_producer
- string_input_producer_dyn
- summary_iterator
- summary_iterator_dyn
- update_checkpoint_state
- update_checkpoint_state_dyn
- warm_start
- warm_start_dyn
Properties
- add_queue_runner_fn
- assert_global_step_fn
- basic_train_loop_fn
- batch_fn
- batch_join_fn
- checkpoint_exists_fn
- checkpoints_iterator_fn
- cosine_decay_fn
- cosine_decay_restarts_fn
- create_global_step_fn
- do_quantize_training_on_graphdef_fn
- exponential_decay_fn
- export_meta_graph_fn
- generate_checkpoint_state_proto_fn
- get_checkpoint_mtimes_fn
- get_checkpoint_state_fn
- get_global_step_fn
- get_or_create_global_step_fn
- global_step_fn
- import_meta_graph_fn
- init_from_checkpoint_fn
- input_producer_fn
- inverse_time_decay_fn
- latest_checkpoint_fn
- limit_epochs_fn
- linear_cosine_decay_fn
- list_variables_fn
- load_checkpoint_fn
- load_variable_fn
- maybe_batch_fn
- maybe_batch_join_fn
- maybe_shuffle_batch_fn
- maybe_shuffle_batch_join_fn
- MonitoredTrainingSession_fn
- natural_exp_decay_fn
- noisy_linear_cosine_decay_fn
- piecewise_constant_fn
- polynomial_decay_fn
- range_input_producer_fn
- remove_checkpoint_fn
- replica_device_setter_fn
- sdca_fprint_fn
- sdca_optimizer_fn
- sdca_shrink_l1_fn
- shuffle_batch_fn
- shuffle_batch_join_fn
- slice_input_producer_fn
- start_queue_runners_fn
- string_input_producer_fn
- summary_iterator_fn
- update_checkpoint_state_fn
- warm_start_fn
Public static methods
void add_queue_runner(QueueRunner qr, ImplicitContainer<T> collection)
Adds a `QueueRunner` to a collection in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
When building a complex model that uses many queues it is often difficult to gather all the queue runners that need to be run. This convenience function allows you to add a queue runner to a well-known collection in the graph. The companion method `start_queue_runners()` can be used to start threads for all the collected queue runners.
Parameters
-
QueueRunner
qr - A `QueueRunner`.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to add the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.
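A minimal sketch of the pattern this function supports, written against the Python `tf.compat.v1` API (the queue contents here are assumptions, not part of this API's docs):
queue = tf.compat.v1.FIFOQueue(capacity=32, dtypes=[tf.float32])
enqueue_op = queue.enqueue(example_tensor)  # example_tensor assumed to be defined
qr = tf.compat.v1.train.QueueRunner(queue, [enqueue_op] * 2)  # two enqueue threads
tf.compat.v1.train.add_queue_runner(qr)  # registers under GraphKeys.QUEUE_RUNNERS
with tf.compat.v1.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.compat.v1.train.start_queue_runners(sess=sess, coord=coord)
    # ... consume queue.dequeue() here ...
    coord.request_stop()
    coord.join(threads)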
object add_queue_runner_dyn(object qr, ImplicitContainer<T> collection)
Adds a `QueueRunner` to a collection in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
When building a complex model that uses many queues it is often difficult to gather all the queue runners that need to be run. This convenience function allows you to add a queue runner to a well-known collection in the graph. The companion method `start_queue_runners()` can be used to start threads for all the collected queue runners.
Parameters
-
object
qr - A `QueueRunner`.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to add the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.
void assert_global_step(IEnumerable<object> global_step_tensor)
Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.
Parameters
-
IEnumerable<object>
global_step_tensor - `Tensor` to test.
void assert_global_step(object global_step_tensor)
Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.
Parameters
-
object
global_step_tensor - `Tensor` to test.
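A short sketch of what passes this check, assuming the `tf.compat.v1` Python API:
global_step = tf.compat.v1.train.get_or_create_global_step()
tf.compat.v1.train.assert_global_step(global_step)  # scalar integer variable: passes
# A float or non-scalar tensor would raise an error here instead.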
void basic_train_loop(Supervisor supervisor, PythonFunctionContainer train_step_fn, Nullable<ValueTuple<Supervisor, string>> args, IDictionary<string, string> kwargs, string master)
Basic loop to train a model. Calls `train_step_fn` in a loop to train a model. The function is called as `train_step_fn(session, *args, **kwargs)`; it is passed a `tf.compat.v1.Session` in addition to `args` and `kwargs`. The function typically runs one training step in the session.
Parameters
-
Supervisor
supervisor - `tf.compat.v1.train.Supervisor` to run the training services.
-
PythonFunctionContainer
train_step_fn - Callable to execute one training step. Called repeatedly as `train_step_fn(session, *args, **kwargs)`.
-
Nullable<ValueTuple<Supervisor, string>>
args - Optional positional arguments passed to `train_step_fn`.
-
IDictionary<string, string>
kwargs - Optional keyword arguments passed to `train_step_fn`.
-
string
master - Master to use to create the training session. Defaults to `""` which causes the session to be created in the local process.
Show Example
train_step_fn(session, *args, **kwargs)
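For orientation, a minimal sketch of wiring this loop to a `Supervisor` (the log directory and `train_op` are assumptions, not part of this API's docs):
def train_step_fn(session, *args, **kwargs):
    session.run(train_op)  # train_op assumed to be built elsewhere in the graph

sv = tf.compat.v1.train.Supervisor(logdir="/tmp/train_logs")
tf.compat.v1.train.basic_train_loop(sv, train_step_fn)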
object basic_train_loop_dyn(object supervisor, object train_step_fn, object args, object kwargs, ImplicitContainer<T> master)
Basic loop to train a model. Calls `train_step_fn` in a loop to train a model. The function is called as `train_step_fn(session, *args, **kwargs)`; it is passed a `tf.compat.v1.Session` in addition to `args` and `kwargs`. The function typically runs one training step in the session.
Parameters
-
object
supervisor - `tf.compat.v1.train.Supervisor` to run the training services.
-
object
train_step_fn - Callable to execute one training step. Called repeatedly as `train_step_fn(session, *args, **kwargs)`.
-
object
args - Optional positional arguments passed to `train_step_fn`.
-
object
kwargs - Optional keyword arguments passed to `train_step_fn`.
-
ImplicitContainer<T>
master - Master to use to create the training session. Defaults to `""` which causes the session to be created in the local process.
Show Example
train_step_fn(session, *args, **kwargs)
object batch(IEnumerable<IGraphNodeBase> tensors, Nullable<int> batch_size, int num_threads, Nullable<int> capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, string shared_name, string name)
Creates batches of tensors in `tensors`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The argument `tensors` can be a list or a dictionary of tensors. The value returned by the function will be of the same type as `tensors`.
This function is implemented using a queue. A `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single example. An input tensor with shape `[x, y, z]` will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors` should have the same size in the first dimension. If an input tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IGraphNodeBase>
tensors - The list or dictionary of tensors to enqueue.
-
Nullable<int>
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
Nullable<int>
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors` (except if the input is a list of one element, then it returns a tensor, not a list).
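A side-by-side sketch of the deprecated call and its recommended `tf.data` replacement, in the Python `tf.compat.v1` API (the `image`/`label` tensors and the source `dataset` are assumptions):
# Deprecated queue-based batching:
image_batch, label_batch = tf.compat.v1.train.batch(
    [image, label], batch_size=32, num_threads=4, capacity=320)
# Recommended tf.data equivalent:
batched = dataset.batch(32)  # or dataset.padded_batch(32, ...) if dynamic_pad=True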
object batch(IDictionary<string, string> tensors, Nullable<int> batch_size, int num_threads, Nullable<int> capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, string shared_name, string name)
Creates batches of tensors in `tensors`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The argument `tensors` can be a list or a dictionary of tensors. The value returned by the function will be of the same type as `tensors`.
This function is implemented using a queue. A `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single example. An input tensor with shape `[x, y, z]` will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors` should have the same size in the first dimension. If an input tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
IDictionary<string, string>
tensors - The list or dictionary of tensors to enqueue.
-
Nullable<int>
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
Nullable<int>
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors` (except if the input is a list of one element, then it returns a tensor, not a list).
object batch_dyn(object tensors, object batch_size, ImplicitContainer<T> num_threads, ImplicitContainer<T> capacity, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> dynamic_pad, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Creates batches of tensors in `tensors`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The argument `tensors` can be a list or a dictionary of tensors. The value returned by the function will be of the same type as `tensors`.
This function is implemented using a queue. A `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single example. An input tensor with shape `[x, y, z]` will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors` should have the same size in the first dimension. If an input tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
object
tensors - The list or dictionary of tensors to enqueue.
-
object
batch_size - The new batch size pulled from the queue.
-
ImplicitContainer<T>
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
ImplicitContainer<T>
capacity - An integer. The maximum number of elements in the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
ImplicitContainer<T>
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors` (except if the input is a list of one element, then it returns a tensor, not a list).
object batch_join(IEnumerable<IDictionary<string, string>> tensors_list, IGraphNodeBase batch_size, int capacity, Nullable<int> enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, string shared_name, string name)
Runs a list of tensors to fill a queue to create batches of examples. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The `tensors_list` argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the `tensors` argument of `tf.compat.v1.train.batch()`.
WARNING: This function is nondeterministic, since it starts a separate thread for each tensor, and enqueues a different list of tensors in different threads.
Implemented using a queue -- a `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
`len(tensors_list)` threads will be started, with thread `i` enqueuing the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match `tensors_list[i2][j]` in type and shape, except in the first dimension if `enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed to represent a single example. An input tensor `x` will be output as a tensor with shape `[batch_size] + x.shape`.
If `enqueue_many` is `True`, `tensors_list[i]` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors_list[i]` should have the same size in the first dimension. The slices of any input tensor `x` are treated as examples, and the output tensors will have shape `[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors_list` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IGraphNodeBase
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
Nullable<int>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
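A sketch of the legacy multi-thread pattern next to its `tf.data` replacement, in the Python `tf.compat.v1` API (the per-thread `read_example` op and the `filenames` list are assumptions):
# Deprecated: one enqueue thread per element of tensors_list.
example_list = [read_example(filename_queue) for _ in range(4)]
images, labels = tf.compat.v1.train.batch_join(example_list, batch_size=32)
# Recommended tf.data equivalent:
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=4).batch(32)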
object batch_join(IEnumerable<IDictionary<string, string>> tensors_list, IGraphNodeBase batch_size, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, string shared_name, string name)
Runs a list of tensors to fill a queue to create batches of examples. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The `tensors_list` argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the `tensors` argument of `tf.compat.v1.train.batch()`.
WARNING: This function is nondeterministic, since it starts a separate thread for each tensor, and enqueues a different list of tensors in different threads.
Implemented using a queue -- a `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
`len(tensors_list)` threads will be started, with thread `i` enqueuing the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match `tensors_list[i2][j]` in type and shape, except in the first dimension if `enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed to represent a single example. An input tensor `x` will be output as a tensor with shape `[batch_size] + x.shape`.
If `enqueue_many` is `True`, `tensors_list[i]` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors_list[i]` should have the same size in the first dimension. The slices of any input tensor `x` are treated as examples, and the output tensors will have shape `[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors_list` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IGraphNodeBase
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object batch_join(IEnumerable<IDictionary<string, string>> tensors_list, int batch_size, int capacity, Nullable<int> enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, string shared_name, string name)
Runs a list of tensors to fill a queue to create batches of examples. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The `tensors_list` argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the `tensors` argument of `tf.compat.v1.train.batch()`.
WARNING: This function is nondeterministic, since it starts a separate thread for each tensor, and enqueues a different list of tensors in different threads.
Implemented using a queue -- a `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
`len(tensors_list)` threads will be started, with thread `i` enqueuing the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match `tensors_list[i2][j]` in type and shape, except in the first dimension if `enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed to represent a single example. An input tensor `x` will be output as a tensor with shape `[batch_size] + x.shape`.
If `enqueue_many` is `True`, `tensors_list[i]` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors_list[i]` should have the same size in the first dimension. The slices of any input tensor `x` are treated as examples, and the output tensors will have shape `[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors_list` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
Nullable<int>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object batch_join(IEnumerable<IDictionary<string, string>> tensors_list, int batch_size, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, string shared_name, string name)
Runs a list of tensors to fill a queue to create batches of examples. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The `tensors_list` argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the `tensors` argument of `tf.compat.v1.train.batch()`.
WARNING: This function is nondeterministic, since it starts a separate thread for each tensor, and enqueues a different list of tensors in different threads.
Implemented using a queue -- a `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
`len(tensors_list)` threads will be started, with thread `i` enqueuing the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match `tensors_list[i2][j]` in type and shape, except in the first dimension if `enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed to represent a single example. An input tensor `x` will be output as a tensor with shape `[batch_size] + x.shape`.
If `enqueue_many` is `True`, `tensors_list[i]` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors_list[i]` should have the same size in the first dimension. The slices of any input tensor `x` are treated as examples, and the output tensors will have shape `[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors_list` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object batch_join_dyn(object tensors_list, object batch_size, ImplicitContainer<T> capacity, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> dynamic_pad, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Runs a list of tensors to fill a queue to create batches of examples. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
The `tensors_list` argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the `tensors` argument of `tf.compat.v1.train.batch()`.
WARNING: This function is nondeterministic, since it starts a separate thread for each tensor, and enqueues a different list of tensors in different threads.
Implemented using a queue -- a `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
`len(tensors_list)` threads will be started, with thread `i` enqueuing the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match `tensors_list[i2][j]` in type and shape, except in the first dimension if `enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed to represent a single example. An input tensor `x` will be output as a tensor with shape `[batch_size] + x.shape`.
If `enqueue_many` is `True`, `tensors_list[i]` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors_list[i]` should have the same size in the first dimension. The slices of any input tensor `x` are treated as examples, and the output tensors will have shape `[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors_list` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the tensors is known, but individual dimensions may have value `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Parameters
-
object
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
object
batch_size - An integer. The new batch size pulled from the queue.
-
ImplicitContainer<T>
capacity - An integer. The maximum number of elements in the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
ImplicitContainer<T>
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
bool checkpoint_exists(Byte[] checkpoint_prefix)
Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
This is the recommended way to check if a checkpoint exists, since it takes into account the naming difference between V1 and V2 formats.
Parameters
-
Byte[]
checkpoint_prefix - The prefix of a V1 or V2 checkpoint, with V2 taking priority. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
Returns
-
bool
- A bool, true if a checkpoint referred to by `checkpoint_prefix` exists.
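A brief usage sketch in the Python `tf.compat.v1` API (the directory path, `saver`, and `sess` are assumptions):
ckpt = tf.train.latest_checkpoint("/tmp/train_dir")
if ckpt is not None and tf.compat.v1.train.checkpoint_exists(ckpt):
    saver.restore(sess, ckpt)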
bool checkpoint_exists(string checkpoint_prefix)
Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
This is the recommended way to check if a checkpoint exists, since it takes into account the naming difference between V1 and V2 formats.
Parameters
-
string
checkpoint_prefix - The prefix of a V1 or V2 checkpoint, with V2 taking priority. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
Returns
-
bool
- A bool, true if a checkpoint referred to by `checkpoint_prefix` exists.
bool checkpoint_exists(IGraphNodeBase checkpoint_prefix)
Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
This is the recommended way to check if a checkpoint exists, since it takes into account the naming difference between V1 and V2 formats.
Parameters
-
IGraphNodeBase
checkpoint_prefix - The prefix of a V1 or V2 checkpoint, with V2 taking priority. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
Returns
-
bool
- A bool, true if a checkpoint referred to by `checkpoint_prefix` exists.
bool checkpoint_exists(IEnumerable<object> checkpoint_prefix)
Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
This is the recommended way to check if a checkpoint exists, since it takes into account the naming difference between V1 and V2 formats.
Parameters
-
IEnumerable<object>
checkpoint_prefix - The prefix of a V1 or V2 checkpoint, with V2 taking priority. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
Returns
-
bool
- A bool, true if a checkpoint referred to by `checkpoint_prefix` exists.
IEnumerator<object> checkpoints_iterator(string checkpoint_dir, int min_interval_secs, double timeout, PythonFunctionContainer timeout_fn)
Continuously yield new checkpoint files as they appear.
The iterator only checks for new checkpoints when control flow has been reverted to it. This means it can miss checkpoints if your code takes longer to run between iterations than `min_interval_secs` or the interval at which new checkpoints are written.
The `timeout` argument is the maximum number of seconds to block waiting for a new checkpoint. It is used in combination with the `timeout_fn` as follows:
- If the timeout expires and no `timeout_fn` was specified, the iterator stops yielding.
- If a `timeout_fn` was specified, that function is called; if it returns a true boolean value the iterator stops yielding.
- If the function returns a false boolean value, the iterator resumes the wait for new checkpoints. At this point the timeout logic applies again.
This behavior gives control to callers on what to do if checkpoints do not come fast enough or stop being generated. For example, if callers have a way to detect that the training has stopped and know that no new checkpoints will be generated, they can provide a `timeout_fn` that returns `True` when the training has stopped. If they know that the training is still going on they return `False` instead.
Parameters
-
string
checkpoint_dir - The directory in which checkpoints are saved.
-
int
min_interval_secs - The minimum number of seconds between yielding checkpoints.
-
double
timeout - The maximum number of seconds to wait between checkpoints. If left as `None`, then the process will wait indefinitely.
-
PythonFunctionContainer
timeout_fn - Optional function to call after a timeout. If the function returns True, then it means that no new checkpoints will be generated and the iterator will exit. The function is called with no arguments.
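A sketch of a continuous-evaluation loop built on this iterator (the directory and the `evaluate`/`training_finished` helpers are assumptions):
for ckpt_path in tf.train.checkpoints_iterator(
        "/tmp/train_dir", min_interval_secs=60, timeout=600,
        timeout_fn=lambda: training_finished()):  # stop once training is known to be done
    evaluate(ckpt_path)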
object cosine_decay(double learning_rate, int global_step, int decay_steps, double alpha, string name)
Applies cosine decay to the learning rate.
See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983
When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step.
The function returns the decayed learning rate, computed as shown under Show Example below.
Example usage:
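A minimal usage sketch, assuming `learning_rate` and `global_step` are already defined:
decay_steps = 1000
lr_decayed = tf.compat.v1.train.cosine_decay(learning_rate, global_step, decay_steps)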
Parameters
-
double
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
double
alpha - A scalar `float32` or `float64` Tensor or a Python number. Minimum learning rate value as a fraction of learning_rate.
-
string
name - String. Optional name of the operation. Defaults to 'CosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
cosine_decay = 0.5 * (1 + cos(pi * global_step / decay_steps))
decayed = (1 - alpha) * cosine_decay + alpha
decayed_learning_rate = learning_rate * decayed
object cosine_decay_dyn(object learning_rate, object global_step, object decay_steps, ImplicitContainer<T> alpha, object name)
Applies cosine decay to the learning rate.
See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983
When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step.
The function returns the decayed learning rate, computed as shown under Show Example below. For a usage sketch, see `cosine_decay` above.
Parameters
-
object
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
object
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
object
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
ImplicitContainer<T>
alpha - A scalar `float32` or `float64` Tensor or a Python number. Minimum learning rate value as a fraction of learning_rate.
-
object
name - String. Optional name of the operation. Defaults to 'CosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
cosine_decay = 0.5 * (1 + cos(pi * global_step / decay_steps))
decayed = (1 - alpha) * cosine_decay + alpha
decayed_learning_rate = learning_rate * decayed
object cosine_decay_restarts(double learning_rate, int global_step, int first_decay_steps, double t_mul, double m_mul, double alpha, string name)
Applies cosine decay with restarts to the learning rate.
See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983
When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a cosine decay function with restarts to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step.
The function returns the decayed learning rate while taking into account possible warm restarts. The learning rate multiplier first decays from 1 to `alpha` for `first_decay_steps` steps. Then, a warm restart is performed. Each new warm restart runs for `t_mul` times more steps and with an initial learning rate `m_mul` times smaller.
Example usage: see Show Example below.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
int
first_decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
double
t_mul - A scalar `float32` or `float64` `Tensor` or a Python number. Used to derive the number of iterations in the i-th period.
-
double
m_mul - A scalar `float32` or `float64` `Tensor` or a Python number. Used to derive the initial learning rate of the i-th period.
-
double
alpha - A scalar `float32` or `float64` Tensor or a Python number. Minimum learning rate value as a fraction of the learning_rate.
-
string
name - String. Optional name of the operation. Defaults to 'SGDRDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
first_decay_steps = 1000
lr_decayed = cosine_decay_restarts(learning_rate, global_step, first_decay_steps)
object cosine_decay_restarts(double learning_rate, IGraphNodeBase global_step, int first_decay_steps, double t_mul, double m_mul, double alpha, string name)
Applies cosine decay with restarts to the learning rate.
See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983
When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a cosine decay function with restarts to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step.
The function returns the decayed learning rate while taking into account possible warm restarts. The learning rate multiplier first decays from 1 to `alpha` for `first_decay_steps` steps. Then, a warm restart is performed. Each new warm restart runs for `t_mul` times more steps and with an initial learning rate `m_mul` times smaller.
Example usage: see Show Example below.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
IGraphNodeBase
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
int
first_decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
double
t_mul - A scalar `float32` or `float64` `Tensor` or a Python number. Used to derive the number of iterations in the i-th period.
-
double
m_mul - A scalar `float32` or `float64` `Tensor` or a Python number. Used to derive the initial learning rate of the i-th period.
-
double
alpha - A scalar `float32` or `float64` Tensor or a Python number. Minimum learning rate value as a fraction of the learning_rate.
-
string
name - String. Optional name of the operation. Defaults to 'SGDRDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
first_decay_steps = 1000
lr_decayed = cosine_decay_restarts(learning_rate, global_step, first_decay_steps)
object cosine_decay_restarts_dyn(object learning_rate, object global_step, object first_decay_steps, ImplicitContainer<T> t_mul, ImplicitContainer<T> m_mul, ImplicitContainer<T> alpha, object name)
Applies cosine decay with restarts to the learning rate.
See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983
When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a cosine decay function with restarts to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step.
The function returns the decayed learning rate while taking into account possible warm restarts. The learning rate multiplier first decays from 1 to `alpha` for `first_decay_steps` steps. Then, a warm restart is performed. Each new warm restart runs for `t_mul` times more steps and with an initial learning rate `m_mul` times smaller.
Example usage: see Show Example below.
Parameters
-
object
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
object
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
object
first_decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
ImplicitContainer<T>
t_mul - A scalar `float32` or `float64` `Tensor` or a Python number. Used to derive the number of iterations in the i-th period.
-
ImplicitContainer<T>
m_mul - A scalar `float32` or `float64` `Tensor` or a Python number. Used to derive the initial learning rate of the i-th period.
-
ImplicitContainer<T>
alpha - A scalar `float32` or `float64` Tensor or a Python number. Minimum learning rate value as a fraction of the learning_rate.
-
object
name - String. Optional name of the operation. Defaults to 'SGDRDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
first_decay_steps = 1000
lr_decayed = cosine_decay_restarts(learning_rate, global_step, first_decay_steps)
object do_quantize_training_on_graphdef(object input_graph, int num_bits)
A general quantization scheme is being developed in
tf.contrib.quantize
. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
GraphDef quantized training rewriter is deprecated in the long term. Consider using the general quantization scheme in tf.contrib.quantize instead, though since it is in the tf.contrib namespace,
it is not subject to backward compatibility guarantees.
object do_quantize_training_on_graphdef_dyn(object input_graph, object num_bits)
A general quantization scheme is being developed in
tf.contrib.quantize
. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
GraphDef quantized training rewriter is deprecated in the long term. Consider using the general quantization scheme in tf.contrib.quantize instead, though since it is in the tf.contrib namespace,
it is not subject to backward compatibility guarantees.
object exponential_decay(double learning_rate, IGraphNodeBase global_step, int decay_steps, double decay_rate, bool staircase, string name)
Applies exponential decay to the learning rate. When training a model, it is often recommended to lower the learning rate as
the training progresses. This function applies an exponential decay function
to a provided initial learning rate. It requires a `global_step` value to
compute the decayed learning rate. You can just pass a TensorFlow variable
that you increment at each training step. The function returns the decayed learning rate. It is computed as:
If the argument `staircase` is `True`, then `global_step / decay_steps` is an
integer division and the decayed learning rate follows a staircase function. Example: decay every 100000 steps with a base of 0.96:
Parameters
-
double
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
IGraphNodeBase
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation. Must not be negative.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Must be positive. See the decay computation above.
-
double
decay_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The decay rate.
-
bool
staircase - Boolean. If `True`, decay the learning rate at discrete intervals.
-
string
name - String. Optional name of the operation. Defaults to 'ExponentialDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
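A fuller sketch of the example named above (hedged: this follows the TF1-style Python usage that these docstrings mirror; the quadratic loss is a stand-in for a real model):
import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
# Decay every 100000 steps with a base of 0.96.
learning_rate = tf.compat.v1.train.exponential_decay(
    starter_learning_rate, global_step, 100000, 0.96, staircase=True)
# Any scalar loss works; a trivial quadratic stands in for a real model.
w = tf.Variable(5.0)
loss = tf.square(w)
# Passing global_step to minimize() increments it once per training step.
train_op = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)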
object exponential_decay(double learning_rate, int global_step, int decay_steps, double decay_rate, bool staircase, string name)
Applies exponential decay to the learning rate. When training a model, it is often recommended to lower the learning rate as
the training progresses. This function applies an exponential decay function
to a provided initial learning rate. It requires a `global_step` value to
compute the decayed learning rate. You can just pass a TensorFlow variable
that you increment at each training step. The function returns the decayed learning rate. It is computed as:
If the argument `staircase` is `True`, then `global_step / decay_steps` is an
integer division and the decayed learning rate follows a staircase function. Example: decay every 100000 steps with a base of 0.96:
Parameters
-
double
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation. Must not be negative.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Must be positive. See the decay computation above.
-
double
decay_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The decay rate.
-
bool
staircase - Boolean. If `True`, decay the learning rate at discrete intervals.
-
string
name - String. Optional name of the operation. Defaults to 'ExponentialDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
object exponential_decay(double learning_rate, ResourceVariable global_step, int decay_steps, double decay_rate, bool staircase, string name)
Applies exponential decay to the learning rate. When training a model, it is often recommended to lower the learning rate as
the training progresses. This function applies an exponential decay function
to a provided initial learning rate. It requires a `global_step` value to
compute the decayed learning rate. You can just pass a TensorFlow variable
that you increment at each training step. The function returns the decayed learning rate. It is computed as:
If the argument `staircase` is `True`, then `global_step / decay_steps` is an
integer division and the decayed learning rate follows a staircase function. Example: decay every 100000 steps with a base of 0.96:
Parameters
-
double
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
ResourceVariable
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation. Must not be negative.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Must be positive. See the decay computation above.
-
double
decay_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The decay rate.
-
bool
staircase - Boolean. If `True`, decay the learning rate at discrete intervals.
-
string
name - String. Optional name of the operation. Defaults to 'ExponentialDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
object exponential_decay_dyn(object learning_rate, object global_step, object decay_steps, object decay_rate, ImplicitContainer<T> staircase, object name)
Applies exponential decay to the learning rate. When training a model, it is often recommended to lower the learning rate as
the training progresses. This function applies an exponential decay function
to a provided initial learning rate. It requires a `global_step` value to
compute the decayed learning rate. You can just pass a TensorFlow variable
that you increment at each training step. The function returns the decayed learning rate. It is computed as:
If the argument `staircase` is `True`, then `global_step / decay_steps` is an
integer division and the decayed learning rate follows a staircase function. Example: decay every 100000 steps with a base of 0.96:
Parameters
-
object
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
object
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation. Must not be negative.
-
object
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Must be positive. See the decay computation above.
-
object
decay_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The decay rate.
-
ImplicitContainer<T>
staircase - Boolean. If `True`, decay the learning rate at discrete intervals.
-
object
name - String. Optional name of the operation. Defaults to 'ExponentialDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
object export_meta_graph(IEnumerable<object> filename, object meta_info_def, IEnumerable<object> graph_def, Nullable<int> saver_def, IEnumerable<object> collection_list, bool as_text, Graph graph, object export_scope, Nullable<bool> clear_devices, bool clear_extraneous_savers, bool strip_default_attrs, bool save_debug_info, IDictionary<string, object> kwargs)
Returns a `MetaGraphDef` proto. Optionally writes it to `filename`. This function exports the graph, saver, and collection objects into a
`MetaGraphDef` protocol buffer with the intention of it being imported
at a later time or location to restart training, run inference, or be
a subgraph.
Parameters
-
IEnumerable<object>
filename - Optional filename including the path for writing the generated `MetaGraphDef` protocol buffer.
-
object
meta_info_def - `MetaInfoDef` protocol buffer.
-
IEnumerable<object>
graph_def - `GraphDef` protocol buffer.
-
Nullable<int>
saver_def - `SaverDef` protocol buffer.
-
IEnumerable<object>
collection_list - List of string keys to collect.
-
bool
as_text - If `True`, writes the `MetaGraphDef` as an ASCII proto.
-
Graph
graph - The `Graph` to export. If `None`, use the default graph.
-
object
export_scope - Optional `string`. Name scope under which to extract the subgraph. The scope name will be stripped from the node definitions for easy import later into new name scopes. If `None`, the whole graph is exported. `graph_def` and `export_scope` cannot both be specified.
-
Nullable<bool>
clear_devices - Whether or not to clear the device field for an `Operation` or `Tensor` during export.
-
bool
clear_extraneous_savers - Remove any Saver-related information from the graph (both Save/Restore ops and SaverDefs) that are not associated with the provided SaverDef.
-
bool
strip_default_attrs - Boolean. If `True`, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see [Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes).
-
bool
save_debug_info - If `True`, save the GraphDebugInfo to a separate file, which is in the same directory as `filename` and has `_debug` added before the file extension.
-
IDictionary<string, object>
kwargs - Optional keyed arguments.
Returns
-
object
- A `MetaGraphDef` proto.
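A minimal export sketch (assuming the TF1-style Python API these docstrings mirror; the variable and path are illustrative):
import tensorflow as tf

v = tf.compat.v1.get_variable('v', shape=[10])
# Export the default graph and its collections; also writes the file.
meta_graph_def = tf.compat.v1.train.export_meta_graph(filename='/tmp/my-model.meta')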
object export_meta_graph(IEnumerable<object> filename, object meta_info_def, IEnumerable<object> graph_def, Nullable<int> saver_def, IEnumerable<object> collection_list, bool as_text, Graph graph, object export_scope, Nullable<bool> clear_devices, bool clear_extraneous_savers, Saver strip_default_attrs, bool save_debug_info, IDictionary<string, object> kwargs)
Returns a `MetaGraphDef` proto. Optionally writes it to `filename`. This function exports the graph, saver, and collection objects into a
`MetaGraphDef` protocol buffer with the intention of it being imported
at a later time or location to restart training, run inference, or be
a subgraph.
Parameters
-
IEnumerable<object>
filename - Optional filename including the path for writing the generated `MetaGraphDef` protocol buffer.
-
object
meta_info_def - `MetaInfoDef` protocol buffer.
-
IEnumerable<object>
graph_def - `GraphDef` protocol buffer.
-
Nullable<int>
saver_def - `SaverDef` protocol buffer.
-
IEnumerable<object>
collection_list - List of string keys to collect.
-
bool
as_text - If `True`, writes the `MetaGraphDef` as an ASCII proto.
-
Graph
graph - The `Graph` to export. If `None`, use the default graph.
-
object
export_scope - Optional `string`. Name scope under which to extract the subgraph. The scope name will be stripped from the node definitions for easy import later into new name scopes. If `None`, the whole graph is exported. `graph_def` and `export_scope` cannot both be specified.
-
Nullable<bool>
clear_devices - Whether or not to clear the device field for an `Operation` or `Tensor` during export.
-
bool
clear_extraneous_savers - Remove any Saver-related information from the graph (both Save/Restore ops and SaverDefs) that are not associated with the provided SaverDef.
-
Saver
strip_default_attrs - Boolean. If `True`, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see [Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes).
-
bool
save_debug_info - If `True`, save the GraphDebugInfo to a separate file, which is in the same directory as `filename` and has `_debug` added before the file extension.
-
IDictionary<string, object>
kwargs - Optional keyed arguments.
Returns
-
object
- A `MetaGraphDef` proto.
object export_meta_graph_dyn(object filename, object meta_info_def, object graph_def, object saver_def, object collection_list, ImplicitContainer<T> as_text, object graph, object export_scope, ImplicitContainer<T> clear_devices, ImplicitContainer<T> clear_extraneous_savers, ImplicitContainer<T> strip_default_attrs, ImplicitContainer<T> save_debug_info, IDictionary<string, object> kwargs)
Returns a `MetaGraphDef` proto. Optionally writes it to `filename`. This function exports the graph, saver, and collection objects into a
`MetaGraphDef` protocol buffer with the intention of it being imported
at a later time or location to restart training, run inference, or be
a subgraph.
Parameters
-
object
filename - Optional filename including the path for writing the generated `MetaGraphDef` protocol buffer.
-
object
meta_info_def - `MetaInfoDef` protocol buffer.
-
object
graph_def - `GraphDef` protocol buffer.
-
object
saver_def - `SaverDef` protocol buffer.
-
object
collection_list - List of string keys to collect.
-
ImplicitContainer<T>
as_text - If `True`, writes the `MetaGraphDef` as an ASCII proto.
-
object
graph - The `Graph` to export. If `None`, use the default graph.
-
object
export_scope - Optional `string`. Name scope under which to extract the subgraph. The scope name will be stripped from the node definitions for easy import later into new name scopes. If `None`, the whole graph is exported. `graph_def` and `export_scope` cannot both be specified.
-
ImplicitContainer<T>
clear_devices - Whether or not to clear the device field for an `Operation` or `Tensor` during export.
-
ImplicitContainer<T>
clear_extraneous_savers - Remove any Saver-related information from the graph (both Save/Restore ops and SaverDefs) that are not associated with the provided SaverDef.
-
ImplicitContainer<T>
strip_default_attrs - Boolean. If `True`, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see [Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes).
-
ImplicitContainer<T>
save_debug_info - If `True`, save the GraphDebugInfo to a separate file, which is in the same directory as `filename` and has `_debug` added before the file extension.
-
IDictionary<string, object>
kwargs - Optional keyed arguments.
Returns
-
object
- A `MetaGraphDef` proto.
object generate_checkpoint_state_proto(string save_dir, Byte[] model_checkpoint_path, IEnumerable<object> all_model_checkpoint_paths, object all_model_checkpoint_timestamps, Nullable<double> last_preserved_timestamp)
Generates a checkpoint state proto.
Parameters
-
string
save_dir - Directory where the model was saved.
-
Byte[]
model_checkpoint_path - The checkpoint file.
-
IEnumerable<object>
all_model_checkpoint_paths - List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto.
-
object
all_model_checkpoint_timestamps - A list of floats, indicating the number of seconds since the Epoch when each checkpoint was generated.
-
Nullable<double>
last_preserved_timestamp - A float, indicating the number of seconds since
the Epoch when the last preserved checkpoint was written, e.g. due to a
`keep_checkpoint_every_n_hours` parameter (see
tf.contrib.checkpoint.CheckpointManager
for an implementation).
Returns
-
object
- CheckpointState proto with model_checkpoint_path and all_model_checkpoint_paths updated to either absolute paths or relative paths to the current save_dir.
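A usage sketch (hedged: paths are illustrative, and the last entry of `all_model_checkpoint_paths` must equal `model_checkpoint_path` as stated above):
import tensorflow as tf

ckpt = tf.compat.v1.train.generate_checkpoint_state_proto(
    save_dir='/tmp/train',
    model_checkpoint_path='/tmp/train/model.ckpt-1000',
    all_model_checkpoint_paths=[
        '/tmp/train/model.ckpt-500',
        '/tmp/train/model.ckpt-1000',
    ])
# With an absolute save_dir the stored paths stay absolute.
print(ckpt.model_checkpoint_path)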
object generate_checkpoint_state_proto(string save_dir, IEnumerable<object> model_checkpoint_path, IEnumerable<object> all_model_checkpoint_paths, object all_model_checkpoint_timestamps, Nullable<double> last_preserved_timestamp)
Generates a checkpoint state proto.
Parameters
-
string
save_dir - Directory where the model was saved.
-
IEnumerable<object>
model_checkpoint_path - The checkpoint file.
-
IEnumerable<object>
all_model_checkpoint_paths - List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto.
-
object
all_model_checkpoint_timestamps - A list of floats, indicating the number of seconds since the Epoch when each checkpoint was generated.
-
Nullable<double>
last_preserved_timestamp - A float, indicating the number of seconds since
the Epoch when the last preserved checkpoint was written, e.g. due to a
`keep_checkpoint_every_n_hours` parameter (see
tf.contrib.checkpoint.CheckpointManager
for an implementation).
Returns
-
object
- CheckpointState proto with model_checkpoint_path and all_model_checkpoint_paths updated to either absolute paths or relative paths to the current save_dir.
object generate_checkpoint_state_proto(string save_dir, IGraphNodeBase model_checkpoint_path, IEnumerable<object> all_model_checkpoint_paths, object all_model_checkpoint_timestamps, Nullable<double> last_preserved_timestamp)
Generates a checkpoint state proto.
Parameters
-
string
save_dir - Directory where the model was saved.
-
IGraphNodeBase
model_checkpoint_path - The checkpoint file.
-
IEnumerable<object>
all_model_checkpoint_paths - List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto.
-
object
all_model_checkpoint_timestamps - A list of floats, indicating the number of seconds since the Epoch when each checkpoint was generated.
-
Nullable<double>
last_preserved_timestamp - A float, indicating the number of seconds since
the Epoch when the last preserved checkpoint was written, e.g. due to a
`keep_checkpoint_every_n_hours` parameter (see
tf.contrib.checkpoint.CheckpointManager
for an implementation).
Returns
-
object
- CheckpointState proto with model_checkpoint_path and all_model_checkpoint_paths updated to either absolute paths or relative paths to the current save_dir.
object generate_checkpoint_state_proto(string save_dir, string model_checkpoint_path, IEnumerable<object> all_model_checkpoint_paths, object all_model_checkpoint_timestamps, Nullable<double> last_preserved_timestamp)
Generates a checkpoint state proto.
Parameters
-
string
save_dir - Directory where the model was saved.
-
string
model_checkpoint_path - The checkpoint file.
-
IEnumerable<object>
all_model_checkpoint_paths - List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto.
-
object
all_model_checkpoint_timestamps - A list of floats, indicating the number of seconds since the Epoch when each checkpoint was generated.
-
Nullable<double>
last_preserved_timestamp - A float, indicating the number of seconds since
the Epoch when the last preserved checkpoint was written, e.g. due to a
`keep_checkpoint_every_n_hours` parameter (see
tf.contrib.checkpoint.CheckpointManager
for an implementation).
Returns
-
object
- CheckpointState proto with model_checkpoint_path and all_model_checkpoint_paths updated to either absolute paths or relative paths to the current save_dir.
object generate_checkpoint_state_proto_dyn(object save_dir, object model_checkpoint_path, object all_model_checkpoint_paths, object all_model_checkpoint_timestamps, object last_preserved_timestamp)
Generates a checkpoint state proto.
Parameters
-
object
save_dir - Directory where the model was saved.
-
object
model_checkpoint_path - The checkpoint file.
-
object
all_model_checkpoint_paths - List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto.
-
object
all_model_checkpoint_timestamps - A list of floats, indicating the number of seconds since the Epoch when each checkpoint was generated.
-
object
last_preserved_timestamp - A float, indicating the number of seconds since
the Epoch when the last preserved checkpoint was written, e.g. due to a
`keep_checkpoint_every_n_hours` parameter (see
tf.contrib.checkpoint.CheckpointManager
for an implementation).
Returns
-
object
- CheckpointState proto with model_checkpoint_path and all_model_checkpoint_paths updated to either absolute paths or relative paths to the current save_dir.
IList<object> get_checkpoint_mtimes(IEnumerable<object> checkpoint_prefixes)
Returns the mtimes (modification timestamps) of the checkpoints. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file utilities to get mtimes. Globs for the checkpoints pointed to by `checkpoint_prefixes`. If the files
exist, collect their mtime. Both V2 and V1 checkpoints are considered, in
that priority. This is the recommended way to get the mtimes, since it takes into account
the naming difference between V1 and V2 formats. Note: If not all checkpoints exist, the length of the returned mtimes list
will be smaller than the length of `checkpoint_prefixes` list, so mapping
checkpoints to corresponding mtimes will not be possible.
Parameters
-
IEnumerable<object>
checkpoint_prefixes - a list of checkpoint paths, typically the results of `Saver.save()` or those of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
Returns
-
IList<object>
- A list of mtimes (in microseconds) of the found checkpoints.
object get_checkpoint_mtimes_dyn(object checkpoint_prefixes)
Returns the mtimes (modification timestamps) of the checkpoints. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file utilities to get mtimes. Globs for the checkpoints pointed to by `checkpoint_prefixes`. If the files
exist, collect their mtime. Both V2 and V1 checkpoints are considered, in
that priority. This is the recommended way to get the mtimes, since it takes into account
the naming difference between V1 and V2 formats. Note: If not all checkpoints exist, the length of the returned mtimes list
will be smaller than the length of `checkpoint_prefixes` list, so mapping
checkpoints to corresponding mtimes will not be possible.
Parameters
-
object
checkpoint_prefixes - a list of checkpoint paths, typically the results of `Saver.save()` or those of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
Returns
-
object
- A list of mtimes (in microseconds) of the found checkpoints.
object get_checkpoint_state(Byte[] checkpoint_dir, string latest_filename)
Returns CheckpointState proto from the "checkpoint" file. If the "checkpoint" file contains a valid CheckpointState
proto, returns it.
Parameters
-
Byte[]
checkpoint_dir - The directory of checkpoints.
-
string
latest_filename - Optional name of the checkpoint file. Defaults to 'checkpoint'.
Returns
-
object
- A CheckpointState if the state was available, None otherwise.
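A usage sketch (the directory is illustrative):
import tensorflow as tf

ckpt = tf.compat.v1.train.get_checkpoint_state('/tmp/train')
if ckpt is not None:
    print(ckpt.model_checkpoint_path)       # most recent checkpoint
    print(ckpt.all_model_checkpoint_paths)  # all not-yet-deleted checkpoints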
object get_checkpoint_state(string checkpoint_dir, string latest_filename)
Returns CheckpointState proto from the "checkpoint" file. If the "checkpoint" file contains a valid CheckpointState
proto, returns it.
Parameters
-
string
checkpoint_dir - The directory of checkpoints.
-
string
latest_filename - Optional name of the checkpoint file. Defaults to 'checkpoint'.
Returns
-
object
- A CheckpointState if the state was available, None otherwise.
object get_checkpoint_state_dyn(object checkpoint_dir, object latest_filename)
Returns CheckpointState proto from the "checkpoint" file. If the "checkpoint" file contains a valid CheckpointState
proto, returns it.
Parameters
-
object
checkpoint_dir - The directory of checkpoints.
-
object
latest_filename - Optional name of the checkpoint file. Defaults to 'checkpoint'.
Returns
-
object
- A CheckpointState if the state was available, None otherwise.
object get_global_step(Graph graph)
Get the global step tensor. The global step tensor must be an integer variable. We first try to find it
in the collection `GLOBAL_STEP`, or by name `global_step:0`.
Parameters
-
Graph
graph - The graph to find the global step in. If missing, use default graph.
Returns
-
object
- The global step variable, or `None` if none was found.
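A usage sketch (assuming the TF1-style Python API; `create_global_step` registers the variable in the `GLOBAL_STEP` collection so that this lookup finds it):
import tensorflow as tf

step = tf.compat.v1.train.create_global_step()
found = tf.compat.v1.train.get_global_step()
print(found)  # the step variable, or None if no graph defines one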
int global_step(_WrappedSession sess, object global_step_tensor)
Small helper to get the global step.
Parameters
-
_WrappedSession
sess - A TensorFlow `Session` object.
-
object
global_step_tensor - `Tensor` or the `name` of the operation that contains the global step.
Returns
-
int
- The global step value.
Show Example
# Create a variable to hold the global_step.
global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
# Create a session.
sess = tf.compat.v1.Session()
# Initialize the variable.
sess.run(global_step_tensor.initializer)
# Get the variable value.
print('global_step: %s' % tf.compat.v1.train.global_step(sess, global_step_tensor))

global_step: 10
int global_step(_WrappedSession sess, IEnumerable<object> global_step_tensor)
Small helper to get the global step.
Parameters
-
_WrappedSession
sess - A TensorFlow `Session` object.
-
IEnumerable<object>
global_step_tensor - `Tensor` or the `name` of the operation that contains the global step.
Returns
-
int
- The global step value.
Show Example
# Create a variable to hold the global_step.
global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
# Create a session.
sess = tf.compat.v1.Session()
# Initialize the variable.
sess.run(global_step_tensor.initializer)
# Get the variable value.
print('global_step: %s' % tf.compat.v1.train.global_step(sess, global_step_tensor))

global_step: 10
int global_step(BaseSession sess, object global_step_tensor)
Small helper to get the global step.
Parameters
-
BaseSession
sess - A TensorFlow `Session` object.
-
object
global_step_tensor - `Tensor` or the `name` of the operation that contains the global step.
Returns
-
int
- The global step value.
Show Example
# Create a variable to hold the global_step.
global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
# Create a session.
sess = tf.compat.v1.Session()
# Initialize the variable.
sess.run(global_step_tensor.initializer)
# Get the variable value.
print('global_step: %s' % tf.compat.v1.train.global_step(sess, global_step_tensor))

global_step: 10
int global_step(BaseSession sess, IEnumerable<object> global_step_tensor)
Small helper to get the global step.
Parameters
-
BaseSession
sess - A TensorFlow `Session` object.
-
IEnumerable<object>
global_step_tensor - `Tensor` or the `name` of the operation that contains the global step.
Returns
-
int
- The global step value.
Show Example
# Create a variable to hold the global_step.
global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
# Create a session.
sess = tf.compat.v1.Session()
# Initialize the variable.
sess.run(global_step_tensor.initializer)
# Get the variable value.
print('global_step: %s' % tf.compat.v1.train.global_step(sess, global_step_tensor))

global_step: 10
object global_step_dyn(object sess, object global_step_tensor)
Small helper to get the global step.
Parameters
-
object
sess - A TensorFlow `Session` object.
-
object
global_step_tensor - `Tensor` or the `name` of the operation that contains the global step.
Returns
-
object
- The global step value.
Show Example
# Create a variable to hold the global_step.
global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
# Create a session.
sess = tf.compat.v1.Session()
# Initialize the variable.
sess.run(global_step_tensor.initializer)
# Get the variable value.
print('global_step: %s' % tf.compat.v1.train.global_step(sess, global_step_tensor))

global_step: 10
Saver import_meta_graph(int meta_graph_or_file, bool clear_devices, string import_scope, IDictionary<string, object> kwargs)
Recreates a Graph saved in a `MetaGraphDef` proto. This function takes a `MetaGraphDef` protocol buffer as input. If
the argument is a file containing a `MetaGraphDef` protocol buffer,
it constructs a protocol buffer from the file content. The function
then adds all the nodes from the `graph_def` field to the
current graph, recreates all the collections, and returns a saver
constructed from the `saver_def` field. In combination with `export_meta_graph()`, this function can be used to:
- Serialize a graph along with other Python objects such as `QueueRunner`, `Variable` into a `MetaGraphDef`.
- Restart training from a saved graph and checkpoints.
- Run inference from a saved graph and checkpoints.
Later we can continue training from this saved `meta_graph` without building
the model from scratch.
NOTE: Restarting training from saved `meta_graph` only works if the
device assignments have not changed. Example:
Variables, placeholders, and independent operations can also be stored, as
shown in the following example.
Later this model can be restored and contents loaded.
Parameters
-
int
meta_graph_or_file - `MetaGraphDef` protocol buffer or filename (including the path) containing a `MetaGraphDef`.
-
bool
clear_devices - Whether or not to clear the device field for an `Operation` or `Tensor` during import.
-
string
import_scope - Optional `string`. Name scope to add. Only used when initializing from protocol buffer.
-
IDictionary<string, object>
kwargs - Optional keyed arguments.
Returns
-
Saver
- A saver constructed from `saver_def` in `MetaGraphDef` or None. A None value is returned if no variables exist in the `MetaGraphDef` (i.e., there are no variables to restore).
Show Example
...
# Create a saver.
saver = tf.compat.v1.train.Saver(...variables...)
# Remember the training_op we want to run by adding it to a collection.
tf.compat.v1.add_to_collection('train_op', train_op)
sess = tf.compat.v1.Session()
for step in xrange(1000000):
    sess.run(train_op)
    if step % 1000 == 0:
        # Saves checkpoint, which by default also exports a meta_graph
        # named 'my-model-global_step.meta'.
        saver.save(sess, 'my-model', global_step=step)
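The matching restore side, following the upstream docstring for this function (the paths and the 'train_op' collection key come from the Show Example above):
import tensorflow as tf

with tf.compat.v1.Session() as sess:
    new_saver = tf.compat.v1.train.import_meta_graph('my-save-dir/my-model-10000.meta')
    new_saver.restore(sess, 'my-save-dir/my-model-10000')
    # get_collection() returns a list; the training op was stored first.
    train_op = tf.compat.v1.get_collection('train_op')[0]
    for step in range(1000000):
        sess.run(train_op)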
Saver import_meta_graph(IEnumerable<string> meta_graph_or_file, bool clear_devices, string import_scope, IDictionary<string, object> kwargs)
Recreates a Graph saved in a `MetaGraphDef` proto. This function takes a `MetaGraphDef` protocol buffer as input. If
the argument is a file containing a `MetaGraphDef` protocol buffer,
it constructs a protocol buffer from the file content. The function
then adds all the nodes from the `graph_def` field to the
current graph, recreates all the collections, and returns a saver
constructed from the `saver_def` field. In combination with `export_meta_graph()`, this function can be used to:
- Serialize a graph along with other Python objects such as `QueueRunner`, `Variable` into a `MetaGraphDef`.
- Restart training from a saved graph and checkpoints.
- Run inference from a saved graph and checkpoints.
Later we can continue training from this saved `meta_graph` without building
the model from scratch.
NOTE: Restarting training from saved `meta_graph` only works if the
device assignments have not changed. Example:
Variables, placeholders, and independent operations can also be stored, as
shown in the following example.
Later this model can be restored and contents loaded.
Parameters
-
IEnumerable<string>
meta_graph_or_file - `MetaGraphDef` protocol buffer or filename (including the path) containing a `MetaGraphDef`.
-
bool
clear_devices - Whether or not to clear the device field for an `Operation` or `Tensor` during import.
-
string
import_scope - Optional `string`. Name scope to add. Only used when initializing from protocol buffer.
-
IDictionary<string, object>
kwargs - Optional keyed arguments.
Returns
-
Saver
- A saver constructed from `saver_def` in `MetaGraphDef` or None. A None value is returned if no variables exist in the `MetaGraphDef` (i.e., there are no variables to restore).
Show Example
...
# Create a saver.
saver = tf.compat.v1.train.Saver(...variables...)
# Remember the training_op we want to run by adding it to a collection.
tf.compat.v1.add_to_collection('train_op', train_op)
sess = tf.compat.v1.Session()
for step in xrange(1000000):
    sess.run(train_op)
    if step % 1000 == 0:
        # Saves checkpoint, which by default also exports a meta_graph
        # named 'my-model-global_step.meta'.
        saver.save(sess, 'my-model', global_step=step)
Saver import_meta_graph(string meta_graph_or_file, bool clear_devices, string import_scope, IDictionary<string, object> kwargs)
Recreates a Graph saved in a `MetaGraphDef` proto. This function takes a `MetaGraphDef` protocol buffer as input. If
the argument is a file containing a `MetaGraphDef` protocol buffer,
it constructs a protocol buffer from the file content. The function
then adds all the nodes from the `graph_def` field to the
current graph, recreates all the collections, and returns a saver
constructed from the `saver_def` field. In combination with `export_meta_graph()`, this function can be used to:
- Serialize a graph along with other Python objects such as `QueueRunner`, `Variable` into a `MetaGraphDef`.
- Restart training from a saved graph and checkpoints.
- Run inference from a saved graph and checkpoints.
Later we can continue training from this saved `meta_graph` without building
the model from scratch.
NOTE: Restarting training from saved `meta_graph` only works if the
device assignments have not changed. Example:
Variables, placeholders, and independent operations can also be stored, as
shown in the following example.
Later this model can be restored and contents loaded.
Parameters
-
string
meta_graph_or_file - `MetaGraphDef` protocol buffer or filename (including the path) containing a `MetaGraphDef`.
-
bool
clear_devices - Whether or not to clear the device field for an `Operation` or `Tensor` during import.
-
string
import_scope - Optional `string`. Name scope to add. Only used when initializing from protocol buffer.
-
IDictionary<string, object>
kwargs - Optional keyed arguments.
Returns
-
Saver
- A saver constructed from `saver_def` in `MetaGraphDef` or None. A None value is returned if no variables exist in the `MetaGraphDef` (i.e., there are no variables to restore).
Show Example
...
# Create a saver.
saver = tf.compat.v1.train.Saver(...variables...)
# Remember the training_op we want to run by adding it to a collection.
tf.compat.v1.add_to_collection('train_op', train_op)
sess = tf.compat.v1.Session()
for step in xrange(1000000):
    sess.run(train_op)
    if step % 1000 == 0:
        # Saves checkpoint, which by default also exports a meta_graph
        # named 'my-model-global_step.meta'.
        saver.save(sess, 'my-model', global_step=step)
object import_meta_graph_dyn(object meta_graph_or_file, ImplicitContainer<T> clear_devices, object import_scope, IDictionary<string, object> kwargs)
Recreates a Graph saved in a `MetaGraphDef` proto. This function takes a `MetaGraphDef` protocol buffer as input. If
the argument is a file containing a `MetaGraphDef` protocol buffer,
it constructs a protocol buffer from the file content. The function
then adds all the nodes from the `graph_def` field to the
current graph, recreates all the collections, and returns a saver
constructed from the `saver_def` field. In combination with `export_meta_graph()`, this function can be used to:
- Serialize a graph along with other Python objects such as `QueueRunner`, `Variable` into a `MetaGraphDef`.
- Restart training from a saved graph and checkpoints.
- Run inference from a saved graph and checkpoints.
Later we can continue training from this saved `meta_graph` without building
the model from scratch.
NOTE: Restarting training from saved `meta_graph` only works if the
device assignments have not changed. Example:
Variables, placeholders, and independent operations can also be stored, as
shown in the following example.
Later this model can be restored and contents loaded.
Parameters
-
object
meta_graph_or_file - `MetaGraphDef` protocol buffer or filename (including the path) containing a `MetaGraphDef`.
-
ImplicitContainer<T>
clear_devices - Whether or not to clear the device field for an `Operation` or `Tensor` during import.
-
object
import_scope - Optional `string`. Name scope to add. Only used when initializing from protocol buffer.
-
IDictionary<string, object>
kwargs - Optional keyed arguments.
Returns
-
object
- A saver constructed from `saver_def` in `MetaGraphDef` or None. A None value is returned if no variables exist in the `MetaGraphDef` (i.e., there are no variables to restore).
Show Example
...
# Create a saver.
saver = tf.compat.v1.train.Saver(...variables...)
# Remember the training_op we want to run by adding it to a collection.
tf.compat.v1.add_to_collection('train_op', train_op)
sess = tf.compat.v1.Session()
for step in xrange(1000000):
    sess.run(train_op)
    if step % 1000 == 0:
        # Saves checkpoint, which by default also exports a meta_graph
        # named 'my-model-global_step.meta'.
        saver.save(sess, 'my-model', global_step=step)
object input_producer(IEnumerable<object> input_tensor, IEnumerable<object> element_shape, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string summary_name, PythonFunctionContainer name, object cancel_op)
Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
IEnumerable<object>
input_tensor - A tensor with the rows to produce. Must be at least one-dimensional. Must either have a fully-defined shape, or `element_shape` must be defined.
-
IEnumerable<object>
element_shape - (Optional.) A `TensorShape` representing the shape of a row of `input_tensor`, if it cannot be inferred.
-
Nullable<int>
num_epochs - (Optional.) An integer. If specified, `input_producer` produces each row of `input_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `input_producer` can cycle through the rows of `input_tensor` an unlimited number of times.
-
bool
shuffle - (Optional.) A boolean. If true, the rows are randomly shuffled within each epoch.
-
Nullable<int>
seed - (Optional.) An integer. The seed to use if `shuffle` is true.
-
int
capacity - (Optional.) The capacity of the queue to be used for buffering the input.
-
string
shared_name - (Optional.) If set, this queue will be shared under the given name across multiple sessions.
-
string
summary_name - (Optional.) If set, a scalar summary for the current queue size will be generated, using this name as part of the tag.
-
PythonFunctionContainer
name - (Optional.) A name for the queue.
-
object
cancel_op - (Optional.) Cancel op for the queue.
Returns
-
object
- A queue with the output rows. A `QueueRunner` for the queue is added to the current `QUEUE_RUNNER` collection of the current graph.
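Since this function is deprecated, here is a sketch of the tf.data replacement quoted in the instructions above (the input tensor and epoch count are illustrative):
import tensorflow as tf

input_tensor = tf.constant([[1.0], [2.0], [3.0]])
num_epochs = 5
# Replacement suggested by the deprecation notice above.
dataset = (tf.data.Dataset.from_tensor_slices(input_tensor)
           .shuffle(tf.shape(input_tensor, out_type=tf.int64)[0])
           .repeat(num_epochs))
# With shuffle=False, simply omit the .shuffle(...) call.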
object input_producer(IEnumerable<object> input_tensor, IEnumerable<object> element_shape, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string summary_name, string name, object cancel_op)
Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
IEnumerable<object>
input_tensor - A tensor with the rows to produce. Must be at least one-dimensional. Must either have a fully-defined shape, or `element_shape` must be defined.
-
IEnumerable<object>
element_shape - (Optional.) A `TensorShape` representing the shape of a row of `input_tensor`, if it cannot be inferred.
-
Nullable<int>
num_epochs - (Optional.) An integer. If specified, `input_producer` produces each row of `input_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `input_producer` can cycle through the rows of `input_tensor` an unlimited number of times.
-
bool
shuffle - (Optional.) A boolean. If true, the rows are randomly shuffled within each epoch.
-
Nullable<int>
seed - (Optional.) An integer. The seed to use if `shuffle` is true.
-
int
capacity - (Optional.) The capacity of the queue to be used for buffering the input.
-
string
shared_name - (Optional.) If set, this queue will be shared under the given name across multiple sessions.
-
string
summary_name - (Optional.) If set, a scalar summary for the current queue size will be generated, using this name as part of the tag.
-
string
name - (Optional.) A name for the queue.
-
object
cancel_op - (Optional.) Cancel op for the queue.
Returns
-
object
- A queue with the output rows. A `QueueRunner` for the queue is added to the current `QUEUE_RUNNER` collection of the current graph.
object input_producer(IGraphNodeBase input_tensor, IEnumerable<object> element_shape, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string summary_name, PythonFunctionContainer name, object cancel_op)
Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
IGraphNodeBase
input_tensor - A tensor with the rows to produce. Must be at least one-dimensional. Must either have a fully-defined shape, or `element_shape` must be defined.
-
IEnumerable<object>
element_shape - (Optional.) A `TensorShape` representing the shape of a row of `input_tensor`, if it cannot be inferred.
-
Nullable<int>
num_epochs - (Optional.) An integer. If specified, `input_producer` produces each row of `input_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `input_producer` can cycle through the rows of `input_tensor` an unlimited number of times.
-
bool
shuffle - (Optional.) A boolean. If true, the rows are randomly shuffled within each epoch.
-
Nullable<int>
seed - (Optional.) An integer. The seed to use if `shuffle` is true.
-
int
capacity - (Optional.) The capacity of the queue to be used for buffering the input.
-
string
shared_name - (Optional.) If set, this queue will be shared under the given name across multiple sessions.
-
string
summary_name - (Optional.) If set, a scalar summary for the current queue size will be generated, using this name as part of the tag.
-
PythonFunctionContainer
name - (Optional.) A name for the queue.
-
object
cancel_op - (Optional.) Cancel op for the queue.
Returns
-
object
- A queue with the output rows. A `QueueRunner` for the queue is added to the current `QUEUE_RUNNER` collection of the current graph.
object input_producer(IGraphNodeBase input_tensor, IEnumerable<object> element_shape, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string summary_name, string name, object cancel_op)
Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
IGraphNodeBase
input_tensor - A tensor with the rows to produce. Must be at least one-dimensional. Must either have a fully-defined shape, or `element_shape` must be defined.
-
IEnumerable<object>
element_shape - (Optional.) A `TensorShape` representing the shape of a row of `input_tensor`, if it cannot be inferred.
-
Nullable<int>
num_epochs - (Optional.) An integer. If specified, `input_producer` produces each row of `input_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `input_producer` can cycle through the rows of `input_tensor` an unlimited number of times.
-
bool
shuffle - (Optional.) A boolean. If true, the rows are randomly shuffled within each epoch.
-
Nullable<int>
seed - (Optional.) An integer. The seed to use if `shuffle` is true.
-
int
capacity - (Optional.) The capacity of the queue to be used for buffering the input.
-
string
shared_name - (Optional.) If set, this queue will be shared under the given name across multiple sessions.
-
string
summary_name - (Optional.) If set, a scalar summary for the current queue size will be generated, using this name as part of the tag.
-
string
name - (Optional.) A name for the queue.
-
object
cancel_op - (Optional.) Cancel op for the queue.
Returns
-
object
- A queue with the output rows. A `QueueRunner` for the queue is added to the current `QUEUE_RUNNER` collection of the current graph.
object input_producer_dyn(object input_tensor, object element_shape, object num_epochs, ImplicitContainer<T> shuffle, object seed, ImplicitContainer<T> capacity, object shared_name, object summary_name, object name, object cancel_op)
Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
object
input_tensor - A tensor with the rows to produce. Must be at least one-dimensional. Must either have a fully-defined shape, or `element_shape` must be defined.
-
object
element_shape - (Optional.) A `TensorShape` representing the shape of a row of `input_tensor`, if it cannot be inferred.
-
object
num_epochs - (Optional.) An integer. If specified, `input_producer` produces each row of `input_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `input_producer` can cycle through the rows of `input_tensor` an unlimited number of times.
-
ImplicitContainer<T>
shuffle - (Optional.) A boolean. If true, the rows are randomly shuffled within each epoch.
-
object
seed - (Optional.) An integer. The seed to use if `shuffle` is true.
-
ImplicitContainer<T>
capacity - (Optional.) The capacity of the queue to be used for buffering the input.
-
object
shared_name - (Optional.) If set, this queue will be shared under the given name across multiple sessions.
-
object
summary_name - (Optional.) If set, a scalar summary for the current queue size will be generated, using this name as part of the tag.
-
object
name - (Optional.) A name for the queue.
-
object
cancel_op - (Optional.) Cancel op for the queue.
Returns
-
object
- A queue with the output rows. A `QueueRunner` for the queue is added to the current `QUEUE_RUNNER` collection of the current graph.
object inverse_time_decay(double learning_rate, ResourceVariable global_step, int decay_steps, double decay_rate, bool staircase, string name)
Applies inverse time decay to the initial learning rate. When training a model, it is often recommended to lower the learning rate as
the training progresses. This function applies an inverse decay function
to a provided initial learning rate. It requires a `global_step` value to
compute the decayed learning rate. You can just pass a TensorFlow variable
that you increment at each training step. The function returns the decayed learning rate. It is computed as:
or, if `staircase` is `True`, as the same expression with `floor(global_step / decay_step)` in place of `global_step / decay_step`.
Example: decay 1/t with a rate of 0.5:
Parameters
-
double
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
ResourceVariable
global_step - A Python number. Global step to use for the decay computation. Must not be negative.
-
int
decay_steps - How often to apply decay.
-
double
decay_rate - A Python number. The decay rate.
-
bool
staircase - Whether to apply decay in a discrete staircase fashion, as opposed to continuously.
-
string
name - String. Optional name of the operation. Defaults to 'InverseTimeDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_step)
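For the `staircase=True` case, the upstream TF1 docstring gives the same expression with a floor (restated here from that docstring):
decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / decay_step))
A usage sketch for the "decay 1/t with a rate of 0.5" example above (values illustrative):
import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
decay_steps = 1.0
decay_rate = 0.5
learning_rate = tf.compat.v1.train.inverse_time_decay(
    0.1, global_step, decay_steps, decay_rate)  # decays as 1/t with rate 0.5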
object inverse_time_decay_dyn(object learning_rate, object global_step, object decay_steps, object decay_rate, ImplicitContainer<T> staircase, object name)
Applies inverse time decay to the initial learning rate. When training a model, it is often recommended to lower the learning rate as
the training progresses. This function applies an inverse decay function
to a provided initial learning rate. It requires a `global_step` value to
compute the decayed learning rate. You can just pass a TensorFlow variable
that you increment at each training step. The function returns the decayed learning rate. It is computed as:
or, if `staircase` is `True`, as the same expression with `floor(global_step / decay_step)` in place of `global_step / decay_step`.
Example: decay 1/t with a rate of 0.5:
Parameters
-
object
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
object
global_step - A Python number. Global step to use for the decay computation. Must not be negative.
-
object
decay_steps - How often to apply decay.
-
object
decay_rate - A Python number. The decay rate.
-
ImplicitContainer<T>
staircase - Whether to apply decay in a discrete staircase fashion, as opposed to continuously.
-
object
name - String. Optional name of the operation. Defaults to 'InverseTimeDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_step)
object latest_checkpoint(string checkpoint_dir, string latest_filename)
Finds the filename of the latest saved checkpoint file.
Parameters
-
string
checkpoint_dir - Directory where the variables were saved.
-
string
latest_filename - Optional name for the protocol buffer file that contains the list of most recent checkpoint filenames. See the corresponding argument to `Saver.save()`.
Returns
-
object
- The full path to the latest checkpoint or `None` if no checkpoint was found.
object latest_checkpoint(Byte[] checkpoint_dir, string latest_filename)
Finds the filename of the latest saved checkpoint file.
Parameters
-
Byte[]
checkpoint_dir - Directory where the variables were saved.
-
string
latest_filename - Optional name for the protocol buffer file that contains the list of most recent checkpoint filenames. See the corresponding argument to `Saver.save()`.
Returns
-
object
- The full path to the latest checkpoint or `None` if no checkpoint was found.
object latest_checkpoint_dyn(object checkpoint_dir, object latest_filename)
Finds the filename of the latest saved checkpoint file.
Parameters
-
object
checkpoint_dir - Directory where the variables were saved.
-
object
latest_filename - Optional name for the protocol buffer file that contains the list of most recent checkpoint filenames. See the corresponding argument to `Saver.save()`.
Returns
-
object
- The full path to the latest checkpoint or `None` if no checkpoint was found.
object limit_epochs(IGraphNodeBase tensor, Nullable<int> num_epochs, string name)
Returns tensor `num_epochs` times and then raises an `OutOfRange` error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`. Note: creates local counter `epochs`. Use `local_variables_initializer()` to
initialize local variables.
Parameters
-
IGraphNodeBase
tensor - Any `Tensor`.
-
Nullable<int>
num_epochs - A positive integer (optional). If specified, limits the number of steps the output tensor may be evaluated.
-
string
name - A name for the operations (optional).
Returns
-
object
- tensor or `OutOfRange`.
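Since this function is deprecated, a sketch of the tf.data replacement quoted in the instructions above (tensor and epoch count illustrative):
import tensorflow as tf

tensor = tf.constant([1, 2, 3])
num_epochs = 4
# Replacement suggested by the deprecation notice above.
dataset = tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)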
object limit_epochs(IEnumerable<object> tensor, Nullable<int> num_epochs, string name)
Returns tensor `num_epochs` times and then raises an `OutOfRange` error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`. Note: creates local counter `epochs`. Use `local_variables_initializer()` to
initialize local variables.
Parameters
-
IEnumerable<object>
tensor - Any `Tensor`.
-
Nullable<int>
num_epochs - A positive integer (optional). If specified, limits the number of steps the output tensor may be evaluated.
-
string
name - A name for the operations (optional).
Returns
-
object
- tensor or `OutOfRange`.
object limit_epochs_dyn(object tensor, object num_epochs, object name)
Returns tensor `num_epochs` times and then raises an `OutOfRange` error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`.
Note: this creates a local counter `epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
object
tensor - Any `Tensor`.
-
object
num_epochs - A positive integer (optional). If specified, limits the number of steps the output tensor may be evaluated.
-
object
name - A name for the operations (optional).
Returns
-
object
- tensor or `OutOfRange`.
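For illustration, a minimal sketch of the deprecated call itself (assuming `tf.constant` is available in the binding and returns a graph node this overload accepts; the values are arbitrary):

using tensorflow;

// Yield the tensor for at most two epochs, then raise OutOfRange.
var t = tf.constant(42);
object limited = tf.train.limit_epochs(t, num_epochs: 2, name: "limit");
// limit_epochs keeps a local `epochs` counter, so run
// local_variables_initializer() before evaluating `limited`.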
object linear_cosine_decay(double learning_rate, int global_step, int decay_steps, double num_periods, double alpha, double beta, string name)
Applies linear cosine decay to the learning rate. See [Bello et al., ICML 2017] Neural Optimizer Search with RL (https://arxiv.org/abs/1709.07417). For the idea of warm starts, controlled here by `num_periods`, see [Loshchilov & Hutter, ICLR 2017] SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983). Note that linear cosine decay is more aggressive than cosine decay, and larger initial learning rates can typically be used.
When training a model, it is often recommended to lower the learning rate as training progresses. This function applies a linear cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can simply pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown under Show Example below.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
double
num_periods - Number of periods in the cosine part of the decay. See computation above.
-
double
alpha - See computation above.
-
double
beta - See computation above.
-
string
name - String. Optional name of the operation. Defaults to 'LinearCosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
object linear_cosine_decay(double learning_rate, int global_step, int decay_steps, int num_periods, double alpha, double beta, string name)
Applies linear cosine decay to the learning rate. See [Bello et al., ICML 2017] Neural Optimizer Search with RL (https://arxiv.org/abs/1709.07417). For the idea of warm starts, controlled here by `num_periods`, see [Loshchilov & Hutter, ICLR 2017] SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983). Note that linear cosine decay is more aggressive than cosine decay, and larger initial learning rates can typically be used.
When training a model, it is often recommended to lower the learning rate as training progresses. This function applies a linear cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can simply pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown under Show Example below.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
int
num_periods - Number of periods in the cosine part of the decay. See computation above.
-
double
alpha - See computation above.
-
double
beta - See computation above.
-
string
name - String. Optional name of the operation. Defaults to 'LinearCosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
object linear_cosine_decay_dyn(object learning_rate, object global_step, object decay_steps, ImplicitContainer<T> num_periods, ImplicitContainer<T> alpha, ImplicitContainer<T> beta, object name)
Applies linear cosine decay to the learning rate. See [Bello et al., ICML 2017] Neural Optimizer Search with RL (https://arxiv.org/abs/1709.07417). For the idea of warm starts, controlled here by `num_periods`, see [Loshchilov & Hutter, ICLR 2017] SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983). Note that linear cosine decay is more aggressive than cosine decay, and larger initial learning rates can typically be used.
When training a model, it is often recommended to lower the learning rate as training progresses. This function applies a linear cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can simply pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown under Show Example below.
Parameters
-
object
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
object
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
object
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
ImplicitContainer<T>
num_periods - Number of periods in the cosine part of the decay. See computation above.
-
ImplicitContainer<T>
alpha - See computation above.
-
ImplicitContainer<T>
beta - See computation above.
-
object
name - String. Optional name of the operation. Defaults to 'LinearCosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
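A concrete sketch of the call (the values `num_periods: 0.5`, `alpha: 0.0`, `beta: 0.001` mirror the upstream TensorFlow defaults; `globalStep` is a stand-in for the training-step counter maintained elsewhere):

using tensorflow;

// Decay an initial rate of 0.1 over 1000 steps using the schedule above.
int globalStep = 0;  // placeholder: incremented once per training step
object lr = tf.train.linear_cosine_decay(
    learning_rate: 0.1, global_step: globalStep, decay_steps: 1000,
    num_periods: 0.5, alpha: 0.0, beta: 0.001, name: "lc_decay");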
IList<ValueTuple<object, object>> list_variables(Byte[] ckpt_dir_or_file)
Returns list of all variables in the checkpoint.
Parameters
-
Byte[]
ckpt_dir_or_file - Directory with checkpoints file or path to checkpoint.
Returns
-
IList<ValueTuple<object, object>>
- List of tuples `(name, shape)`.
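A short sketch for this overload (hedged: the `Byte[]` argument is assumed to be a UTF-8 encoding of the checkpoint path, and `/tmp/model` is hypothetical):

using System.Text;
using tensorflow;

// Enumerate the (name, shape) pairs stored in a checkpoint.
byte[] ckptDir = Encoding.UTF8.GetBytes("/tmp/model");
var variables = tf.train.list_variables(ckptDir);
foreach (var (name, shape) in variables)
    System.Console.WriteLine($"{name}: {shape}");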
object maybe_batch(IEnumerable<object> tensors, IGraphNodeBase keep_input, int batch_size, int num_threads, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Conditionally creates batches of tensors based on `keep_input`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch` for more details.
Parameters
-
IEnumerable<object>
tensors - The list or dictionary of tensors to enqueue.
-
IGraphNodeBase
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_batch(IDictionary<object, object> tensors, bool keep_input, int batch_size, int num_threads, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Conditionally creates batches of tensors based on `keep_input`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch` for more details.
Parameters
-
IDictionary<object, object>
tensors - The list or dictionary of tensors to enqueue.
-
bool
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_batch(IEnumerable<object> tensors, bool keep_input, int batch_size, int num_threads, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Conditionally creates batches of tensors based on `keep_input`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch` for more details.
Parameters
-
IEnumerable<object>
tensors - The list or dictionary of tensors to enqueue.
-
bool
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_batch(IDictionary<object, object> tensors, IGraphNodeBase keep_input, int batch_size, int num_threads, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Conditionally creates batches of tensors based on `keep_input`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch` for more details.
Parameters
-
IDictionary<object, object>
tensors - The list or dictionary of tensors to enqueue.
-
IGraphNodeBase
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_batch(IDictionary<object, object> tensors, IEnumerable<bool> keep_input, int batch_size, int num_threads, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Conditionally creates batches of tensors based on `keep_input`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch` for more details.
Parameters
-
IDictionary<object, object>
tensors - The list or dictionary of tensors to enqueue.
-
IEnumerable<bool>
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_batch(IEnumerable<object> tensors, IEnumerable<bool> keep_input, int batch_size, int num_threads, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Conditionally creates batches of tensors based on `keep_input`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch` for more details.
Parameters
-
IEnumerable<object>
tensors - The list or dictionary of tensors to enqueue.
-
IEnumerable<bool>
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - The new batch size pulled from the queue.
-
int
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_batch_dyn(object tensors, object keep_input, object batch_size, ImplicitContainer<T> num_threads, ImplicitContainer<T> capacity, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> dynamic_pad, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Conditionally creates batches of tensors based on `keep_input`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch` for more details.
Parameters
-
object
tensors - The list or dictionary of tensors to enqueue.
-
object
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
object
batch_size - The new batch size pulled from the queue.
-
ImplicitContainer<T>
num_threads - The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
-
ImplicitContainer<T>
capacity - An integer. The maximum number of elements in the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensors` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
-
ImplicitContainer<T>
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional). If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
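A hedged sketch of the scalar `keep_input` case (assuming `tf.constant` produces tensors these parameters accept; a constant `true` disables filtering so every example is enqueued):

using tensorflow;

// Batch single examples, unconditionally keeping each one.
var example = tf.constant(new[] { 1.0, 2.0, 3.0 });
var keepAll = tf.constant(true);  // scalar bool: enqueue every example
object batched = tf.train.maybe_batch(
    tensors: new object[] { example }, keep_input: keepAll,
    batch_size: 32, num_threads: 1, capacity: 64,
    enqueue_many: false, shapes: null, dynamic_pad: false,
    allow_smaller_final_batch: false, shared_name: null, name: null);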
object maybe_batch_join(IEnumerable<object> tensors_list, IGraphNodeBase keep_input, int batch_size, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch_join` for more details.
Parameters
-
IEnumerable<object>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IGraphNodeBase
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list_list[i]`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object maybe_batch_join(IEnumerable<object> tensors_list, IEnumerable<bool> keep_input, int batch_size, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch_join` for more details.
Parameters
-
IEnumerable<object>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IEnumerable<bool>
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list_list[i]`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object maybe_batch_join(IEnumerable<object> tensors_list, bool keep_input, int batch_size, int capacity, bool enqueue_many, object shapes, bool dynamic_pad, bool allow_smaller_final_batch, object shared_name, string name)
Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch_join` for more details.
Parameters
-
IEnumerable<object>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
bool
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list_list[i]`.
-
bool
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object maybe_batch_join_dyn(object tensors_list, object keep_input, object batch_size, ImplicitContainer<T> capacity, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> dynamic_pad, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). See docstring in `batch_join` for more details.
Parameters
-
object
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
object
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
object
batch_size - An integer. The new batch size pulled from the queue.
-
ImplicitContainer<T>
capacity - An integer. The maximum number of elements in the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list_list[i]`.
-
ImplicitContainer<T>
dynamic_pad - Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
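A sketch of the join variant, which interleaves several input sources into one queue (the two-source list and all values are illustrative; the plain `bool` overload of `keep_input` is used):

using tensorflow;

// Two hypothetical per-reader example tuples, filtered then batched.
var sourceA = new object[] { tf.constant(1.0) };
var sourceB = new object[] { tf.constant(2.0) };
object batched = tf.train.maybe_batch_join(
    tensors_list: new object[] { sourceA, sourceB },
    keep_input: true, batch_size: 16, capacity: 32,
    enqueue_many: false, shapes: null, dynamic_pad: false,
    allow_smaller_final_batch: false, shared_name: null, name: null);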
object maybe_shuffle_batch(IEnumerable<object> tensors, int batch_size, int capacity, int min_after_dequeue, IEnumerable<bool> keep_input, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch` for more details.
Parameters
-
IEnumerable<object>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
IEnumerable<bool>
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_shuffle_batch(IEnumerable<object> tensors, int batch_size, int capacity, int min_after_dequeue, IGraphNodeBase keep_input, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch` for more details.
Parameters
-
IEnumerable<object>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
IGraphNodeBase
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_shuffle_batch(IDictionary<object, object> tensors, int batch_size, int capacity, int min_after_dequeue, bool keep_input, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch` for more details.
Parameters
-
IDictionary<object, object>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
bool
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_shuffle_batch(IDictionary<object, object> tensors, int batch_size, int capacity, int min_after_dequeue, IEnumerable<bool> keep_input, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch` for more details.
Parameters
-
IDictionary<object, object>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
IEnumerable<bool>
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_shuffle_batch(IDictionary<object, object> tensors, int batch_size, int capacity, int min_after_dequeue, IGraphNodeBase keep_input, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch` for more details.
Parameters
-
IDictionary<object, object>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
IGraphNodeBase
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_shuffle_batch(IEnumerable<object> tensors, int batch_size, int capacity, int min_after_dequeue, bool keep_input, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch` for more details.
Parameters
-
IEnumerable<object>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
bool
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
object maybe_shuffle_batch_dyn(object tensors, object batch_size, object capacity, object min_after_dequeue, object keep_input, ImplicitContainer<T> num_threads, object seed, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch` for more details.
Parameters
-
object
tensors - The list or dictionary of tensors to enqueue.
-
object
batch_size - The new batch size pulled from the queue.
-
object
capacity - An integer. The maximum number of elements in the queue.
-
object
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
object
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
ImplicitContainer<T>
num_threads - The number of threads enqueuing `tensor_list`.
-
object
seed - Seed for the random shuffling within the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same types as `tensors`.
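A hedged sketch of conditional shuffled batching (values are illustrative; `min_after_dequeue: 1000` keeps at least that many elements queued so the shuffle stays well mixed, and the fixed `seed` makes the shuffling reproducible):

using tensorflow;

// Shuffle-batch every example (scalar bool keep_input disables filtering).
var example = tf.constant(new[] { 1.0, 2.0, 3.0 });
object batched = tf.train.maybe_shuffle_batch(
    tensors: new object[] { example }, batch_size: 32,
    capacity: 2000, min_after_dequeue: 1000, keep_input: true,
    num_threads: 1, seed: 42, enqueue_many: false, shapes: null,
    allow_smaller_final_batch: false, shared_name: null, name: null);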
object maybe_shuffle_batch_join(IEnumerable<object> tensors_list, int batch_size, int capacity, int min_after_dequeue, IGraphNodeBase keep_input, object seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch_join` for more details.
Parameters
-
IEnumerable<object>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
IGraphNodeBase
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
object
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object maybe_shuffle_batch_join(IEnumerable<object> tensors_list, int batch_size, int capacity, int min_after_dequeue, IEnumerable<bool> keep_input, object seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch_join` for more details.
Parameters
-
IEnumerable<object>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
IEnumerable<bool>
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
object
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object maybe_shuffle_batch_join(IEnumerable<object> tensors_list, int batch_size, int capacity, int min_after_dequeue, bool keep_input, object seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, object shared_name, string name)
Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch_join` for more details.
Parameters
-
IEnumerable<object>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
bool
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
object
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object maybe_shuffle_batch_join_dyn(object tensors_list, object batch_size, object capacity, object min_after_dequeue, object keep_input, object seed, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).shuffle(min_after_dequeue).batch(batch_size)`. See docstring in `shuffle_batch_join` for more details.
Parameters
-
object
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
object
batch_size - An integer. The new batch size pulled from the queue.
-
object
capacity - An integer. The maximum number of elements in the queue.
-
object
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
object
keep_input - A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
-
object
seed - Seed for the random shuffling within the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensor_list_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
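And the join flavour, combining several sources with shuffling (again a sketch with illustrative values and the `bool` overload of `keep_input`):

using tensorflow;

var sourceA = new object[] { tf.constant(1.0) };
var sourceB = new object[] { tf.constant(2.0) };
object batched = tf.train.maybe_shuffle_batch_join(
    tensors_list: new object[] { sourceA, sourceB }, batch_size: 16,
    capacity: 2000, min_after_dequeue: 1000, keep_input: true,
    seed: null, enqueue_many: false, shapes: null,
    allow_smaller_final_batch: false, shared_name: null, name: null);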
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, int save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets a proper session initializer/restorer. It also creates hooks related to checkpoint and summary saving. For workers, this utility sets a proper session creator which waits for the chief to initialize/restore. Please check `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String` the TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory where to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - list of `SessionRunHook` objects. Activate these hooks if `is_chief==True`, ignore otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument of the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
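All of these overloads return the same `MonitoredSession`, so the basic usage pattern does not depend on which parameter types are chosen. Below is a minimal, illustrative sketch only: it assumes this binding's `tf` entry point (namespace `tensorflow`), assumes `MonitoredSession` mirrors the Python API's `run`/`should_stop` members, and uses a hypothetical `BuildTrainOp` helper standing in for whatever training step the surrounding graph defines.

```csharp
using tensorflow;  // assumed: the binding's entry point exposing `tf`

// Stand-in for the training step defined elsewhere in the graph (hypothetical).
var train_op = BuildTrainOp();

// Running as chief on a single machine: variables are restored from (and
// checkpointed to) checkpoint_dir, with the default savers at their documented
// defaults (a checkpoint every 600 seconds, summaries every 100 global steps).
using (var session = tf.train.MonitoredTrainingSession(
    master: "",
    is_chief: true,
    checkpoint_dir: "/tmp/my_model"))
{
    while (!session.should_stop())  // assumed to mirror tf.compat.v1.train.MonitoredSession
        session.run(train_op);
}
```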
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, int save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
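One precedence rule in the parameter docs is easy to trip over: when both `save_checkpoint_secs` and `save_checkpoint_steps` are provided, only `save_checkpoint_secs` is used. To checkpoint by step count, the time-based saver must be disabled explicitly. A hedged sketch, assuming the binding accepts `null` where the Python API takes `None` (via `ImplicitContainer<T>`):

```csharp
// Checkpoint every 1000 global steps instead of every 600 seconds.
// save_checkpoint_secs must be disabled, or it silently takes precedence.
using var session = tf.train.MonitoredTrainingSession(
    is_chief: true,
    checkpoint_dir: "/tmp/my_model",
    save_checkpoint_secs: null,    // assumed: null stands in for None
    save_checkpoint_steps: 1000);
```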
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, int save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, int save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
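The summary savers follow the same pattern: `save_summaries_steps` defaults to 100, `save_summaries_secs` is not enabled by default, and setting both to `None` disables the default summary saver entirely. A sketch of switching to time-based summaries, under the same `null`-as-`None` assumption as above:

```csharp
// Write summaries every 30 seconds rather than every 100 global steps.
using var session = tf.train.MonitoredTrainingSession(
    is_chief: true,
    checkpoint_dir: "/tmp/my_model",
    save_summaries_steps: null,   // disable the step-based default saver
    save_summaries_secs: 30.0);
```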
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
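The chief/worker split described above matters most in distributed training: exactly one task passes `is_chief: true`, and every task must point `checkpoint_dir` at storage they can all reach. A hedged sketch, reusing the assumptions of the first example; the master address, task-index helper, and paths are hypothetical:

```csharp
string master = "grpc://worker0.example.com:2222";  // hypothetical cluster address
int taskIndex = GetTaskIndex();                     // hypothetical helper
bool isChief = taskIndex == 0;                      // convention: task 0 is the chief

// The chief initializes/restores and runs the checkpoint/summary hooks;
// non-chief workers block inside this call until the chief is ready.
using (var session = tf.train.MonitoredTrainingSession(
    master: master,
    is_chief: isChief,
    checkpoint_dir: "/shared/fs/my_model",  // must be reachable by every task
    max_wait_secs: 7200))                   // allow extra time for a slow-starting chief
{
    while (!session.should_stop())
        session.run(train_op);
}
```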
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, int save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
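Because `config` is forwarded to the `tf.compat.v1.Session` constructor, session-level options are set through a `ConfigProto`. A sketch assuming the binding exposes the protobuf-generated `ConfigProto` class with PascalCase properties:

```csharp
var config = new ConfigProto
{
    AllowSoftPlacement = true,    // fall back to CPU when an op lacks a GPU kernel
    LogDevicePlacement = false,
};

using var session = tf.train.MonitoredTrainingSession(
    is_chief: true,
    checkpoint_dir: "/tmp/my_model",
    config: config);
```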
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, int save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
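`summary_dir` decouples event files from checkpoints; when it is `None`, summaries are written to `checkpoint_dir`. A minimal sketch with hypothetical paths, under the same assumptions as the earlier examples:

```csharp
// Keep checkpoints and TensorBoard event files in separate directories.
using var session = tf.train.MonitoredTrainingSession(
    is_chief: true,
    checkpoint_dir: "/tmp/my_model/ckpt",
    summary_dir: "/tmp/my_model/logs");
```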
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, Byte[] checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
Byte[]
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, int save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, int save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It is the `config` argument passed to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, int save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize or restore the session. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initialization and recovery of the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - A list of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, int save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, int save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, int save_summaries_steps, double save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
int
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
double
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, double save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
double
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
MonitoredSession MonitoredTrainingSession(string master, Nullable<bool> is_chief, string checkpoint_dir, Scaffold scaffold, IEnumerable<object> hooks, IEnumerable<SessionRunHook> chief_only_hooks, int save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, int stop_grace_period_secs, Nullable<int> log_step_count_steps, int max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
string
master - `String`. The TensorFlow master to use.
-
Nullable<bool>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
string
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
Scaffold
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
IEnumerable<object>
hooks - Optional list of `SessionRunHook` objects.
-
IEnumerable<SessionRunHook>
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
int
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
int
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
Nullable<int>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
int
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
MonitoredSession
- A `MonitoredSession` object.
object MonitoredTrainingSession_dyn(ImplicitContainer<T> master, ImplicitContainer<T> is_chief, object checkpoint_dir, object scaffold, object hooks, object chief_only_hooks, ImplicitContainer<T> save_checkpoint_secs, ImplicitContainer<T> save_summaries_steps, ImplicitContainer<T> save_summaries_secs, object config, ImplicitContainer<T> stop_grace_period_secs, ImplicitContainer<T> log_step_count_steps, ImplicitContainer<T> max_wait_secs, ImplicitContainer<T> save_checkpoint_steps, object summary_dir)
Creates a `MonitoredSession` for training. For a chief, this utility sets the proper session initializer/restorer and creates hooks related to checkpoint and summary saving. For workers, it sets a session creator that waits for the chief to initialize/restore. See `tf.compat.v1.train.MonitoredSession` for more information.
Parameters
-
ImplicitContainer<T>
master - `String`. The TensorFlow master to use.
-
ImplicitContainer<T>
is_chief - If `True`, it will take care of initializing and recovering the underlying TensorFlow session. If `False`, it will wait on a chief to initialize or recover the TensorFlow session.
-
object
checkpoint_dir - A string. Optional path to a directory from which to restore variables.
-
object
scaffold - A `Scaffold` used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph.
-
object
hooks - Optional list of `SessionRunHook` objects.
-
object
chief_only_hooks - List of `SessionRunHook` objects. These hooks are activated if `is_chief==True` and ignored otherwise.
-
ImplicitContainer<T>
save_checkpoint_secs - The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default 600.
-
ImplicitContainer<T>
save_summaries_steps - The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default 100.
-
ImplicitContainer<T>
save_summaries_secs - The frequency, in secs, that the summaries are written to disk using a default summary saver. If both `save_summaries_steps` and `save_summaries_secs` are set to `None`, then the default summary saver isn't used. Default not enabled.
-
object
config - An instance of `tf.compat.v1.ConfigProto` used to configure the session. It's the `config` argument to the `tf.compat.v1.Session` constructor.
-
ImplicitContainer<T>
stop_grace_period_secs - Number of seconds given to threads to stop after `close()` has been called.
-
ImplicitContainer<T>
log_step_count_steps - The frequency, in number of global steps, that the global step/sec is logged.
-
ImplicitContainer<T>
max_wait_secs - Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
-
ImplicitContainer<T>
save_checkpoint_steps - The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then the default checkpoint saver isn't used. If both are provided, then only `save_checkpoint_secs` is used. Default not enabled.
-
object
summary_dir - A string. Optional path to a directory in which to save summaries. If `None`, `checkpoint_dir` is used instead.
Returns
-
object
- A `MonitoredSession` object.
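A minimal end-to-end training-loop sketch (hedged: the optimizer and `loss` wiring are illustrative assumptions, not part of this reference; only the `MonitoredTrainingSession` call itself follows the signatures above):
global_step = tf.compat.v1.train.get_or_create_global_step()
# `loss` is assumed to be defined elsewhere by the model.
train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(
    loss, global_step=global_step)
hooks = [tf.compat.v1.train.StopAtStepHook(last_step=1000)]
# On the chief this initializes/restores state and saves checkpoints to
# checkpoint_dir; on workers it waits for the chief instead.
with tf.compat.v1.train.MonitoredTrainingSession(
    is_chief=True, checkpoint_dir="/tmp/train", hooks=hooks) as sess:
  while not sess.should_stop():
    sess.run(train_op)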
object natural_exp_decay(double learning_rate, ResourceVariable global_step, int decay_steps, double decay_rate, bool staircase, string name)
Applies natural exponential decay to the initial learning rate. When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies an exponential decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as `learning_rate * exp(-decay_rate * global_step / decay_steps)` or, if `staircase` is `True`, as `learning_rate * exp(-decay_rate * floor(global_step / decay_steps))`. For example, one can decay exponentially with a base of 0.96; see the usage sketch after the formula below.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
ResourceVariable
global_step - A Python number. Global step to use for the decay computation. Must not be negative.
-
int
decay_steps - How often to apply decay.
-
double
decay_rate - A Python number. The decay rate.
-
bool
staircase - Whether to apply decay in a discrete staircase, as opposed to continuous, fashion.
-
string
name - String. Optional name of the operation. Defaults to 'ExponentialTimeDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate * exp(-decay_rate * global_step / decay_steps)
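A minimal usage sketch of the base-0.96 example (hedged: "a base of 0.96" is interpreted here as shrinking the rate by a factor of 0.96 every `decay_steps` steps, i.e. `decay_rate = -ln(0.96)` so that `exp(-decay_rate) == 0.96`; `loss` is an assumed placeholder):
import math
global_step = tf.Variable(0, trainable=False)
decay_rate = -math.log(0.96)  # assumed interpretation of "a base of 0.96"
learning_rate = tf.compat.v1.train.natural_exp_decay(
    0.1, global_step, decay_steps=5, decay_rate=decay_rate)
train_op = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)  # increments global_step each step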
object natural_exp_decay_dyn(object learning_rate, object global_step, object decay_steps, object decay_rate, ImplicitContainer<T> staircase, object name)
Applies natural exponential decay to the initial learning rate. When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies an exponential decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as `learning_rate * exp(-decay_rate * global_step / decay_steps)` or, if `staircase` is `True`, as `learning_rate * exp(-decay_rate * floor(global_step / decay_steps))`. For example, one can decay exponentially with a base of 0.96; see the usage sketch under `natural_exp_decay` above.
Parameters
-
object
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
object
global_step - A Python number. Global step to use for the decay computation. Must not be negative.
-
object
decay_steps - How often to apply decay.
-
object
decay_rate - A Python number. The decay rate.
-
ImplicitContainer<T>
staircase - Whether to apply decay in a discrete staircase, as opposed to continuous, fashion.
-
object
name - String. Optional name of the operation. Defaults to 'ExponentialTimeDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
decayed_learning_rate = learning_rate * exp(-decay_rate * global_step / decay_steps)
object noisy_linear_cosine_decay(double learning_rate, int global_step, int decay_steps, double initial_variance, double variance_decay, int num_periods, double alpha, double beta, string name)
Applies noisy linear cosine decay to the learning rate. See [Bello et al., ICML2017] Neural Optimizer Search with RL (https://arxiv.org/abs/1709.07417). For the idea of warm starts, controlled here by `num_periods`, see [Loshchilov & Hutter, ICLR2016] SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983). Note that linear cosine decay is more aggressive than cosine decay, so larger initial learning rates can typically be used. When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a noisy linear cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown in the formula below, where `eps_t` is 0-centered Gaussian noise with variance `initial_variance / (1 + global_step) ** variance_decay`. A usage sketch follows the formula.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
double
initial_variance - initial variance for the noise. See computation above.
-
double
variance_decay - decay for the noise's variance. See computation above.
-
int
num_periods - Number of periods in the cosine part of the decay. See computation above.
-
double
alpha - See computation above.
-
double
beta - See computation above.
-
string
name - String. Optional name of the operation. Defaults to 'NoisyLinearCosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (
    1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
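A minimal usage sketch (hedged: the default noise/period hyperparameters are used, and `loss` is an assumed placeholder for a real model loss):
global_step = tf.compat.v1.train.get_or_create_global_step()
learning_rate = tf.compat.v1.train.noisy_linear_cosine_decay(
    learning_rate=0.1, global_step=global_step, decay_steps=10000)
train_op = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)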
object noisy_linear_cosine_decay(double learning_rate, int global_step, int decay_steps, double initial_variance, double variance_decay, double num_periods, double alpha, double beta, string name)
Applies noisy linear cosine decay to the learning rate. See [Bello et al., ICML2017] Neural Optimizer Search with RL (https://arxiv.org/abs/1709.07417). For the idea of warm starts, controlled here by `num_periods`, see [Loshchilov & Hutter, ICLR2016] SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983). Note that linear cosine decay is more aggressive than cosine decay, so larger initial learning rates can typically be used. When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a noisy linear cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown in the formula below, where `eps_t` is 0-centered Gaussian noise with variance `initial_variance / (1 + global_step) ** variance_decay`.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
double
initial_variance - initial variance for the noise. See computation above.
-
double
variance_decay - decay for the noise's variance. See computation above.
-
double
num_periods - Number of periods in the cosine part of the decay. See computation above.
-
double
alpha - See computation above.
-
double
beta - See computation above.
-
string
name - String. Optional name of the operation. Defaults to 'NoisyLinearCosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (
    1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
object noisy_linear_cosine_decay_dyn(object learning_rate, object global_step, object decay_steps, ImplicitContainer<T> initial_variance, ImplicitContainer<T> variance_decay, ImplicitContainer<T> num_periods, ImplicitContainer<T> alpha, ImplicitContainer<T> beta, object name)
Applies noisy linear cosine decay to the learning rate. See [Bello et al., ICML2017] Neural Optimizer Search with RL (https://arxiv.org/abs/1709.07417). For the idea of warm starts, controlled here by `num_periods`, see [Loshchilov & Hutter, ICLR2016] SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983). Note that linear cosine decay is more aggressive than cosine decay, so larger initial learning rates can typically be used. When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a noisy linear cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate; you can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown in the formula below, where `eps_t` is 0-centered Gaussian noise with variance `initial_variance / (1 + global_step) ** variance_decay`.
Parameters
-
object
learning_rate - A scalar `float32` or `float64` Tensor or a Python number. The initial learning rate.
-
object
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
-
object
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
-
ImplicitContainer<T>
initial_variance - initial variance for the noise. See computation above.
-
ImplicitContainer<T>
variance_decay - decay for the noise's variance. See computation above.
-
ImplicitContainer<T>
num_periods - Number of periods in the cosine part of the decay. See computation above.
-
ImplicitContainer<T>
alpha - See computation above.
-
ImplicitContainer<T>
beta - See computation above.
-
object
name - String. Optional name of the operation. Defaults to 'NoisyLinearCosineDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (
    1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
object piecewise_constant(Variable x, IEnumerable<double> boundaries, IEnumerable<int> values, string name)
Piecewise constant from boundaries and interval values. Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5
for the next 10000 steps, and 0.1 for any additional steps.
Parameters
-
Variable
x - A 0-D scalar `Tensor`. Must be one of the following types: `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`.
-
IEnumerable<double>
boundaries - A list of `Tensor`s or `int`s or `float`s with strictly increasing entries, and with all elements having the same type as `x`.
-
IEnumerable<int>
values - A list of `Tensor`s or `float`s or `int`s that specifies the values for the intervals defined by `boundaries`. It should have one more element than `boundaries`, and all elements should have the same type.
-
string
name - A string. Optional name of the operation. Defaults to 'PiecewiseConstant'.
Returns
-
object
- A 0-D Tensor. Its value is `values[0]` when `x <= boundaries[0]`, `values[1]` when `x > boundaries[0]` and `x <= boundaries[1]`,..., and `values[-1]` when `x > boundaries[-1]`.
Show Example
global_step = tf.Variable(0, trainable=False)
boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate = tf.compat.v1.train.piecewise_constant(
    global_step, boundaries, values)
# Later, whenever we perform an optimization step, we increment global_step.
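One common way to perform that increment (a sketch; `loss` is assumed to be defined elsewhere) is to pass `global_step` to the optimizer, which then increments it on every training step:
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)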
object piecewise_constant_dyn(object x, object boundaries, object values, object name)
Piecewise constant from boundaries and interval values. Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5
for the next 10000 steps, and 0.1 for any additional steps.
Parameters
-
object
x - A 0-D scalar `Tensor`. Must be one of the following types: `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`.
-
object
boundaries - A list of `Tensor`s or `int`s or `float`s with strictly increasing entries, and with all elements having the same type as `x`.
-
object
values - A list of `Tensor`s or `float`s or `int`s that specifies the values for the intervals defined by `boundaries`. It should have one more element than `boundaries`, and all elements should have the same type.
-
object
name - A string. Optional name of the operation. Defaults to 'PiecewiseConstant'.
Returns
-
object
- A 0-D Tensor. Its value is `values[0]` when `x <= boundaries[0]`, `values[1]` when `x > boundaries[0]` and `x <= boundaries[1]`,..., and `values[-1]` when `x > boundaries[-1]`.
Show Example
global_step = tf.Variable(0, trainable=False)
boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate = tf.compat.v1.train.piecewise_constant(
    global_step, boundaries, values)
# Later, whenever we perform an optimization step, we increment global_step.
object polynomial_decay(double learning_rate, int global_step, int decay_steps, double end_learning_rate, double power, bool cycle, string name)
Applies a polynomial decay to the learning rate. It is commonly observed that a monotonically decreasing learning rate, whose degree of change is carefully chosen, results in a better performing model. This function applies a polynomial decay function to a provided initial `learning_rate` to reach an `end_learning_rate` in the given `decay_steps`. It requires a `global_step` value to compute the decayed learning rate; you can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown in the formula below. If `cycle` is True, then a multiple of `decay_steps` is used: the first one that is bigger than `global_step`. For example, to decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5), see the usage sketch after the formula below.
Parameters
-
double
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
int
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation. Must not be negative.
-
int
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Must be positive. See the decay computation above.
-
double
end_learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The minimal end learning rate.
-
double
power - A scalar `float32` or `float64` `Tensor` or a Python number. The power of the polynomial. Defaults to linear, 1.0.
-
bool
cycle - A boolean, whether or not it should cycle beyond decay_steps.
-
string
name - String. Optional name of the operation. Defaults to 'PolynomialDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
decayed_learning_rate = (learning_rate - end_learning_rate) *
                        (1 - global_step / decay_steps) ^ (power) +
                        end_learning_rate
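A sketch of the 0.1-to-0.01 example mentioned above (hedged: `loss` is an assumed placeholder):
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
end_learning_rate = 0.01
decay_steps = 10000
learning_rate = tf.compat.v1.train.polynomial_decay(
    starter_learning_rate, global_step, decay_steps,
    end_learning_rate, power=0.5)
# Passing global_step to minimize() increments it on each training step.
train_op = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)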
object polynomial_decay_dyn(object learning_rate, object global_step, object decay_steps, ImplicitContainer<T> end_learning_rate, ImplicitContainer<T> power, ImplicitContainer<T> cycle, object name)
Applies a polynomial decay to the learning rate. It is commonly observed that a monotonically decreasing learning rate, whose degree of change is carefully chosen, results in a better performing model. This function applies a polynomial decay function to a provided initial `learning_rate` to reach an `end_learning_rate` in the given `decay_steps`. It requires a `global_step` value to compute the decayed learning rate; you can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate, computed as shown in the formula below. If `cycle` is True, then a multiple of `decay_steps` is used: the first one that is bigger than `global_step`. For example, to decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5), see the usage sketch under `polynomial_decay` above.
Parameters
-
object
learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
-
object
global_step - A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation. Must not be negative.
-
object
decay_steps - A scalar `int32` or `int64` `Tensor` or a Python number. Must be positive. See the decay computation above.
-
ImplicitContainer<T>
end_learning_rate - A scalar `float32` or `float64` `Tensor` or a Python number. The minimal end learning rate.
-
ImplicitContainer<T>
power - A scalar `float32` or `float64` `Tensor` or a Python number. The power of the polynomial. Defaults to linear, 1.0.
-
ImplicitContainer<T>
cycle - A boolean, whether or not it should cycle beyond decay_steps.
-
object
name - String. Optional name of the operation. Defaults to 'PolynomialDecay'.
Returns
-
object
- A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.
Show Example
global_step = min(global_step, decay_steps)
decayed_learning_rate = (learning_rate - end_learning_rate) *
                        (1 - global_step / decay_steps) ^ (power) +
                        end_learning_rate
object range_input_producer(int limit, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string name)
Produces the integers from 0 to limit-1 in a queue. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates a local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
int
limit - An int32 scalar tensor.
-
Nullable<int>
num_epochs - An integer (optional). If specified, `range_input_producer` produces each integer `num_epochs` times before generating an OutOfRange error. If not specified, `range_input_producer` can cycle through the integers an unlimited number of times.
-
bool
shuffle - Boolean. If true, the integers are randomly shuffled within each epoch.
-
Nullable<int>
seed - An integer (optional). Seed used if shuffle == True.
-
int
capacity - An integer. Sets the queue capacity.
-
string
shared_name - (optional). If set, this queue will be shared under the given name across multiple sessions.
-
string
name - A name for the operations (optional).
Returns
-
object
- A Queue with the output integers. A `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
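A sketch of the suggested tf.data replacement (using the compat.v1 one-shot-iterator API, which matches this document's era; like the deprecated producer, it raises OutOfRange when `num_epochs` is exhausted):
dataset = tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs)
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_index = iterator.get_next()  # yields 0..limit-1, reshuffled each epoch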
object range_input_producer_dyn(object limit, object num_epochs, ImplicitContainer<T> shuffle, object seed, ImplicitContainer<T> capacity, object shared_name, object name)
Produces the integers from 0 to limit-1 in a queue. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates a local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
object
limit - An int32 scalar tensor.
-
object
num_epochs - An integer (optional). If specified, `range_input_producer` produces each integer `num_epochs` times before generating an OutOfRange error. If not specified, `range_input_producer` can cycle through the integers an unlimited number of times.
-
ImplicitContainer<T>
shuffle - Boolean. If true, the integers are randomly shuffled within each epoch.
-
object
seed - An integer (optional). Seed used if shuffle == True.
-
ImplicitContainer<T>
capacity - An integer. Sets the queue capacity.
-
object
shared_name - (optional). If set, this queue will be shared under the given name across multiple sessions.
-
object
name - A name for the operations (optional).
Returns
-
object
- A Queue with the output integers. A `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
void remove_checkpoint(Byte[] checkpoint_prefix, ImplicitContainer<T> checkpoint_format_version, string meta_graph_suffix)
Removes a checkpoint given by `checkpoint_prefix`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
Parameters
-
Byte[]
checkpoint_prefix - The prefix of a V1 or V2 checkpoint. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
-
ImplicitContainer<T>
checkpoint_format_version - `SaverDef.CheckpointFormatVersion`, defaults to `SaverDef.V2`.
-
string
meta_graph_suffix - Suffix for `MetaGraphDef` file. Defaults to 'meta'.
void remove_checkpoint(IEnumerable<object> checkpoint_prefix, ImplicitContainer<T> checkpoint_format_version, string meta_graph_suffix)
Removes a checkpoint given by `checkpoint_prefix`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
Parameters
-
IEnumerable<object>
checkpoint_prefix - The prefix of a V1 or V2 checkpoint. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
-
ImplicitContainer<T>
checkpoint_format_version - `SaverDef.CheckpointFormatVersion`, defaults to `SaverDef.V2`.
-
string
meta_graph_suffix - Suffix for `MetaGraphDef` file. Defaults to 'meta'.
void remove_checkpoint(string checkpoint_prefix, ImplicitContainer<T> checkpoint_format_version, string meta_graph_suffix)
Removes a checkpoint given by `checkpoint_prefix`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
Parameters
-
string
checkpoint_prefix - The prefix of a V1 or V2 checkpoint. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
-
ImplicitContainer<T>
checkpoint_format_version - `SaverDef.CheckpointFormatVersion`, defaults to `SaverDef.V2`.
-
string
meta_graph_suffix - Suffix for `MetaGraphDef` file. Defaults to 'meta'.
object remove_checkpoint_dyn(object checkpoint_prefix, ImplicitContainer<T> checkpoint_format_version, ImplicitContainer<T> meta_graph_suffix)
Removes a checkpoint given by `checkpoint_prefix`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
Parameters
-
object
checkpoint_prefix - The prefix of a V1 or V2 checkpoint. Typically the result of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or V1/V2.
-
ImplicitContainer<T>
checkpoint_format_version - `SaverDef.CheckpointFormatVersion`, defaults to `SaverDef.V2`.
-
ImplicitContainer<T>
meta_graph_suffix - Suffix for `MetaGraphDef` file. Defaults to 'meta'.
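A sketch of the suggested replacement using standard file APIs (here `tf.io.gfile`, which also works for non-local filesystems; the prefix shown is hypothetical):
checkpoint_prefix = "/tmp/model.ckpt-1000"  # hypothetical prefix
for path in tf.io.gfile.glob(checkpoint_prefix + ".*"):
    tf.io.gfile.remove(path)  # deletes the .index, .data-*, and .meta files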
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, IDictionary<object, object> cluster, IEnumerable<string> ps_ops, GreedyLoadBalancingStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to automatically assign devices to `Operation` objects as they are constructed. Device constraints are added from the inner-most context first, working outwards; the merging behavior adds constraints to fields that are not yet set by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op. Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used to do more intelligent placement, such as
tf.contrib.training.GreedyLoadBalancingStrategy
. For an example, see the code below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them: a device field is only set when the corresponding constraint is completely unset by an inner context.
-
IDictionary<object, object>
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
GreedyLoadBalancingStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, IDictionary<object, object> cluster, IEnumerable<string> ps_ops, _OpRoundRobinStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to automatically assign devices to `Operation` objects as they are constructed. Device constraints are added from the inner-most context first, working outwards; the merging behavior adds constraints to fields that are not yet set by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op. Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used to do more intelligent placement, such as
tf.contrib.training.GreedyLoadBalancingStrategy
. For an example, see the code below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
IDictionary<object, object>
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
_OpRoundRobinStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, IDictionary<object, object> cluster, IEnumerable<string> ps_ops, RandomStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
IDictionary<object, object>
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
RandomStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, ValueTuple<IDictionary<object, object>, PythonClassContainer> cluster, IEnumerable<string> ps_ops, GreedyLoadBalancingStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
ValueTuple<IDictionary<object, object>, PythonClassContainer>
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
GreedyLoadBalancingStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, ValueTuple<IDictionary<object, object>, PythonClassContainer> cluster, IEnumerable<string> ps_ops, RandomStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
ValueTuple<IDictionary<object, object>, PythonClassContainer>
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
RandomStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, ClusterSpec cluster, IEnumerable<string> ps_ops, _OpRoundRobinStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
ClusterSpec
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
_OpRoundRobinStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, ValueTuple<IDictionary<object, object>, PythonClassContainer> cluster, IEnumerable<string> ps_ops, _OpRoundRobinStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
ValueTuple<IDictionary<object, object>, PythonClassContainer>
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
_OpRoundRobinStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, ClusterSpec cluster, IEnumerable<string> ps_ops, GreedyLoadBalancingStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
ClusterSpec
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
GreedyLoadBalancingStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
PythonFunctionContainer replica_device_setter(int ps_tasks, string ps_device, string worker_device, bool merge_devices, ClusterSpec cluster, IEnumerable<string> ps_ops, RandomStrategy ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
int
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
string
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
string
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
bool
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
ClusterSpec
cluster - `ClusterDef` proto or `ClusterSpec`.
-
IEnumerable<string>
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
RandomStrategy
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
PythonFunctionContainer
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
object replica_device_setter_dyn(ImplicitContainer<T> ps_tasks, ImplicitContainer<T> ps_device, ImplicitContainer<T> worker_device, ImplicitContainer<T> merge_devices, object cluster, object ps_ops, object ps_strategy)
Return a `device function` to use when building a Graph for replicas. Device functions are used in a `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards; the merging behavior adds constraints only to fields that are still
unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
Otherwise, the value of `ps_tasks` is derived from `cluster`. By default, only Variable ops are placed on ps tasks, and the placement
strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
for more intelligent placement, such as
`tf.contrib.training.GreedyLoadBalancingStrategy`. See the example below.
Parameters
-
ImplicitContainer<T>
ps_tasks - Number of tasks in the `ps` job. Ignored if `cluster` is provided.
-
ImplicitContainer<T>
ps_device - String. Device of the `ps` job. If empty, no `ps` job is used. Defaults to `ps`.
-
ImplicitContainer<T>
worker_device - String. Device of the `worker` job. If empty, no `worker` job is used.
-
ImplicitContainer<T>
merge_devices - `Boolean`. If `True`, merges device specifications rather than overriding them; if `False`, a device is set only when the device constraint is completely unset.
-
object
cluster - `ClusterDef` proto or `ClusterSpec`.
-
object
ps_ops - List of strings representing `Operation` types that need to be placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.
-
object
ps_strategy - A callable invoked for every ps `Operation` (i.e. matched by `ps_ops`), that takes the `Operation` and returns the ps task index to use. If `None`, defaults to a round-robin strategy across all `ps` devices.
Returns
-
object
- A function to pass to `tf.device()`.
Show Example
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
Tensor sdca_fprint(IGraphNodeBase input, string name)
Computes fingerprints of the input strings.
Parameters
-
IGraphNodeBase
input - A `Tensor` of type `string`. a vector of strings to compute fingerprints on.
-
string
name - A name for the operation (optional).
Returns
-
Tensor
- A `Tensor` of type `int64`.
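As a minimal sketch (TF 1.x graph mode assumed; the key strings are illustrative), the op maps a vector of strings to stable `int64` fingerprints:

# Sketch: fingerprint example keys, e.g. for use with the SDCA ops below.
import tensorflow as tf  # TF 1.x assumed

keys = tf.constant(["example_0", "example_1"])
fprints = tf.train.sdca_fprint(keys)  # `int64` Tensor of fingerprints

with tf.Session() as sess:
  print(sess.run(fprints))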
object sdca_fprint_dyn(object input, object name)
Computes fingerprints of the input strings.
Parameters
-
object
input - A `Tensor` of type `string`. a vector of strings to compute fingerprints on.
-
object
name - A name for the operation (optional).
Returns
-
object
- A `Tensor` of type `int64`.
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, int loss_type, double l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index of a feature that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
int
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
double
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
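To make the argument layout concrete, here is a hedged, dense-only sketch of a single call (assuming a TF 1.x runtime; the shapes, labels, and hyperparameters are illustrative assumptions, not a canonical training loop):

# Sketch only: one SDCA step over one dense feature group, no sparse features.
import tensorflow as tf  # TF 1.x assumed

num_examples = 4
dense_features = [tf.constant([[1.0, 2.0], [0.5, 1.5],
                               [2.0, 0.1], [0.3, 0.9]])]  # one [N, 2] group
dense_weights = [tf.zeros([2])]  # current weights for that group

out_state, delta_sparse, delta_dense = tf.train.sdca_optimizer(
    sparse_example_indices=[], sparse_feature_indices=[],
    sparse_feature_values=[], dense_features=dense_features,
    example_weights=tf.ones([num_examples]),
    example_labels=tf.constant([0.0, 1.0, 1.0, 0.0]),
    sparse_indices=[], sparse_weights=[], dense_weights=dense_weights,
    example_state_data=tf.zeros([num_examples, 4]),  # per-example dual state
    loss_type="logistic_loss", l1=0.0, l2=1.0,
    num_loss_partitions=1, num_inner_iterations=1)
# `delta_dense` holds the weight updates to apply to `dense_weights`.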
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, int loss_type, int l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index of a feature that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
int
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
int
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, bool loss_type, string l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index of a feature that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
bool
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
string
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, bool loss_type, double l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index of a feature that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
bool
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
double
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, bool loss_type, bool l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index of a feature that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
bool
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
bool
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, double loss_type, int l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index of a feature that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
double
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
int
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, int loss_type, bool l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index of a feature that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
int
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
bool
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, double loss_type, string l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is
strongly convex, the optimizer optimizes the dual objective at each step. The
optimizer applies each update one example at a time. Examples are sampled
uniformly; the optimizer is learning-rate free and enjoys a linear convergence
rate.

The loss objective is
$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$
References:
- [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
- [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
- [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
double
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
string
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
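For reference, the per-example losses $f_{i}$ selected by `loss_type` take their standard textbook forms; these definitions are not spelled out on this page. With label $y_i$ and prediction $z = w x_i$:
$$f_i(z) = \begin{cases} \log\left(1 + e^{-y_i z}\right) & \text{logistic\_loss} \\ \tfrac{1}{2}\,(z - y_i)^{2} & \text{squared\_loss} \\ \max\left(0,\; 1 - y_i z\right) & \text{hinge\_loss} \end{cases}$$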
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, int loss_type, string l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
int
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
string
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, string loss_type, bool l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
string
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
bool
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, bool loss_type, int l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
bool
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
int
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, string loss_type, double l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
string
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
double
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, double loss_type, bool l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
double
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
bool
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, string loss_type, string l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
string
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
string
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, double loss_type, double l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
double
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
double
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer(IEnumerable<object> sparse_example_indices, IEnumerable<object> sparse_feature_indices, IEnumerable<object> sparse_feature_values, IEnumerable<IGraphNodeBase> dense_features, IGraphNodeBase example_weights, IGraphNodeBase example_labels, IEnumerable<object> sparse_indices, IEnumerable<IGraphNodeBase> sparse_weights, IEnumerable<IGraphNodeBase> dense_weights, IGraphNodeBase example_state_data, string loss_type, int l1, object l2, object num_loss_partitions, int num_inner_iterations, ImplicitContainer<T> adaptative, string name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
IEnumerable<object>
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
IEnumerable<object>
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
IEnumerable<object>
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
IEnumerable<IGraphNodeBase>
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
IGraphNodeBase
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
IGraphNodeBase
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
IEnumerable<object>
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
IEnumerable<IGraphNodeBase>
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
IEnumerable<IGraphNodeBase>
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
IGraphNodeBase
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
string
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
int
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
int
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
string
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
object sdca_optimizer_dyn(object sparse_example_indices, object sparse_feature_indices, object sparse_feature_values, object dense_features, object example_weights, object example_labels, object sparse_indices, object sparse_weights, object dense_weights, object example_state_data, object loss_type, object l1, object l2, object num_loss_partitions, object num_inner_iterations, ImplicitContainer<T> adaptative, object name)
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate.
$$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2}\lVert w\rVert^{2} + l_1\lVert w\rVert_{1}$$
References:
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf). Shai Shalev-Shwartz, Tong Zhang. 2012.
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015.
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053). Dominik Csiba, Zheng Qu, Peter Richtarik. 2015.
Parameters
-
object
sparse_example_indices - A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices.
-
object
sparse_feature_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices.
-
object
sparse_feature_values - A list of `Tensor` objects with type `float32`. a list of vectors which contain the feature values associated with each feature group.
-
object
dense_features - A list of `Tensor` objects with type `float32`. a list of matrices which contain the dense feature values.
-
object
example_weights - A `Tensor` of type `float32`. a vector which contains the weight associated with each example.
-
object
example_labels - A `Tensor` of type `float32`. a vector which contains the label/target associated with each example.
-
object
sparse_indices - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
-
object
sparse_weights - A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group.
-
object
dense_weights - A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group.
-
object
example_state_data - A `Tensor` of type `float32`. a list of vectors containing the example state data.
-
object
loss_type - A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
-
object
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength.
-
object
num_loss_partitions - An `int` that is `>= 1`. Number of partitions of the global loss function.
-
object
num_inner_iterations - An `int` that is `>= 1`. Number of iterations per mini-batch.
-
ImplicitContainer<T>
adaptative - An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop.
-
object
name - A name for the operation (optional).
Returns
-
object
- A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).
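All of the `sdca_optimizer` overloads above describe the same underlying op; they differ only in the static types this binding accepts for arguments such as `loss_type` and `l1`. For orientation, here is a minimal dense-only sketch in the Python style of this page's examples. It assumes the TF 1.x wrapper `tf.train.sdca_optimizer` is available under `tensorflow.compat.v1` and that the example state holds four float32 values per example; all tensor values are hypothetical placeholders, not taken from this page.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

num_examples = 4
# One dense feature group: a [num_examples, 1] matrix and its [1] weight vector.
dense_features = [tf.constant([[1.0], [2.0], [3.0], [4.0]])]
dense_weights = [tf.zeros([1])]
example_weights = tf.ones([num_examples])
example_labels = tf.constant([0.0, 1.0, 1.0, 0.0])
example_state_data = tf.zeros([num_examples, 4])  # assumed per-example state layout

out_state, delta_sparse, delta_dense = tf.train.sdca_optimizer(
    sparse_example_indices=[], sparse_feature_indices=[],
    sparse_feature_values=[], dense_features=dense_features,
    example_weights=example_weights, example_labels=example_labels,
    sparse_indices=[], sparse_weights=[], dense_weights=dense_weights,
    example_state_data=example_state_data,
    loss_type="logistic_loss", l1=0.0, l2=1.0,
    num_loss_partitions=1, num_inner_iterations=1)
The returned deltas are applied to the corresponding weights by the caller (for example with `tf.compat.v1.assign_add`), and `out_state` is fed back as `example_state_data` on the next call.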
object sdca_shrink_l1(IEnumerable<IGraphNodeBase> weights, int l1, double l2, string name)
Applies an L1 regularization shrink step to the parameters.
Parameters
-
IEnumerable<IGraphNodeBase>
weights - A list of `Tensor` objects with type mutable `float32`. a list of vectors where each value is the weight associated with a feature group.
-
int
l1 - A `float`. Symmetric l1 regularization strength.
-
double
l2 - A `float`. Symmetric l2 regularization strength. Should be a positive float.
-
string
name - A name for the operation (optional).
Returns
-
object
- The created Operation.
object sdca_shrink_l1(IEnumerable<IGraphNodeBase> weights, int l1, int l2, string name)
Applies an L1 regularization shrink step to the parameters.
Parameters
-
IEnumerable<IGraphNodeBase>
weights - A list of `Tensor` objects with type mutable `float32`. a list of vectors where each value is the weight associated with a feature group.
-
int
l1 - A `float`. Symmetric l1 regularization strength.
-
int
l2 - A `float`. Symmetric l2 regularization strength. Should be a positive float.
-
string
name - A name for the operation (optional).
Returns
-
object
- The created Operation.
object sdca_shrink_l1_dyn(object weights, object l1, object l2, object name)
Applies an L1 regularization shrink step to the parameters.
Parameters
-
object
weights - A list of `Tensor` objects with type mutable `float32`. a list of vectors where each value is the weight associated with a feature group.
-
object
l1 - A `float`. Symmetric l1 regularization strength.
-
object
l2 - A `float`. Symmetric l2 regularization strength. Should be a positive float.
-
object
name - A name for the operation (optional).
Returns
-
object
- The created Operation.
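A companion sketch for the shrink step, under the same assumptions as the example above (Python style, hypothetical values). Because the op requires mutable `float32` tensors and updates them in place, `weights` must be TF1 reference variables rather than resource variables:
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Reference (non-resource) variable so the op can update it in place.
weights = [tf.Variable([0.5, -0.2, 0.0], use_resource=False)]
shrink = tf.train.sdca_shrink_l1(weights, l1=0.1, l2=1.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(shrink)               # applies the L1 shrink step to `weights`
    print(sess.run(weights[0]))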
object shuffle_batch(IEnumerable<object> tensors, int batch_size, int capacity, int min_after_dequeue, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Creates batches by randomly shuffling tensors. (deprecated)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.shuffle(min_after_dequeue).batch(batch_size)`.
This function adds the following to the current `Graph`:
* A shuffling queue into which tensors from `tensors` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to the `QUEUE_RUNNER` collection, to enqueue the tensors from `tensors`.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single example. An input tensor with shape `[x, y, z]` will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors` should have the same size in the first dimension. If an input tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching it yourself.
*N.B.:* You must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `allow_smaller_final_batch` is `True`, a batch smaller than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed batch_size would fail.
Parameters
-
IEnumerable<object>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the types as `tensors`.
Show Example
# Creates batches of 32 images and 32 labels.
image_batch, label_batch = tf.compat.v1.train.shuffle_batch(
    [single_image, single_label],
    batch_size=32,
    num_threads=4,
    capacity=50000,
    min_after_dequeue=10000)
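For comparison, a sketch of the `tf.data` replacement named in the deprecation notice above; `images` and `labels` are hypothetical in-memory tensors standing in for `single_image` / `single_label`:
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.shuffle(buffer_size=10000).batch(32)  # shuffle buffer ~ min_after_dequeue
image_batch, label_batch = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
No queue runners or threads need to be managed in this version.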
object shuffle_batch(IDictionary<string, string> tensors, int batch_size, int capacity, int min_after_dequeue, int num_threads, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Creates batches by randomly shuffling tensors. (deprecated)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.shuffle(min_after_dequeue).batch(batch_size)`.
This function adds the following to the current `Graph`:
* A shuffling queue into which tensors from `tensors` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to the `QUEUE_RUNNER` collection, to enqueue the tensors from `tensors`.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single example. An input tensor with shape `[x, y, z]` will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors` should have the same size in the first dimension. If an input tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching it yourself.
*N.B.:* You must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `allow_smaller_final_batch` is `True`, a batch smaller than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed batch_size would fail.
Parameters
-
IDictionary<string, string>
tensors - The list or dictionary of tensors to enqueue.
-
int
batch_size - The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
int
num_threads - The number of threads enqueuing `tensor_list`.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the types as `tensors`.
Show Example
# Creates batches of 32 images and 32 labels.
image_batch, label_batch = tf.compat.v1.train.shuffle_batch(
    [single_image, single_label],
    batch_size=32,
    num_threads=4,
    capacity=50000,
    min_after_dequeue=10000)
object shuffle_batch_dyn(object tensors, object batch_size, object capacity, object min_after_dequeue, ImplicitContainer<T> num_threads, object seed, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Creates batches by randomly shuffling tensors. (deprecated)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.shuffle(min_after_dequeue).batch(batch_size)`.
This function adds the following to the current `Graph`:
* A shuffling queue into which tensors from `tensors` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to the `QUEUE_RUNNER` collection, to enqueue the tensors from `tensors`.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single example. An input tensor with shape `[x, y, z]` will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors` should have the same size in the first dimension. If an input tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching it yourself.
*N.B.:* You must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `allow_smaller_final_batch` is `True`, a batch smaller than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `shape` property, will have a first `Dimension` value of `None`, and operations that depend on a fixed batch_size would fail.
Parameters
-
object
tensors - The list or dictionary of tensors to enqueue.
-
object
batch_size - The new batch size pulled from the queue.
-
object
capacity - An integer. The maximum number of elements in the queue.
-
object
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
ImplicitContainer<T>
num_threads - The number of threads enqueuing `tensor_list`.
-
object
seed - Seed for the random shuffling within the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensor_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensor_list`.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the types as `tensors`.
Show Example
# Creates batches of 32 images and 32 labels.
image_batch, label_batch = tf.compat.v1.train.shuffle_batch(
    [single_image, single_label],
    batch_size=32,
    num_threads=4,
    capacity=50000,
    min_after_dequeue=10000)
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, int batch_size, int capacity, int min_after_dequeue, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
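Per the deprecation notice, the multi-reader pattern maps onto `tf.data.Dataset.interleave`. A minimal sketch of the suggested replacement, assuming two hypothetical TFRecord files and illustrative shuffle and batch sizes:
import tensorflow as tf

filenames = ["train-0.tfrecord", "train-1.tfrecord"]  # hypothetical input files
dataset = (tf.data.Dataset.from_tensor_slices(filenames)
           # interleave plays the role of the per-file enqueuing threads
           .interleave(tf.data.TFRecordDataset, cycle_length=2)
           .shuffle(10000)  # plays the role of min_after_dequeue
           .batch(32))      # plays the role of batch_size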
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, int batch_size, int capacity, int min_after_dequeue, Nullable<int> seed, Nullable<int> enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
Nullable<int>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, IGraphNodeBase batch_size, int capacity, int min_after_dequeue, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IGraphNodeBase
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, int batch_size, int capacity, IGraphNodeBase min_after_dequeue, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
IGraphNodeBase
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, IGraphNodeBase batch_size, int capacity, int min_after_dequeue, Nullable<int> seed, Nullable<int> enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IGraphNodeBase
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
int
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
Nullable<int>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, IGraphNodeBase batch_size, int capacity, IGraphNodeBase min_after_dequeue, Nullable<int> seed, bool enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IGraphNodeBase
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
IGraphNodeBase
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
bool
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, IGraphNodeBase batch_size, int capacity, IGraphNodeBase min_after_dequeue, Nullable<int> seed, Nullable<int> enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
IGraphNodeBase
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
IGraphNodeBase
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
Nullable<int>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object shuffle_batch_join(IEnumerable<IDictionary<string, string>> tensors_list, int batch_size, int capacity, IGraphNodeBase min_after_dequeue, Nullable<int> seed, Nullable<int> enqueue_many, object shapes, bool allow_smaller_final_batch, string shared_name, string name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
IEnumerable<IDictionary<string, string>>
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
int
batch_size - An integer. The new batch size pulled from the queue.
-
int
capacity - An integer. The maximum number of elements in the queue.
-
IGraphNodeBase
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
Nullable<int>
seed - Seed for the random shuffling within the queue.
-
Nullable<int>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
bool
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
object shuffle_batch_join_dyn(object tensors_list, object batch_size, object capacity, object min_after_dequeue, object seed, ImplicitContainer<T> enqueue_many, object shapes, ImplicitContainer<T> allow_smaller_final_batch, object shared_name, object name)
Create batches by randomly shuffling tensors. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`. This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`: * A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`. `len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true. If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`. If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to
grow the queues. The returned operation is a dequeue operation and will throw
tf.errors.OutOfRangeError
if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching it yourself. If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Parameters
-
object
tensors_list - A list of tuples or dictionaries of tensors to enqueue.
-
object
batch_size - An integer. The new batch size pulled from the queue.
-
object
capacity - An integer. The maximum number of elements in the queue.
-
object
min_after_dequeue - Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
-
object
seed - Seed for the random shuffling within the queue.
-
ImplicitContainer<T>
enqueue_many - Whether each tensor in `tensors_list` is a single example.
-
object
shapes - (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
-
ImplicitContainer<T>
allow_smaller_final_batch - (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
object
name - (Optional) A name for the operations.
Returns
-
object
- A list or dictionary of tensors with the same number and types as `tensors_list[i]`.
IList<Tensor> slice_input_producer(IEnumerable<IGraphNodeBase> tensor_list, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string name)
Produces a slice of each `Tensor` in `tensor_list`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Implemented using a Queue -- a `QueueRunner` for the Queue
is added to the current `Graph`'s `QUEUE_RUNNER` collection.
Parameters
-
IEnumerable<IGraphNodeBase>
tensor_list - A list of `Tensor` objects. Every `Tensor` in `tensor_list` must have the same size in the first dimension.
-
Nullable<int>
num_epochs - An integer (optional). If specified, `slice_input_producer` produces each slice `num_epochs` times before generating an `OutOfRange` error. If not specified, `slice_input_producer` can cycle through the slices an unlimited number of times.
-
bool
shuffle - Boolean. If true, the integers are randomly shuffled within each epoch.
-
Nullable<int>
seed - An integer (optional). Seed used if shuffle == True.
-
int
capacity - An integer. Sets the queue capacity.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
string
name - A name for the operations (optional).
Returns
-
IList<Tensor>
- A list of tensors, one for each element of `tensor_list`. If the tensor in `tensor_list` has shape `[N, a, b,.., z]`, then the corresponding output tensor will have shape `[a, b,..., z]`.
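A minimal tf.data sketch of the replacement named in the deprecation notice, assuming hypothetical in-memory tensors with the same first dimension:
import tensorflow as tf

features = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
labels = tf.constant([0, 1, 0])
# Each dataset element is one slice (row) of every input tensor,
# mirroring what slice_input_producer dequeues.
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(3)   # omit when shuffle=False
           .repeat(2))   # plays the role of num_epochs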
object slice_input_producer_dyn(object tensor_list, object num_epochs, ImplicitContainer<T> shuffle, object seed, ImplicitContainer<T> capacity, object shared_name, object name)
Produces a slice of each `Tensor` in `tensor_list`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Implemented using a Queue -- a `QueueRunner` for the Queue
is added to the current `Graph`'s `QUEUE_RUNNER` collection.
Parameters
-
object
tensor_list - A list of `Tensor` objects. Every `Tensor` in `tensor_list` must have the same size in the first dimension.
-
object
num_epochs - An integer (optional). If specified, `slice_input_producer` produces each slice `num_epochs` times before generating an `OutOfRange` error. If not specified, `slice_input_producer` can cycle through the slices an unlimited number of times.
-
ImplicitContainer<T>
shuffle - Boolean. If true, the integers are randomly shuffled within each epoch.
-
object
seed - An integer (optional). Seed used if shuffle == True.
-
ImplicitContainer<T>
capacity - An integer. Sets the queue capacity.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions.
-
object
name - A name for the operations (optional).
Returns
-
object
- A list of tensors, one for each element of `tensor_list`. If the tensor in `tensor_list` has shape `[N, a, b,.., z]`, then the corresponding output tensor will have shape `[a, b,..., z]`.
IList<object> start_queue_runners(MonitoredSession sess, Coordinator coord, bool daemon, bool start, ImplicitContainer<T> collection)
Starts all queue runners collected in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the
tf.data
module. This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
Parameters
-
MonitoredSession
sess - `Session` used to run the queue ops. Defaults to the default session.
-
Coordinator
coord - Optional `Coordinator` for coordinating the started threads.
-
bool
daemon - Whether the threads should be marked as `daemons`, meaning they don't block program exit.
-
bool
start - Set to `False` to only create the threads, not start them.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
Returns
-
IList<object>
- A list of threads.
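In typical use, start_queue_runners is paired with a `Coordinator` so the helper threads can be stopped and joined cleanly. A minimal sketch, assuming a graph whose queue runners were registered by the batching functions above and a hypothetical `train_op`:
import tensorflow.compat.v1 as tf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            sess.run(train_op)  # hypothetical training step
    except tf.errors.OutOfRangeError:
        pass  # input queues are exhausted
    finally:
        coord.request_stop()
        coord.join(threads)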
IList<object> start_queue_runners(string sess, Coordinator coord, bool daemon, bool start, ImplicitContainer<T> collection)
Starts all queue runners collected in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the
tf.data
module. This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
Parameters
-
string
sess - `Session` used to run the queue ops. Defaults to the default session.
-
Coordinator
coord - Optional `Coordinator` for coordinating the started threads.
-
bool
daemon - Whether the threads should be marked as `daemons`, meaning they don't block program exit.
-
bool
start - Set to `False` to only create the threads, not start them.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
Returns
-
IList<object>
- A list of threads.
IList<object> start_queue_runners(Session sess, Coordinator coord, bool daemon, bool start, ImplicitContainer<T> collection)
Starts all queue runners collected in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the
tf.data
module. This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
Parameters
-
Session
sess - `Session` used to run the queue ops. Defaults to the default session.
-
Coordinator
coord - Optional `Coordinator` for coordinating the started threads.
-
bool
daemon - Whether the threads should be marked as `daemons`, meaning they don't block program exit.
-
bool
start - Set to `False` to only create the threads, not start them.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
Returns
-
IList<object>
- A list of threads.
IList<object> start_queue_runners(_CoordinatedSession sess, Coordinator coord, bool daemon, bool start, ImplicitContainer<T> collection)
Starts all queue runners collected in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the
tf.data
module. This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
Parameters
-
_CoordinatedSession
sess - `Session` used to run the queue ops. Defaults to the default session.
-
Coordinator
coord - Optional `Coordinator` for coordinating the started threads.
-
bool
daemon - Whether the threads should be marked as `daemons`, meaning they don't block program exit.
-
bool
start - Set to `False` to only create the threads, not start them.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
Returns
-
IList<object>
- A list of threads.
IList<object> start_queue_runners(LocalCLIDebugWrapperSession sess, Coordinator coord, bool daemon, bool start, ImplicitContainer<T> collection)
Starts all queue runners collected in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the
tf.data
module. This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
Parameters
-
LocalCLIDebugWrapperSession
sess - `Session` used to run the queue ops. Defaults to the default session.
-
Coordinator
coord - Optional `Coordinator` for coordinating the started threads.
-
bool
daemon - Whether the threads should be marked as `daemons`, meaning they don't block program exit.
-
bool
start - Set to `False` to only create the threads, not start them.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
Returns
-
IList<object>
- A list of threads.
object start_queue_runners_dyn(object sess, object coord, ImplicitContainer<T> daemon, ImplicitContainer<T> start, ImplicitContainer<T> collection)
Starts all queue runners collected in the graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the
tf.data
module. This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
Parameters
-
object
sess - `Session` used to run the queue ops. Defaults to the default session.
-
object
coord - Optional `Coordinator` for coordinating the started threads.
-
ImplicitContainer<T>
daemon - Whether the threads should be marked as `daemons`, meaning they don't block program exit.
-
ImplicitContainer<T>
start - Set to `False` to only create the threads, not start them.
-
ImplicitContainer<T>
collection - A `GraphKey` specifying the graph collection to get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
Returns
-
object
- A list of threads.
object string_input_producer(IGraphNodeBase string_tensor, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string name, object cancel_op)
Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates a local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
IGraphNodeBase
string_tensor - A 1-D string tensor with the strings to produce.
-
Nullable<int>
num_epochs - An integer (optional). If specified, `string_input_producer` produces each string from `string_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `string_input_producer` can cycle through the strings in `string_tensor` an unlimited number of times.
-
bool
shuffle - Boolean. If true, the strings are randomly shuffled within each epoch.
-
Nullable<int>
seed - An integer (optional). Seed used if shuffle == True.
-
int
capacity - An integer. Sets the queue capacity.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions. All sessions open to the device which has this queue will be able to access it via the shared_name. Using this in a distributed setting means each name will only be seen by one of the sessions which has access to this operation.
-
string
name - A name for the operations (optional).
-
object
cancel_op - Cancel op for the queue (optional).
Returns
-
object
- A queue with the output strings. A `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
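The classic use is feeding filenames to a reader op. A minimal sketch, assuming two hypothetical CSV files and the v1 `TextLineReader`; because `num_epochs` is set, `local_variables_initializer()` must be run first, per the note above:
import tensorflow.compat.v1 as tf

filename_queue = tf.train.string_input_producer(
    ["file0.csv", "file1.csv"], num_epochs=1, shuffle=True)
reader = tf.TextLineReader()
# Each read dequeues a filename as needed and returns one line of that file.
key, value = reader.read(filename_queue)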
object string_input_producer(IEnumerable<object> string_tensor, Nullable<int> num_epochs, bool shuffle, Nullable<int> seed, int capacity, string shared_name, string name, object cancel_op)
Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates a local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
IEnumerable<object>
string_tensor - A 1-D string tensor with the strings to produce.
-
Nullable<int>
num_epochs - An integer (optional). If specified, `string_input_producer` produces each string from `string_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `string_input_producer` can cycle through the strings in `string_tensor` an unlimited number of times.
-
bool
shuffle - Boolean. If true, the strings are randomly shuffled within each epoch.
-
Nullable<int>
seed - An integer (optional). Seed used if shuffle == True.
-
int
capacity - An integer. Sets the queue capacity.
-
string
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions. All sessions open to the device which has this queue will be able to access it via the shared_name. Using this in a distributed setting means each name will only be seen by one of the sessions which has access to this operation.
-
string
name - A name for the operations (optional).
-
object
cancel_op - Cancel op for the queue (optional).
Returns
-
object
- A queue with the output strings. A `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
object string_input_producer_dyn(object string_tensor, object num_epochs, ImplicitContainer<T> shuffle, object seed, ImplicitContainer<T> capacity, object shared_name, object name, object cancel_op)
Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by
tf.data
. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. Note: if `num_epochs` is not `None`, this function creates a local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Parameters
-
object
string_tensor - A 1-D string tensor with the strings to produce.
-
object
num_epochs - An integer (optional). If specified, `string_input_producer` produces each string from `string_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `string_input_producer` can cycle through the strings in `string_tensor` an unlimited number of times.
-
ImplicitContainer<T>
shuffle - Boolean. If true, the strings are randomly shuffled within each epoch.
-
object
seed - An integer (optional). Seed used if shuffle == True.
-
ImplicitContainer<T>
capacity - An integer. Sets the queue capacity.
-
object
shared_name - (Optional) If set, this queue will be shared under the given name across multiple sessions. All sessions open to the device which has this queue will be able to access it via the shared_name. Using this in a distributed setting means each name will only be seen by one of the sessions which has access to this operation.
-
object
name - A name for the operations (optional).
-
object
cancel_op - Cancel op for the queue (optional).
Returns
-
object
- A queue with the output strings. A `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
IEnumerator<object> summary_iterator(string path)
An iterator for reading `Event` protocol buffers from an event file. You can use this function to read events written to an event file. It returns
a Python iterator that yields `Event` protocol buffers. The Show Example below prints the full contents of an events file; a sketch that prints selected summary values follows it.
See the protocol buffer definitions of
[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)
and
[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
for more information about their attributes.
Parameters
-
string
path - The path to an event file created by a `SummaryWriter`.
Show Example
for e in tf.compat.v1.train.summary_iterator(path_to_events_file):
    print(e)
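The sketch for the second example named above, printing selected summary values, assuming a hypothetical scalar summary tagged 'loss':
for e in tf.compat.v1.train.summary_iterator(path_to_events_file):
    for v in e.summary.value:
        if v.tag == 'loss':
            print(v.simple_value)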
IEnumerator<object> summary_iterator(IEnumerable<object> path)
An iterator for reading `Event` protocol buffers from an event file. You can use this function to read events written to an event file. It returns
a Python iterator that yields `Event` protocol buffers. The Show Example below prints the full contents of an events file; see the sketch under the first overload above for printing selected summary values.
See the protocol buffer definitions of
[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)
and
[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
for more information about their attributes.
Parameters
-
IEnumerable<object>
path - The path to an event file created by a `SummaryWriter`.
Show Example
for e in tf.compat.v1.train.summary_iterator(path_to_events_file):
    print(e)
object summary_iterator_dyn(object path)
An iterator for reading `Event` protocol buffers from an event file. You can use this function to read events written to an event file. It returns
a Python iterator that yields `Event` protocol buffers. The Show Example below prints the full contents of an events file; see the sketch under the first overload above for printing selected summary values.
See the protocol buffer definitions of
[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)
and
[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
for more information about their attributes.
Parameters
-
object
path - The path to an event file created by a `SummaryWriter`.
Show Example
for e in tf.compat.v1.train.summary_iterator(path_to_events_file):
    print(e)
void update_checkpoint_state(string save_dir, object model_checkpoint_path, IEnumerable<object> all_model_checkpoint_paths, object latest_filename, object all_model_checkpoint_timestamps, object last_preserved_timestamp)
Updates the content of the 'checkpoint' file. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.train.CheckpointManager
to manage checkpoints rather than manually editing the Checkpoint proto. This updates the checkpoint file containing a CheckpointState
proto.
Parameters
-
string
save_dir - Directory where the model was saved.
-
object
model_checkpoint_path - The checkpoint file.
-
IEnumerable<object>
all_model_checkpoint_paths - List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto.
-
object
latest_filename - Optional name of the checkpoint file. Defaults to 'checkpoint'.
-
object
all_model_checkpoint_timestamps - Optional list of timestamps (floats, seconds since the Epoch) indicating when the checkpoints in `all_model_checkpoint_paths` were created.
-
object
last_preserved_timestamp - A float, indicating the number of seconds since
the Epoch when the last preserved checkpoint was written, e.g. due to a
`keep_checkpoint_every_n_hours` parameter (see
tf.contrib.checkpoint.CheckpointManager
for an implementation).
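For the replacement named in the deprecation notice, a minimal tf.train.CheckpointManager sketch, assuming a hypothetical trackable `model` and checkpoint directory:
import tensorflow as tf

ckpt = tf.train.Checkpoint(model=model)  # `model` is a hypothetical trackable object
manager = tf.train.CheckpointManager(ckpt, directory="/tmp/ckpts", max_to_keep=3)
# save() writes a checkpoint and updates the 'checkpoint' state file,
# replacing manual update_checkpoint_state calls.
save_path = manager.save()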
object update_checkpoint_state_dyn(object save_dir, object model_checkpoint_path, object all_model_checkpoint_paths, object latest_filename, object all_model_checkpoint_timestamps, object last_preserved_timestamp)
Updates the content of the 'checkpoint' file. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.train.CheckpointManager
to manage checkpoints rather than manually editing the Checkpoint proto. This updates the checkpoint file containing a CheckpointState
proto.
Parameters
-
object
save_dir - Directory where the model was saved.
-
object
model_checkpoint_path - The checkpoint file.
-
object
all_model_checkpoint_paths - List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto.
-
object
latest_filename - Optional name of the checkpoint file. Defaults to 'checkpoint'.
-
object
all_model_checkpoint_timestamps - Optional list of timestamps (floats, seconds since the Epoch) indicating when the checkpoints in `all_model_checkpoint_paths` were created.
-
object
last_preserved_timestamp - A float, indicating the number of seconds since
the Epoch when the last preserved checkpoint was written, e.g. due to a
`keep_checkpoint_every_n_hours` parameter (see
tf.contrib.checkpoint.CheckpointManager
for an implementation).
void warm_start(string ckpt_to_initialize_from, IEnumerable<string> vars_to_warm_start, IDictionary<object, object> var_name_to_vocab_info, IDictionary<object, object> var_name_to_prev_var_name)
Warm-starts a model using the given settings. If you are using a tf.estimator.Estimator, this will automatically be called
during training.
Parameters
-
string
ckpt_to_initialize_from - [Required] A string specifying the directory with checkpoint file(s) or path to checkpoint from which to warm-start the model parameters.
-
IEnumerable<string>
vars_to_warm_start - [Optional] One of the following: - A regular expression (string) that captures which variables to warm-start (see tf.compat.v1.get_collection). This expression will only consider variables in the TRAINABLE_VARIABLES collection -- if you need to warm-start non_TRAINABLE vars (such as optimizer accumulators or batch norm statistics), please use the below option. - A list of strings, each a regex scope provided to tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see tf.compat.v1.get_collection). For backwards compatibility reasons, this is separate from the single-string argument type. - A list of Variables to warm-start. If you do not have access to the `Variable` objects at the call site, please use the above option. - `None`, in which case only TRAINABLE variables specified in `var_name_to_vocab_info` will be warm-started. Defaults to `'.*'`, which warm-starts all variables in the TRAINABLE_VARIABLES collection. Note that this excludes variables such as accumulators and moving statistics from batch norm.
-
IDictionary<object, object>
var_name_to_vocab_info - [Optional] Dict of variable names (strings) to
tf.estimator.VocabInfo
. The variable names should be "full" variables, not the names of the partitions. If not explicitly provided, the variable is assumed to have no (changes to) vocabulary.
-
IDictionary<object, object>
var_name_to_prev_var_name - [Optional] Dict of variable names (strings) to name of the previously-trained variable in `ckpt_to_initialize_from`. If not explicitly provided, the name of the variable is assumed to be same between previous checkpoint and current model. Note that this has no effect on the set of variables that is warm-started, and only controls name mapping (use `vars_to_warm_start` for controlling what variables to warm-start).
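A minimal usage sketch, warm-starting only variables matching a hypothetical regex from a hypothetical checkpoint directory:
import tensorflow.compat.v1 as tf

tf.train.warm_start(
    ckpt_to_initialize_from="/tmp/prev_model",  # hypothetical checkpoint dir
    vars_to_warm_start=".*dense.*")             # hypothetical variable-name regex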
void warm_start(string ckpt_to_initialize_from, string vars_to_warm_start, IDictionary<object, object> var_name_to_vocab_info, IDictionary<object, object> var_name_to_prev_var_name)
Warm-starts a model using the given settings. If you are using a tf.estimator.Estimator, this will be called automatically during training. A call sketch follows the parameters below.
Parameters
-
string
ckpt_to_initialize_from - [Required] A string specifying the directory with checkpoint file(s) or path to checkpoint from which to warm-start the model parameters.
-
string
vars_to_warm_start - [Optional] One of the following:
  - A regular expression (string) that captures which variables to warm-start (see tf.compat.v1.get_collection). This expression will only consider variables in the TRAINABLE_VARIABLES collection -- if you need to warm-start non-TRAINABLE vars (such as optimizer accumulators or batch norm statistics), please use the option below.
  - A list of strings, each a regex scope provided to tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see tf.compat.v1.get_collection). For backwards compatibility reasons, this is separate from the single-string argument type.
  - A list of Variables to warm-start. If you do not have access to the `Variable` objects at the call site, please use the option above.
  - `None`, in which case only TRAINABLE variables specified in `var_name_to_vocab_info` will be warm-started.
  Defaults to `'.*'`, which warm-starts all variables in the TRAINABLE_VARIABLES collection. Note that this excludes variables such as accumulators and moving statistics from batch norm.
-
IDictionary<object, object>
var_name_to_vocab_info - [Optional] Dict of variable names (strings) to tf.estimator.VocabInfo. The variable names should be "full" variables, not the names of the partitions. If not explicitly provided, the variable is assumed to have no (changes to) vocabulary.
-
IDictionary<object, object>
var_name_to_prev_var_name - [Optional] Dict of variable names (strings) to the name of the previously-trained variable in `ckpt_to_initialize_from`. If not explicitly provided, the name of the variable is assumed to be the same between the previous checkpoint and the current model. Note that this has no effect on the set of variables that is warm-started, and only controls name mapping (use `vars_to_warm_start` to control which variables are warm-started).
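A sketch of this single-regex overload combined with a name remapping, for the case where a layer was renamed between the previous checkpoint and the current model. All names and paths are hypothetical, and null again assumes Python-style defaults:

```csharp
using System.Collections.Generic;

// Map current variable names to their names in the previous checkpoint.
var nameMap = new Dictionary<object, object> {
    ["dense/kernel"] = "fully_connected/weights",
    ["dense/bias"]   = "fully_connected/biases",
};
// Warm-start every variable matching the regex, reading the renamed ones
// from their old names in ckpt_to_initialize_from.
tf.train.warm_start(
    ckpt_to_initialize_from: "/tmp/prev_model/model.ckpt-1000",
    vars_to_warm_start: ".*dense.*",
    var_name_to_vocab_info: null,
    var_name_to_prev_var_name: nameMap);
```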
object warm_start_dyn(object ckpt_to_initialize_from, ImplicitContainer<T> vars_to_warm_start, object var_name_to_vocab_info, object var_name_to_prev_var_name)
Warm-starts a model using the given settings. If you are using a tf.estimator.Estimator, this will be called automatically during training. A call sketch follows the parameters below.
Parameters
-
object
ckpt_to_initialize_from - [Required] A string specifying the directory with checkpoint file(s) or path to checkpoint from which to warm-start the model parameters.
-
ImplicitContainer<T>
vars_to_warm_start - [Optional] One of the following:
  - A regular expression (string) that captures which variables to warm-start (see tf.compat.v1.get_collection). This expression will only consider variables in the TRAINABLE_VARIABLES collection -- if you need to warm-start non-TRAINABLE vars (such as optimizer accumulators or batch norm statistics), please use the option below.
  - A list of strings, each a regex scope provided to tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see tf.compat.v1.get_collection). For backwards compatibility reasons, this is separate from the single-string argument type.
  - A list of Variables to warm-start. If you do not have access to the `Variable` objects at the call site, please use the option above.
  - `None`, in which case only TRAINABLE variables specified in `var_name_to_vocab_info` will be warm-started.
  Defaults to `'.*'`, which warm-starts all variables in the TRAINABLE_VARIABLES collection. Note that this excludes variables such as accumulators and moving statistics from batch norm.
-
object
var_name_to_vocab_info - [Optional] Dict of variable names (strings) to tf.estimator.VocabInfo. The variable names should be "full" variables, not the names of the partitions. If not explicitly provided, the variable is assumed to have no (changes to) vocabulary.
-
object
var_name_to_prev_var_name - [Optional] Dict of variable names (strings) to the name of the previously-trained variable in `ckpt_to_initialize_from`. If not explicitly provided, the name of the variable is assumed to be the same between the previous checkpoint and the current model. Note that this has no effect on the set of variables that is warm-started, and only controls name mapping (use `vars_to_warm_start` to control which variables are warm-started).
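A sketch of the dynamically-typed variant. The assumption here is that a plain string converts implicitly to ImplicitContainer&lt;T&gt;, so passing ".*" reproduces the default of warm-starting every TRAINABLE variable; the path is hypothetical:

```csharp
// Dynamically-typed call; returns object rather than void.
object result = tf.train.warm_start_dyn(
    "/tmp/prev_model",  // hypothetical checkpoint directory
    ".*",               // assumed implicit conversion to ImplicitContainer<T>
    null,
    null);
```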