LostTech.TensorFlow : API Documentation

Type tf.estimator.experimental

Namespace tensorflow

Public static methods

object build_raw_supervised_input_receiver_fn(object features, object labels, object default_batch_size)

Build a supervised_input_receiver_fn for raw features and labels.

This function wraps tensor placeholders in a supervised_receiver_fn with the expectation that the features and labels appear precisely as the model_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
Parameters
object features
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object labels
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object default_batch_size
The number of query examples expected per batch. Leave unset for a variable batch size (recommended).
Returns
object
A supervised_input_receiver_fn.
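To make the contract concrete, here is a minimal pure-Python sketch of the shape of the function this builder returns. Plain dicts stand in for the `tf.placeholder` tensors that the real implementation creates; all names here are illustrative, not the library's internals.

```python
# Illustrative sketch only: mimics the shape of the value returned by
# build_raw_supervised_input_receiver_fn. Plain dicts stand in for the
# placeholder tensors the real implementation creates.
def build_raw_supervised_input_receiver_fn(features, labels):
    """Return a zero-argument receiver fn closing over features and labels."""
    def supervised_input_receiver_fn():
        # The model is fed the same placeholders it hands out, so the
        # receiver tensors are just the features and labels themselves.
        receiver_tensors = {**features, **labels}
        return features, labels, receiver_tensors
    return supervised_input_receiver_fn

receiver_fn = build_raw_supervised_input_receiver_fn(
    features={"x": [1.0, 2.0]}, labels={"y": [0, 1]})
feats, labs, receiver_tensors = receiver_fn()
```

The key point the sketch captures: the builder closes over the raw features and labels, and the returned fn reproduces them, exactly as the model_fn expects, every time it is invoked during export.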

object build_raw_supervised_input_receiver_fn(object features, IEnumerable<_DatasetInitializerHook> labels, object default_batch_size)

Build a supervised_input_receiver_fn for raw features and labels.

This function wraps tensor placeholders in a supervised_receiver_fn with the expectation that the features and labels appear precisely as the model_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
Parameters
object features
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
IEnumerable<_DatasetInitializerHook> labels
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object default_batch_size
The number of query examples expected per batch. Leave unset for a variable batch size (recommended).
Returns
object
A supervised_input_receiver_fn.

object build_raw_supervised_input_receiver_fn(IEnumerable<_DatasetInitializerHook> features, object labels, object default_batch_size)

Build a supervised_input_receiver_fn for raw features and labels.

This function wraps tensor placeholders in a supervised_receiver_fn with the expectation that the features and labels appear precisely as the model_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
Parameters
IEnumerable<_DatasetInitializerHook> features
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object labels
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object default_batch_size
The number of query examples expected per batch. Leave unset for a variable batch size (recommended).
Returns
object
A supervised_input_receiver_fn.

object build_raw_supervised_input_receiver_fn(IEnumerable<_DatasetInitializerHook> features, IEnumerable<_DatasetInitializerHook> labels, object default_batch_size)

Build a supervised_input_receiver_fn for raw features and labels.

This function wraps tensor placeholders in a supervised_receiver_fn with the expectation that the features and labels appear precisely as the model_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
Parameters
IEnumerable<_DatasetInitializerHook> features
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
IEnumerable<_DatasetInitializerHook> labels
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object default_batch_size
The number of query examples expected per batch. Leave unset for a variable batch size (recommended).
Returns
object
A supervised_input_receiver_fn.

object build_raw_supervised_input_receiver_fn_dyn(object features, object labels, object default_batch_size)

Build a supervised_input_receiver_fn for raw features and labels.

This function wraps tensor placeholders in a supervised_receiver_fn with the expectation that the features and labels appear precisely as the model_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
Parameters
object features
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object labels
A dict mapping string keys to `Tensor`s, or a single `Tensor`.
object default_batch_size
The number of query examples expected per batch. Leave unset for a variable batch size (recommended).
Returns
object
A supervised_input_receiver_fn.

object dnn_logit_fn_builder(int units, IEnumerable<int> hidden_units, IEnumerable<object> feature_columns, PythonFunctionContainer activation_fn, object dropout, object input_layer_partitioner, bool batch_norm)

Function builder for a dnn logit_fn.
Parameters
int units
An int indicating the dimension of the logit layer. In the MultiHead case, this should be the sum of all component Heads' logit dimensions.
IEnumerable<int> hidden_units
Iterable of integer number of hidden units per layer.
IEnumerable<object> feature_columns
Iterable of `feature_column._FeatureColumn` model inputs.
PythonFunctionContainer activation_fn
Activation function applied to each layer.
object dropout
When not `None`, the probability that a given coordinate will be dropped out.
object input_layer_partitioner
Partitioner for input layer.
bool batch_norm
Whether to use batch normalization after each hidden layer.
Returns
object
A logit_fn.
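A toy, pure-Python sketch of the kind of logit_fn this builder produces: a stack of dense hidden layers with an activation, followed by a linear logit layer of width `units`. Random weights stand in for trained variables, and the helper names are hypothetical; the real builder constructs TensorFlow ops instead.

```python
import random

# Hypothetical stand-in for the logit_fn produced by dnn_logit_fn_builder.
def dnn_logit_fn_builder(units, hidden_units, activation_fn):
    rng = random.Random(0)

    def dense(x, out_dim):
        # Random weights stand in for trained variables.
        weights = [[rng.uniform(-0.1, 0.1) for _ in range(out_dim)]
                   for _ in x]
        return [sum(xi * weights[i][j] for i, xi in enumerate(x))
                for j in range(out_dim)]

    def logit_fn(features):
        x = features["x"]
        for width in hidden_units:            # hidden layers with activation
            x = [activation_fn(v) for v in dense(x, width)]
        return dense(x, units)                # final linear logit layer

    return logit_fn

relu = lambda v: max(0.0, v)
logit_fn = dnn_logit_fn_builder(units=3, hidden_units=[8, 4],
                                activation_fn=relu)
logits = logit_fn({"x": [1.0, 2.0, 3.0]})  # 3 logits, one per output unit
```

Note how `units` fixes only the width of the final linear layer, while `hidden_units` controls the depth and width of everything before it.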

object dnn_logit_fn_builder(int units, IEnumerable<int> hidden_units, ValueTuple feature_columns, PythonFunctionContainer activation_fn, object dropout, object input_layer_partitioner, bool batch_norm)

Function builder for a dnn logit_fn.
Parameters
int units
An int indicating the dimension of the logit layer. In the MultiHead case, this should be the sum of all component Heads' logit dimensions.
IEnumerable<int> hidden_units
Iterable of integer number of hidden units per layer.
ValueTuple feature_columns
Iterable of `feature_column._FeatureColumn` model inputs.
PythonFunctionContainer activation_fn
Activation function applied to each layer.
object dropout
When not `None`, the probability that a given coordinate will be dropped out.
object input_layer_partitioner
Partitioner for input layer.
bool batch_norm
Whether to use batch normalization after each hidden layer.
Returns
object
A logit_fn.

object dnn_logit_fn_builder(object units, IEnumerable<int> hidden_units, IEnumerable<object> feature_columns, PythonFunctionContainer activation_fn, object dropout, object input_layer_partitioner, bool batch_norm)

Function builder for a dnn logit_fn.
Parameters
object units
An int indicating the dimension of the logit layer. In the MultiHead case, this should be the sum of all component Heads' logit dimensions.
IEnumerable<int> hidden_units
Iterable of integer number of hidden units per layer.
IEnumerable<object> feature_columns
Iterable of `feature_column._FeatureColumn` model inputs.
PythonFunctionContainer activation_fn
Activation function applied to each layer.
object dropout
When not `None`, the probability that a given coordinate will be dropped out.
object input_layer_partitioner
Partitioner for input layer.
bool batch_norm
Whether to use batch normalization after each hidden layer.
Returns
object
A logit_fn.

object dnn_logit_fn_builder(object units, IEnumerable<int> hidden_units, ValueTuple feature_columns, PythonFunctionContainer activation_fn, object dropout, object input_layer_partitioner, bool batch_norm)

Function builder for a dnn logit_fn.
Parameters
object units
An int indicating the dimension of the logit layer. In the MultiHead case, this should be the sum of all component Heads' logit dimensions.
IEnumerable<int> hidden_units
Iterable of integer number of hidden units per layer.
ValueTuple feature_columns
Iterable of `feature_column._FeatureColumn` model inputs.
PythonFunctionContainer activation_fn
Activation function applied to each layer.
object dropout
When not `None`, the probability that a given coordinate will be dropped out.
object input_layer_partitioner
Partitioner for input layer.
bool batch_norm
Whether to use batch normalization after each hidden layer.
Returns
object
A logit_fn.

object dnn_logit_fn_builder_dyn(object units, object hidden_units, object feature_columns, object activation_fn, object dropout, object input_layer_partitioner, object batch_norm)

Function builder for a dnn logit_fn.
Parameters
object units
An int indicating the dimension of the logit layer. In the MultiHead case, this should be the sum of all component Heads' logit dimensions.
object hidden_units
Iterable of integer number of hidden units per layer.
object feature_columns
Iterable of `feature_column._FeatureColumn` model inputs.
object activation_fn
Activation function applied to each layer.
object dropout
When not `None`, the probability that a given coordinate will be dropped out.
object input_layer_partitioner
Partitioner for input layer.
object batch_norm
Whether to use batch normalization after each hidden layer.
Returns
object
A logit_fn.

object linear_logit_fn_builder(int units, IEnumerable<NumericColumn> feature_columns, string sparse_combiner)

Function builder for a linear logit_fn.
Parameters
int units
An int indicating the dimension of the logit layer.
IEnumerable<NumericColumn> feature_columns
An iterable containing all the feature columns used by the model.
string sparse_combiner
A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", or "sum".
Returns
object
A logit_fn.
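The `sparse_combiner` options are easiest to see in isolation. The following is a hypothetical helper (not the library's internal code) showing how the per-category weights of a multivalent categorical column are reduced under each option:

```python
import math

# Hypothetical helper: reduce one example's per-category weights to a
# single value, the way each sparse_combiner option does.
def combine(values, sparse_combiner):
    total = sum(values)
    if sparse_combiner == "sum":
        return total
    if sparse_combiner == "mean":
        return total / len(values)
    if sparse_combiner == "sqrtn":
        return total / math.sqrt(len(values))
    raise ValueError("unknown sparse_combiner: %r" % sparse_combiner)

values = [1.0, 2.0, 3.0]  # one example's weights across its categories
results = {c: combine(values, c) for c in ("sum", "mean", "sqrtn")}
```

"sqrtn" sits between "sum" and "mean": it dampens the effect of an example hitting many categories without averaging it away entirely.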

object linear_logit_fn_builder(object units, IEnumerable<NumericColumn> feature_columns, string sparse_combiner)

Function builder for a linear logit_fn.
Parameters
object units
An int indicating the dimension of the logit layer.
IEnumerable<NumericColumn> feature_columns
An iterable containing all the feature columns used by the model.
string sparse_combiner
A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", or "sum".
Returns
object
A logit_fn.

object linear_logit_fn_builder_dyn(object units, object feature_columns, ImplicitContainer<T> sparse_combiner)

Function builder for a linear logit_fn.
Parameters
object units
An int indicating the dimension of the logit layer.
object feature_columns
An iterable containing all the feature columns used by the model.
ImplicitContainer<T> sparse_combiner
A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", or "sum".
Returns
object
A logit_fn.

object make_early_stopping_hook(object estimator, PythonFunctionContainer should_stop_fn, int run_every_secs, object run_every_steps)

Creates early-stopping hook.

Returns a `SessionRunHook` that stops training when `should_stop_fn` returns `True`.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
PythonFunctionContainer should_stop_fn
`callable`, function that takes no arguments and returns a `bool`. If the function returns `True`, stopping will be initiated by the chief.
int run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
A `SessionRunHook` that periodically executes `should_stop_fn` and initiates early stopping if the function returns `True`.
Show Example
estimator = ...
hook = early_stopping.make_early_stopping_hook(
    estimator, should_stop_fn=make_stop_fn(...))
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
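The hook's polling behavior can be sketched in pure Python: `should_stop_fn` is evaluated on a schedule, and training ends at the first poll where it returns `True`. This is a hypothetical stand-in for the `SessionRunHook` machinery, not the library's implementation.

```python
# Hypothetical stand-in for the training loop with an early-stopping hook:
# poll should_stop_fn every run_every_steps steps.
def train_with_early_stopping(max_steps, run_every_steps, should_stop_fn):
    for step in range(1, max_steps + 1):
        # ... one training step would run here ...
        if step % run_every_steps == 0 and should_stop_fn():
            return step  # stopping initiated
    return max_steps

# Simulated evaluation losses seen at successive polls.
eval_losses = iter([0.9, 0.5, 0.09])
stop_step = train_with_early_stopping(
    max_steps=100, run_every_steps=10,
    should_stop_fn=lambda: next(eval_losses) < 0.1)  # stops at step 30
```

In the real hook, only the chief runs this check (per `run_every_secs` or `run_every_steps`), which is why the docs say stopping "will be initiated by the chief".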

object make_early_stopping_hook_dyn(object estimator, object should_stop_fn, ImplicitContainer<T> run_every_secs, object run_every_steps)

Creates early-stopping hook.

Returns a `SessionRunHook` that stops training when `should_stop_fn` returns `True`.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object should_stop_fn
`callable`, function that takes no arguments and returns a `bool`. If the function returns `True`, stopping will be initiated by the chief.
ImplicitContainer<T> run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
A `SessionRunHook` that periodically executes `should_stop_fn` and initiates early stopping if the function returns `True`.
Show Example
estimator = ...
hook = early_stopping.make_early_stopping_hook(
    estimator, should_stop_fn=make_stop_fn(...))
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)

object make_stop_at_checkpoint_step_hook(object estimator, object last_step, int wait_after_file_check_secs)

Creates a proper StopAtCheckpointStepHook based on chief status.

object make_stop_at_checkpoint_step_hook_dyn(object estimator, object last_step, ImplicitContainer<T> wait_after_file_check_secs)

Creates a proper StopAtCheckpointStepHook based on chief status.

object stop_if_higher_hook(object estimator, object metric_name, object threshold, object eval_dir, int min_steps, int run_every_secs, object run_every_steps)

Creates hook to stop if the given metric is higher than the threshold.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object threshold
Numeric threshold for the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
int min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
int run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric is higher than specified threshold and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if accuracy becomes higher than 0.9.
hook = early_stopping.stop_if_higher_hook(estimator, "accuracy", 0.9)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
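The stopping condition this hook evaluates is compact enough to state directly. A hypothetical helper (not the hook's internal code) showing how `threshold` and `min_steps` interact:

```python
# Stop is requested only once global_step >= min_steps AND the tracked
# metric exceeds the threshold. Hypothetical stand-in for the hook's check.
def should_stop(global_step, metric, threshold, min_steps=0):
    return global_step >= min_steps and metric > threshold

checks = [
    should_stop(global_step=50,  metric=0.95, threshold=0.9, min_steps=100),
    should_stop(global_step=150, metric=0.85, threshold=0.9, min_steps=100),
    should_stop(global_step=150, metric=0.95, threshold=0.9, min_steps=100),
]
# -> [False, False, True]
```

`stop_if_lower_hook` is the mirror image, with `metric < threshold`.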

object stop_if_higher_hook_dyn(object estimator, object metric_name, object threshold, object eval_dir, ImplicitContainer<T> min_steps, ImplicitContainer<T> run_every_secs, object run_every_steps)

Creates hook to stop if the given metric is higher than the threshold.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object threshold
Numeric threshold for the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
ImplicitContainer<T> min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
ImplicitContainer<T> run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric is higher than specified threshold and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if accuracy becomes higher than 0.9.
hook = early_stopping.stop_if_higher_hook(estimator, "accuracy", 0.9)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)

object stop_if_lower_hook(object estimator, object metric_name, object threshold, object eval_dir, int min_steps, int run_every_secs, object run_every_steps)

Creates hook to stop if the given metric is lower than the threshold.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object threshold
Numeric threshold for the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
int min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
int run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric is lower than specified threshold and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if loss becomes lower than 100.
hook = early_stopping.stop_if_lower_hook(estimator, "loss", 100)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)

object stop_if_lower_hook_dyn(object estimator, object metric_name, object threshold, object eval_dir, ImplicitContainer<T> min_steps, ImplicitContainer<T> run_every_secs, object run_every_steps)

Creates hook to stop if the given metric is lower than the threshold.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object threshold
Numeric threshold for the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
ImplicitContainer<T> min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
ImplicitContainer<T> run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric is lower than specified threshold and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if loss becomes lower than 100.
hook = early_stopping.stop_if_lower_hook(estimator, "loss", 100)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)

object stop_if_no_decrease_hook(object estimator, object metric_name, object max_steps_without_decrease, object eval_dir, int min_steps, int run_every_secs, object run_every_steps)

Creates hook to stop if metric does not decrease within given max steps.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object max_steps_without_decrease
`int`, maximum number of training steps with no decrease in the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
int min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
int run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric shows no decrease over given maximum number of training steps, and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if loss does not decrease in over 100000 steps.
hook = early_stopping.stop_if_no_decrease_hook(estimator, "loss", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
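The no-decrease rule amounts to tracking the best (lowest) metric value seen and the step at which it occurred, then requesting a stop once the distance from that step exceeds `max_steps_without_decrease`. A pure-Python sketch of that logic (a hypothetical stand-in, not the hook's implementation):

```python
# Hypothetical stand-in for the no-decrease early-stopping rule.
def first_stop_step(evals, max_steps_without_decrease):
    """evals: (global_step, metric) pairs in increasing step order."""
    best, best_step = float("inf"), 0
    for step, metric in evals:
        if metric < best:
            best, best_step = metric, step   # new best: reset the counter
        elif step - best_step > max_steps_without_decrease:
            return step                      # early stopping triggered here
    return None                              # never triggered

evals = [(100, 1.0), (200, 0.8), (300, 0.85), (500, 0.9)]
stop = first_stop_step(evals, max_steps_without_decrease=250)  # -> 500
```

`stop_if_no_increase_hook` is the mirror image: it tracks the highest value seen and compares against `max_steps_without_increase`.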

object stop_if_no_decrease_hook_dyn(object estimator, object metric_name, object max_steps_without_decrease, object eval_dir, ImplicitContainer<T> min_steps, ImplicitContainer<T> run_every_secs, object run_every_steps)

Creates hook to stop if metric does not decrease within given max steps.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object max_steps_without_decrease
`int`, maximum number of training steps with no decrease in the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
ImplicitContainer<T> min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
ImplicitContainer<T> run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric shows no decrease over given maximum number of training steps, and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if loss does not decrease in over 100000 steps.
hook = early_stopping.stop_if_no_decrease_hook(estimator, "loss", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)

object stop_if_no_increase_hook(object estimator, object metric_name, object max_steps_without_increase, object eval_dir, int min_steps, int run_every_secs, object run_every_steps)

Creates hook to stop if metric does not increase within given max steps.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object max_steps_without_increase
`int`, maximum number of training steps with no increase in the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
int min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
int run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric shows no increase over given maximum number of training steps, and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if accuracy does not increase in over 100000 steps.
hook = early_stopping.stop_if_no_increase_hook(estimator, "accuracy", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)

object stop_if_no_increase_hook_dyn(object estimator, object metric_name, object max_steps_without_increase, object eval_dir, ImplicitContainer<T> min_steps, ImplicitContainer<T> run_every_secs, object run_every_steps)

Creates hook to stop if metric does not increase within given max steps.

Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (which runs as a separate job) will wait indefinitely for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the `train_and_evaluate` API and will be addressed in a future revision.
Parameters
object estimator
A tf.estimator.Estimator instance.
object metric_name
`str`, the metric to track, e.g. "loss" or "accuracy".
object max_steps_without_increase
`int`, maximum number of training steps with no increase in the given metric.
object eval_dir
If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used.
ImplicitContainer<T> min_steps
`int`, stop is never requested if global step is less than this value. Defaults to 0.
ImplicitContainer<T> run_every_secs
If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set.
object run_every_steps
If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set.
Returns
object
An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric shows no increase over given maximum number of training steps, and initiates early stopping if true.
Show Example
estimator = ...
# Hook to stop training if accuracy does not increase in over 100000 steps.
hook = early_stopping.stop_if_no_increase_hook(estimator, "accuracy", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)

Public properties

PythonFunctionContainer build_raw_supervised_input_receiver_fn_fn get;

PythonFunctionContainer call_logit_fn_fn get;

PythonFunctionContainer dnn_logit_fn_builder_fn get;

PythonFunctionContainer linear_logit_fn_builder_fn get;

PythonFunctionContainer make_early_stopping_hook_fn get;

PythonFunctionContainer make_stop_at_checkpoint_step_hook_fn get;

PythonFunctionContainer stop_if_higher_hook_fn get;

PythonFunctionContainer stop_if_lower_hook_fn get;

PythonFunctionContainer stop_if_no_decrease_hook_fn get;

PythonFunctionContainer stop_if_no_increase_hook_fn get;