LostTech.TensorFlow : API Documentation

Type estimator

Namespace tensorflow_estimator.contrib.estimator

Public instance methods

bool Equals(object obj)

Public static methods

Estimator add_metrics(TimeSeriesRegressor estimator, PythonFunctionContainer metric_fn)

object add_metrics_dyn(object estimator, object metric_fn)

Creates a new tf.estimator.Estimator which has the given metrics.

Examples: basic usage is shown under "Show Example" below, followed by a sketch in which the metric_fn also uses features.
Parameters
object estimator
A tf.estimator.Estimator object.
object metric_fn
A function which should obey the following signature:
- Args: can only have the following four arguments, in any order:
  * predictions: Predictions `Tensor` or dict of `Tensor` created by the given `estimator`.
  * features: Input `dict` of `Tensor` objects created by the `input_fn` which is given to `estimator.evaluate` as an argument.
  * labels: Labels `Tensor` or dict of `Tensor` created by the `input_fn` which is given to `estimator.evaluate` as an argument.
  * config: The `config` attribute of the `estimator`.
- Returns: A dict of metric results keyed by name. The final metrics are a union of this dict and the `estimator`'s existing metrics. If there is a name conflict, the metrics in this dict override the existing ones. The values of the dict are the results of calling a metric function, namely a `(metric_tensor, update_op)` tuple.
Returns
object
A new tf.estimator.Estimator which has a union of original metrics with given ones.
Show Example
def my_auc(labels, predictions):
    auc_metric = tf.keras.metrics.AUC(name="my_auc")
    auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'])
    return {'auc': auc_metric}

estimator = tf.estimator.DNNClassifier(...)
estimator = tf.estimator.add_metrics(estimator, my_auc)
estimator.train(...)
estimator.evaluate(...)
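
A sketch of the features-based variant mentioned above, assuming the input pipeline exposes a 'weight' feature (the key name is illustrative); the wrapped estimator is created, trained, and evaluated exactly as in the basic example:

def my_auc(labels, predictions, features):
    auc_metric = tf.keras.metrics.AUC(name="my_auc")
    # Weight each example's contribution to the metric by the 'weight' feature.
    auc_metric.update_state(y_true=labels,
                            y_pred=predictions['logistic'],
                            sample_weight=features['weight'])
    return {'auc': auc_metric}

estimator = tf.estimator.add_metrics(estimator, my_auc)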

_BinaryLogisticHeadWithSigmoidCrossEntropyLoss binary_classification_head(object weight_column, object thresholds, object label_vocabulary, ImplicitContainer<T> loss_reduction, PythonFunctionContainer loss_fn, Nullable<int> name)

object binary_classification_head_dyn(object weight_column, object thresholds, object label_vocabulary, ImplicitContainer<T> loss_reduction, object loss_fn, object name)

_BoostedTreesBase boosted_trees_classifier_train_in_memory(PythonFunctionContainer train_input_fn, object feature_columns, object model_dir, ImplicitContainer<T> n_classes, object weight_column, object label_vocabulary, int n_trees, int max_depth, double learning_rate, double l1_regularization, double l2_regularization, double tree_complexity, double min_node_weight, object config, object train_hooks, bool center_bias, string pruning_mode, double quantile_sketch_epsilon)

_BoostedTreesBase boosted_trees_classifier_train_in_memory(PythonFunctionContainer train_input_fn, object feature_columns, object model_dir, int n_classes, object weight_column, object label_vocabulary, int n_trees, int max_depth, double learning_rate, double l1_regularization, double l2_regularization, double tree_complexity, double min_node_weight, object config, object train_hooks, bool center_bias, string pruning_mode, double quantile_sketch_epsilon)

object boosted_trees_classifier_train_in_memory_dyn(object train_input_fn, object feature_columns, object model_dir, ImplicitContainer<T> n_classes, object weight_column, object label_vocabulary, ImplicitContainer<T> n_trees, ImplicitContainer<T> max_depth, ImplicitContainer<T> learning_rate, ImplicitContainer<T> l1_regularization, ImplicitContainer<T> l2_regularization, ImplicitContainer<T> tree_complexity, ImplicitContainer<T> min_node_weight, object config, object train_hooks, ImplicitContainer<T> center_bias, ImplicitContainer<T> pruning_mode, ImplicitContainer<T> quantile_sketch_epsilon)

_BoostedTreesBase boosted_trees_regressor_train_in_memory(PythonFunctionContainer train_input_fn, object feature_columns, object model_dir, int label_dimension, object weight_column, int n_trees, int max_depth, double learning_rate, double l1_regularization, double l2_regularization, double tree_complexity, double min_node_weight, object config, object train_hooks, bool center_bias, string pruning_mode, double quantile_sketch_epsilon)

_BoostedTreesBase boosted_trees_regressor_train_in_memory(PythonFunctionContainer train_input_fn, object feature_columns, object model_dir, ImplicitContainer<T> label_dimension, object weight_column, int n_trees, int max_depth, double learning_rate, double l1_regularization, double l2_regularization, double tree_complexity, double min_node_weight, object config, object train_hooks, bool center_bias, string pruning_mode, double quantile_sketch_epsilon)

object boosted_trees_regressor_train_in_memory_dyn(object train_input_fn, object feature_columns, object model_dir, ImplicitContainer<T> label_dimension, object weight_column, ImplicitContainer<T> n_trees, ImplicitContainer<T> max_depth, ImplicitContainer<T> learning_rate, ImplicitContainer<T> l1_regularization, ImplicitContainer<T> l2_regularization, ImplicitContainer<T> tree_complexity, ImplicitContainer<T> min_node_weight, object config, object train_hooks, ImplicitContainer<T> center_bias, ImplicitContainer<T> pruning_mode, ImplicitContainer<T> quantile_sketch_epsilon)

object build_supervised_input_receiver_fn_from_input_fn(PythonFunctionContainer input_fn, IDictionary<string, object> input_fn_args)

object build_supervised_input_receiver_fn_from_input_fn_dyn(object input_fn, IDictionary<string, object> input_fn_args)

PythonFunctionContainer call_logit_fn(PythonFunctionContainer logit_fn, object features, object mode, object params, object config)

Calls logit_fn (experimental).

THIS FUNCTION IS EXPERIMENTAL. Keras layers/models are the recommended APIs for logit and model composition.

A utility function that calls the provided logit_fn with the relevant subset of provided arguments. Similar to tf.estimator._call_model_fn().
Parameters
PythonFunctionContainer logit_fn
A logit_fn: a function that takes `features` (and optionally `mode`, `params`, and `config`) and returns a logits `Tensor` or dict of `Tensor`.
object features
The features dict.
object mode
TRAIN / EVAL / PREDICT ModeKeys.
object params
The hyperparameter dict.
object config
The configuration object.
Returns
PythonFunctionContainer
A logit Tensor, the output of logit_fn.
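
A minimal sketch of the equivalent call through the underlying Python API (TensorFlow 1.x with tf.contrib available); `my_logit_fn`, the feature key 'x', and the params values are illustrative assumptions:

import tensorflow as tf
from tensorflow.contrib import estimator as contrib_estimator

# A logit_fn receives `features` (and, if declared, `mode`, `params`, `config`)
# and returns a logits Tensor.
def my_logit_fn(features, mode, params):
    return tf.layers.dense(features['x'], units=params['n_classes'])

features = {'x': tf.constant([[1.0, 2.0]])}
logits = contrib_estimator.call_logit_fn(
    my_logit_fn,
    features=features,
    mode=tf.estimator.ModeKeys.PREDICT,
    params={'n_classes': 3},
    config=None)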

object call_logit_fn_dyn(object logit_fn, object features, object mode, object params, object config)

Calls logit_fn (experimental).

THIS FUNCTION IS EXPERIMENTAL. Keras layers/models are the recommended APIs for logit and model composition.

A utility function that calls the provided logit_fn with the relevant subset of provided arguments. Similar to tf.estimator._call_model_fn().
Parameters
object logit_fn
A logit_fn: a function that takes `features` (and optionally `mode`, `params`, and `config`) and returns a logits `Tensor` or dict of `Tensor`.
object features
The features dict.
object mode
TRAIN / EVAL / PREDICT ModeKeys.
object params
The hyperparameter dict.
object config
The configuration object.
Returns
object
A logit Tensor, the output of logit_fn.

_TransformGradients clip_gradients_by_norm(object optimizer, double clip_norm)

_TransformGradients clip_gradients_by_norm(string optimizer, double clip_norm)

_TransformGradients clip_gradients_by_norm(Optimizer optimizer, double clip_norm)

object clip_gradients_by_norm_dyn(object optimizer, object clip_norm)

Estimator DNNClassifierWithLayerAnnotations(object hidden_units, object feature_columns, object model_dir, int n_classes, object weight_column, object label_vocabulary, string optimizer, ImplicitContainer<T> activation_fn, object dropout, object input_layer_partitioner, object config, object warm_start_from, ImplicitContainer<T> loss_reduction)

object DNNClassifierWithLayerAnnotations_dyn(object hidden_units, object feature_columns, object model_dir, ImplicitContainer<T> n_classes, object weight_column, object label_vocabulary, ImplicitContainer<T> optimizer, ImplicitContainer<T> activation_fn, object dropout, object input_layer_partitioner, object config, object warm_start_from, ImplicitContainer<T> loss_reduction)

Estimator DNNRegressorWithLayerAnnotations(object hidden_units, object feature_columns, object model_dir, int label_dimension, object weight_column, string optimizer, ImplicitContainer<T> activation_fn, object dropout, object input_layer_partitioner, object config, object warm_start_from, ImplicitContainer<T> loss_reduction)

object DNNRegressorWithLayerAnnotations_dyn(object hidden_units, object feature_columns, object model_dir, ImplicitContainer<T> label_dimension, object weight_column, ImplicitContainer<T> optimizer, ImplicitContainer<T> activation_fn, object dropout, object input_layer_partitioner, object config, object warm_start_from, ImplicitContainer<T> loss_reduction)

object export_all_saved_models(object estimator, object export_dir_base, object input_receiver_fn_map, object assets_extra, bool as_text, object checkpoint_path)

object export_all_saved_models_dyn(object estimator, object export_dir_base, object input_receiver_fn_map, object assets_extra, ImplicitContainer<T> as_text, object checkpoint_path)

object export_saved_model_for_mode(object estimator, object export_dir_base, PythonFunctionContainer input_receiver_fn, object assets_extra, bool as_text, object checkpoint_path, ImplicitContainer<T> mode)

object export_saved_model_for_mode_dyn(object estimator, object export_dir_base, object input_receiver_fn, object assets_extra, ImplicitContainer<T> as_text, object checkpoint_path, ImplicitContainer<T> mode)

Estimator forward_features(object estimator, object keys, object sparse_default_values)

object forward_features_dyn(object estimator, object keys, object sparse_default_values)

_RegressionHeadWithMeanSquaredErrorLoss logistic_regression_head(object weight_column, ImplicitContainer<T> loss_reduction, string name)

object logistic_regression_head_dyn(object weight_column, ImplicitContainer<T> loss_reduction, object name)

object make_input_layer_with_layer_annotations(object original_input_layer)

object make_input_layer_with_layer_annotations_dyn(object original_input_layer)

_MultiClassHeadWithSoftmaxCrossEntropyLoss multi_class_head(object n_classes, object weight_column, object label_vocabulary, ImplicitContainer<T> loss_reduction, PythonFunctionContainer loss_fn, Nullable<int> name)

object multi_class_head_dyn(object n_classes, object weight_column, object label_vocabulary, ImplicitContainer<T> loss_reduction, object loss_fn, object name)

_MultiLabelHead multi_label_head(object n_classes, object weight_column, object thresholds, object label_vocabulary, ImplicitContainer<T> loss_reduction, PythonFunctionContainer loss_fn, object classes_for_class_based_metrics, string name)

object multi_label_head_dyn(object n_classes, object weight_column, object thresholds, object label_vocabulary, ImplicitContainer<T> loss_reduction, object loss_fn, object classes_for_class_based_metrics, object name)

_RegressionHeadWithMeanSquaredErrorLoss poisson_regression_head(object weight_column, int label_dimension, ImplicitContainer<T> loss_reduction, bool compute_full_loss, string name)

object poisson_regression_head_dyn(object weight_column, ImplicitContainer<T> label_dimension, ImplicitContainer<T> loss_reduction, ImplicitContainer<T> compute_full_loss, object name)

_RegressionHeadWithMeanSquaredErrorLoss regression_head(object weight_column, int label_dimension, ImplicitContainer<T> loss_reduction, PythonFunctionContainer loss_fn, PythonFunctionContainer inverse_link_fn, Nullable<int> name)

object regression_head_dyn(object weight_column, ImplicitContainer<T> label_dimension, ImplicitContainer<T> loss_reduction, object loss_fn, object inverse_link_fn, object name)

object replicate_model_fn_(PythonFunctionContainer model_fn, ImplicitContainer<T> loss_reduction, object devices)

object replicate_model_fn__dyn(object model_fn, ImplicitContainer<T> loss_reduction, object devices)

Byte[] serialize_feature_column(_EmbeddingColumn feature_column)

object serialize_feature_column_dyn(object feature_column)

object wrap_and_check_input_tensors(IGraphNodeBase tensors, string field_name, bool allow_int_keys)

object wrap_and_check_input_tensors(IDictionary<object, IGraphNodeBase> tensors, string field_name, bool allow_int_keys)

object wrap_and_check_input_tensors(PythonClassContainer tensors, string field_name, bool allow_int_keys)

object wrap_and_check_input_tensors_dyn(object tensors, object field_name, ImplicitContainer<T> allow_int_keys)

Public properties

PythonFunctionContainer _MultiHead_fn get;

PythonFunctionContainer _MultiLabelHead_fn get;

PythonFunctionContainer _TransformGradients_fn get;

PythonFunctionContainer _VariableDistributionMode_fn get;

PythonFunctionContainer add_metrics_fn get;

PythonFunctionContainer binary_classification_head_fn get;

PythonFunctionContainer boosted_trees_classifier_train_in_memory_fn get;

PythonFunctionContainer boosted_trees_regressor_train_in_memory_fn get;

PythonFunctionContainer build_supervised_input_receiver_fn_from_input_fn_fn get;

PythonFunctionContainer call_logit_fn_fn get;

PythonFunctionContainer clip_gradients_by_norm_fn get;

PythonFunctionContainer DNNClassifierWithLayerAnnotations_fn get;

PythonFunctionContainer DNNRegressorWithLayerAnnotations_fn get;

PythonFunctionContainer export_all_saved_models_fn get;

PythonFunctionContainer export_saved_model_for_mode_fn get;

PythonFunctionContainer forward_features_fn get;

PythonFunctionContainer LayerAnnotationsCollectionNames_fn get;

PythonFunctionContainer logistic_regression_head_fn get;

PythonFunctionContainer make_input_layer_with_layer_annotations_fn get;

PythonFunctionContainer multi_class_head_fn get;

PythonFunctionContainer multi_label_head_fn get;

PythonFunctionContainer poisson_regression_head_fn get;

PythonFunctionContainer regression_head_fn get;

PythonFunctionContainer replicate_model_fn__fn get;

PythonFunctionContainer RNNClassifier_fn get;

PythonFunctionContainer RNNEstimator_fn get;

PythonFunctionContainer serialize_feature_column_fn get;

PythonFunctionContainer ServingInputReceiver_fn get;

PythonFunctionContainer SupervisedInputReceiver_fn get;

PythonFunctionContainer TensorServingInputReceiver_fn get;

PythonFunctionContainer TowerOptimizer_fn get;

PythonFunctionContainer UnsupervisedInputReceiver_fn get;

object USE_DEFAULT get; set;

object USE_DEFAULT_dyn get; set;

PythonFunctionContainer wrap_and_check_input_tensors_fn get;