LostTech.TensorFlow : API Documentation

Type DNNLinearCombinedRegressor

Namespace tensorflow.contrib.learn

Parent Estimator

Interfaces IDNNLinearCombinedRegressor

A regressor for TensorFlow Linear and DNN joined training models.

THIS CLASS IS DEPRECATED. See [contrib/learn/README.md](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/README.md) for general migration instructions.

Note: New users must set `fix_global_step_increment_bug=True` when creating an estimator.

Input of `fit`, `train`, and `evaluate` should have the following features, otherwise there will be a `KeyError`:

- if `weight_column_name` is not `None`: a feature with `key=weight_column_name` whose value is a `Tensor`.
- for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
  - if `column` is a `SparseColumn`: a feature with `key=column.name` whose `value` is a `SparseTensor`.
  - if `column` is a `WeightedSparseColumn`: two features, the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
  - if `column` is a `RealValuedColumn`: a feature with `key=column.name` whose `value` is a `Tensor`.
Show Example
sparse_feature_a = sparse_column_with_hash_bucket(...)
sparse_feature_b = sparse_column_with_hash_bucket(...)

sparse_feature_a_x_sparse_feature_b = crossed_column(...)

sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a, ...)
sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b, ...)

estimator = DNNLinearCombinedRegressor(
    # common settings
    weight_column_name=weight_column_name,
    # wide settings
    linear_feature_columns=[sparse_feature_a_x_sparse_feature_b],
    linear_optimizer=tf.compat.v1.train.FtrlOptimizer(...),
    # deep settings
    dnn_feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
    dnn_hidden_units=[1000, 500, 100],
    dnn_optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(...))

# To apply L1 and L2 regularization, you can set the optimizers as follows:
tf.compat.v1.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
# It is the same for FtrlOptimizer.

# Input builders
def input_fn_train():  # returns x, y
  ...
def input_fn_eval():  # returns x, y
  ...
def input_fn_predict():  # returns x, None
  ...
estimator.train(input_fn_train)
estimator.evaluate(input_fn_eval)
estimator.predict(input_fn_predict)
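For illustration, here is a minimal sketch of an input builder that satisfies the feature-key contract described above. The column names `sparse_feature_a`, `sparse_feature_b`, and the weight column `example_weight` are hypothetical stand-ins, not names the estimator defines:

import tensorflow as tf

def input_fn_train():
  # Feature dict keys must match the configured column names;
  # a missing key raises a KeyError inside fit/train/evaluate.
  features = {
      # SparseColumn inputs are fed as SparseTensor values.
      "sparse_feature_a": tf.SparseTensor(
          indices=[[0, 0], [1, 0]], values=["u", "v"], dense_shape=[2, 1]),
      "sparse_feature_b": tf.SparseTensor(
          indices=[[0, 0], [1, 0]], values=["x", "y"], dense_shape=[2, 1]),
      # When weight_column_name is set, its feature is a dense Tensor
      # of per-example weights multiplied into the loss.
      "example_weight": tf.constant([[1.0], [0.5]]),
  }
  labels = tf.constant([[0.3], [1.2]])
  return features, labels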

Methods

Properties

Public static methods

DNNLinearCombinedRegressor NewDyn(object model_dir, object weight_column_name, object linear_feature_columns, object linear_optimizer, ImplicitContainer<T> _joint_linear_weights, object dnn_feature_columns, object dnn_optimizer, object dnn_hidden_units, ImplicitContainer<T> dnn_activation_fn, object dnn_dropout, object gradient_clip_norm, ImplicitContainer<T> enable_centered_bias, ImplicitContainer<T> label_dimension, object config, object feature_engineering_fn, object embedding_lr_multipliers, object input_layer_min_slice_size, ImplicitContainer<T> fix_global_step_increment_bug)

Initializes a DNNLinearCombinedRegressor instance. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(fix_global_step_increment_bug=False)`. They will be removed after 2017-04-15. Instructions for updating: Please set fix_global_step_increment_bug=True and update training steps in your pipeline. See pydoc for details.

Note: New users must set `fix_global_step_increment_bug=True` when creating an estimator.
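For example, a minimal sketch of constructing the estimator with the flag set (the real-valued column `age` and the hidden-unit sizes are hypothetical choices for illustration):

from tensorflow.contrib import layers, learn

# Hypothetical numeric feature; the flag below is the point of the example.
age = layers.real_valued_column("age")

estimator = learn.DNNLinearCombinedRegressor(
    linear_feature_columns=[age],
    dnn_feature_columns=[age],
    dnn_hidden_units=[32, 16],
    fix_global_step_increment_bug=True)  # required for new code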
Parameters
object model_dir
Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
object weight_column_name
A string defining the feature column name representing weights. It is used to down-weight or boost examples during training; it will be multiplied by the loss of the example.
object linear_feature_columns
An iterable containing all the feature columns used by linear part of the model. All items in the set must be instances of classes derived from `FeatureColumn`.
object linear_optimizer
An instance of `tf.Optimizer` used to apply gradients to the linear part of the model. If `None`, will use an FTRL optimizer.
ImplicitContainer<T> _joint_linear_weights
If `True`, a single (possibly partitioned) variable will be used to store the linear model weights. This is faster, but requires that all columns are sparse and use the 'sum' combiner.
object dnn_feature_columns
An iterable containing all the feature columns used by deep part of the model. All items in the set must be instances of classes derived from `FeatureColumn`.
object dnn_optimizer
An instance of `tf.Optimizer` used to apply gradients to the deep part of the model. If `None`, will use an Adagrad optimizer.
object dnn_hidden_units
List of hidden units per layer. All layers are fully connected.
ImplicitContainer<T> dnn_activation_fn
Activation function applied to each layer. If `None`, will use `tf.nn.relu`.
object dnn_dropout
When not `None`, the probability that a given coordinate will be dropped out.
object gradient_clip_norm
A float > 0. If provided, gradients are clipped to their global norm with this clipping ratio. See tf.clip_by_global_norm for more details.
ImplicitContainer<T> enable_centered_bias
A bool. If `True`, the estimator will learn a centered bias variable for each class. The rest of the model structure learns the residual after the centered bias.
ImplicitContainer<T> label_dimension
Number of regression targets per example. This is the size of the last dimension of the labels and logits `Tensor` objects (typically, these have shape `[batch_size, label_dimension]`).
object config
RunConfig object to configure the runtime settings.
object feature_engineering_fn
Feature engineering function. Takes features and labels which are the output of `input_fn` and returns features and labels which will be fed into the model.
object embedding_lr_multipliers
Optional. A dictionary from `EmbeddingColumn` to a `float` multiplier. Multiplier will be used to multiply with learning rate for the embedding variables.
object input_layer_min_slice_size
Optional. The min slice size of input layer partitions. If not provided, will use the default of 64M.
ImplicitContainer<T> fix_global_step_increment_bug
If `False`, the estimator needs two fit steps to optimize both linear and dnn parts. If `True`, this bug is fixed. New users must set this to `True`, but the default value is `False` for backwards compatibility.
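To tie these parameters together, the following hedged sketch constructs a regressor that exercises most of them. All feature names, the `center_labels` helper, and the hyperparameter values are hypothetical illustrations, not library defaults:

import tensorflow as tf
from tensorflow.contrib import layers, learn

# Hypothetical sparse feature for the linear (wide) part, plus an
# embedding of it for the DNN (deep) part.
country = layers.sparse_column_with_hash_bucket("country", hash_bucket_size=100)
country_emb = layers.embedding_column(country, dimension=8)

def center_labels(features, labels):
  # Hypothetical feature_engineering_fn: receives the output of input_fn
  # and returns the (features, labels) actually fed to the model.
  return features, labels - 1.0

estimator = learn.DNNLinearCombinedRegressor(
    model_dir="/tmp/dnn_linear_combined",   # checkpoints and graph go here
    weight_column_name="example_weight",    # per-example loss weights
    linear_feature_columns=[country],
    linear_optimizer=tf.train.FtrlOptimizer(learning_rate=0.05),
    dnn_feature_columns=[country_emb],
    dnn_optimizer=tf.train.ProximalAdagradOptimizer(learning_rate=0.1),
    dnn_hidden_units=[64, 32],
    dnn_activation_fn=tf.nn.relu,
    dnn_dropout=0.1,                        # drop 10% of activations
    gradient_clip_norm=5.0,                 # clip gradients by global norm
    label_dimension=1,                      # single regression target
    feature_engineering_fn=center_labels,
    embedding_lr_multipliers={country_emb: 0.5},  # slower embedding updates
    fix_global_step_increment_bug=True)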

Public properties

object config get;

object config_dyn get;

string model_dir get;

object model_dir_dyn get;

object model_fn get;

object model_fn_dyn get;

object PythonObject get;