LostTech.TensorFlow : API Documentation

Type LinearClassifier

Namespace tensorflow.contrib.learn

Parent Estimator

Interfaces ILinearClassifier

Linear classifier model.

THIS CLASS IS DEPRECATED. See [contrib/learn/README.md](https://www.tensorflow.org/code/tensorflow/contrib/learn/README.md) for general migration instructions.

Train a linear model to classify instances into one of multiple possible classes. When the number of possible classes is 2, this is binary classification.

If the user specifies `label_keys` in the constructor, labels must be strings from the `label_keys` vocabulary. Input of `fit` and `evaluate` should have the following features (a sketch of a compatible `input_fn` appears after the example below), otherwise there will be a `KeyError`:

* if `weight_column_name` is not `None`, a feature with `key=weight_column_name` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
  - if `column` is a `SparseColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
  - if `column` is a `WeightedSparseColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
  - if `column` is a `RealValuedColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Show Example
sparse_column_a = sparse_column_with_hash_bucket(...)
sparse_column_b = sparse_column_with_hash_bucket(...)

sparse_feature_a_x_sparse_feature_b = crossed_column(...)

# Estimator using the default optimizer.
estimator = LinearClassifier(
    feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b])

# Or estimator using the FTRL optimizer with regularization.
estimator = LinearClassifier(
    feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
    optimizer=tf.compat.v1.train.FtrlOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.001))

# Or estimator using the SDCAOptimizer.
estimator = LinearClassifier(
    feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
    optimizer=tf.contrib.linear_optimizer.SDCAOptimizer(
        example_id_column='example_id',
        num_loss_partitions=...,
        symmetric_l2_regularization=2.0))

# Input builders
def input_fn_train():  # returns x, y (where y represents the label's class index)
    ...

def input_fn_eval():  # returns x, y (where y represents the label's class index)
    ...

def input_fn_predict():  # returns x, None
    ...

estimator.fit(input_fn=input_fn_train)
estimator.evaluate(input_fn=input_fn_eval)
# predict_classes returns class indices.
estimator.predict_classes(input_fn=input_fn_predict)
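To make the feature requirements listed above concrete, here is a minimal sketch of an `input_fn` and matching feature columns, written against the Python `tf.contrib.learn` API that this class wraps. The column names (`country`, `age`, `example_weight`) and all values are illustrative assumptions, not part of this API; what matters is the returned structure: a dict mapping column names to `Tensor`/`SparseTensor` values, plus a label tensor of class indices.

import tensorflow as tf

# Hypothetical feature columns; the names are assumptions for illustration only.
country = tf.contrib.layers.sparse_column_with_hash_bucket(
    "country", hash_bucket_size=100)               # a SparseColumn
age = tf.contrib.layers.real_valued_column("age")  # a RealValuedColumn

def input_fn_train():
    features = {
        # SparseColumn "country": its value must be a SparseTensor.
        "country": tf.SparseTensor(
            indices=[[0, 0], [1, 0], [2, 0]],
            values=["US", "CA", "US"],
            dense_shape=[3, 1]),
        # RealValuedColumn "age": its value must be a Tensor.
        "age": tf.constant([[23.0], [31.0], [45.0]]),
        # Required because weight_column_name="example_weight" is set below.
        "example_weight": tf.constant([[1.0], [0.5], [2.0]]),
    }
    labels = tf.constant([[1], [0], [1]])  # class indices (binary by default)
    return features, labels

estimator = tf.contrib.learn.LinearClassifier(
    feature_columns=[country, age],
    weight_column_name="example_weight")
estimator.fit(input_fn=input_fn_train, steps=100)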


Public static methods

LinearClassifier NewDyn(object feature_columns, object model_dir, ImplicitContainer<T> n_classes, object weight_column_name, object optimizer, object gradient_clip_norm, ImplicitContainer<T> enable_centered_bias, ImplicitContainer<T> _joint_weight, object config, object feature_engineering_fn, object label_keys)

Construct a `LinearClassifier` estimator object.
Parameters
object feature_columns
An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from `FeatureColumn`.
object model_dir
Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
ImplicitContainer<T> n_classes
Number of label classes. The default is binary classification. Note that class labels are integers representing the class index (i.e. values from 0 to n_classes-1). For arbitrary label values (e.g. string labels), convert to class indices first, or use `label_keys`.
object weight_column_name
A string defining the feature column name representing weights. It is used to down-weight or boost examples during training; the weight will be multiplied by the loss of the example.
object optimizer
The optimizer used to train the model. If specified, it should be either an instance of `tf.Optimizer` or the `SDCAOptimizer`. If `None`, the FTRL optimizer will be used by default.
object gradient_clip_norm
A `float` > 0. If provided, gradients are clipped to their global norm with this clipping ratio. See tf.clip_by_global_norm for more details.
ImplicitContainer<T> enable_centered_bias
A bool. If True, the estimator will learn a centered bias variable for each class. The rest of the model structure learns the residual after the centered bias.
ImplicitContainer<T> _joint_weight
If True, the weights for all columns will be stored in a single (possibly partitioned) variable. This is more efficient, but it is incompatible with `SDCAOptimizer` and requires that all feature columns be sparse and use the 'sum' combiner.
object config
`RunConfig` object to configure the runtime settings.
object feature_engineering_fn
Feature engineering function. Takes features and labels which are the output of `input_fn` and returns features and labels which will be fed into the model.
object label_keys
Optional list of strings with size `[n_classes]` defining the label vocabulary. Only supported for `n_classes` > 2. A usage sketch appears after the Returns section below.
Returns
LinearClassifier
A `LinearClassifier` estimator.
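As referenced in the `label_keys` description above, here is a hedged sketch of how `n_classes`, `label_keys`, `gradient_clip_norm`, and an explicit `optimizer` fit together, again using the Python `tf.contrib.learn` API that this class wraps. The column name `query`, the label vocabulary, and all values are assumptions made for illustration only.

import tensorflow as tf

# Hypothetical feature column; the name "query" is an assumption.
query = tf.contrib.layers.sparse_column_with_hash_bucket(
    "query", hash_bucket_size=1000)

# Because label_keys is given (and n_classes > 2), labels fed to fit/evaluate
# must be strings drawn from this vocabulary rather than integer class indices.
estimator = tf.contrib.learn.LinearClassifier(
    feature_columns=[query],
    n_classes=3,
    label_keys=["sports", "politics", "tech"],
    optimizer=tf.train.FtrlOptimizer(learning_rate=0.05),
    gradient_clip_norm=5.0)

def input_fn_train():
    features = {
        "query": tf.SparseTensor(
            indices=[[0, 0], [1, 0]],
            values=["world cup", "election"],
            dense_shape=[2, 1]),
    }
    labels = tf.constant([["sports"], ["politics"]])  # strings from label_keys
    return features, labels

estimator.fit(input_fn=input_fn_train, steps=100)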

Public properties

object config get;

object config_dyn get;

string model_dir get;

object model_dir_dyn get;

object model_fn get;

object model_fn_dyn get;

object PythonObject get;