LostTech.TensorFlow : API Documentation

Type tf.feature_column

Namespace tensorflow

Public static methods

BucketizedColumn bucketized_column(_FeatureColumn source_column, double boundaries)

Represents discretized dense input.

Buckets include the left boundary, and exclude the right boundary. Namely, `boundaries=[0., 1., 2.]` generates buckets `(-inf, 0.)`, `[0., 1.)`, `[1., 2.)`, and `[2., +inf)`.

For example, with `boundaries = [0, 10, 100]` and the input tensor shown below, the output is `[[0, 3], [3, 2], [1, 3]]`. A `bucketized_column` can also be crossed with another categorical column using `crossed_column`.
Parameters
_FeatureColumn source_column
A one-dimensional dense column which is generated with `numeric_column`.
double boundaries
A sorted list or tuple of floats specifying the boundaries.
Returns
BucketizedColumn
A `BucketizedColumn`.
Show Example
boundaries = [0, 10, 100]
            input_tensor = [[-5, 10000],
                            [150,   10],
                            [5,    100]] 
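
The element-wise bucket rule above can be reproduced with the standard-library `bisect` module; this is a plain-Python sketch of the rule, not the tensor op itself:

```python
import bisect

def bucketize(value, boundaries):
    # Buckets include the left boundary and exclude the right one,
    # so bisect_right returns the bucket index directly:
    # (-inf, 0) -> 0, [0, 10) -> 1, [10, 100) -> 2, [100, +inf) -> 3
    return bisect.bisect_right(boundaries, value)

boundaries = [0, 10, 100]
rows = [[-5, 10000], [150, 10], [5, 100]]
print([[bucketize(v, boundaries) for v in row] for row in rows])
# → [[0, 3], [3, 2], [1, 3]]
```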

BucketizedColumn bucketized_column(_FeatureColumn source_column, IEnumerable<int> boundaries)

Represents discretized dense input.

Buckets include the left boundary, and exclude the right boundary. Namely, `boundaries=[0., 1., 2.]` generates buckets `(-inf, 0.)`, `[0., 1.)`, `[1., 2.)`, and `[2., +inf)`.

For example, with `boundaries = [0, 10, 100]` and the input tensor shown below, the output is `[[0, 3], [3, 2], [1, 3]]`. A `bucketized_column` can also be crossed with another categorical column using `crossed_column`.
Parameters
_FeatureColumn source_column
A one-dimensional dense column which is generated with `numeric_column`.
IEnumerable<int> boundaries
A sorted list or tuple of floats specifying the boundaries.
Returns
BucketizedColumn
A `BucketizedColumn`.
Show Example
boundaries = [0, 10, 100]
            input_tensor = [[-5, 10000],
                            [150,   10],
                            [5,    100]] 

HashedCategoricalColumn categorical_column_with_hash_bucket(string key, int hash_bucket_size, ImplicitContainer<T> dtype)

Represents a sparse feature whose ids are set by hashing.

Use this when your sparse features are in string or integer format and you want to distribute the inputs into a finite number of buckets by hashing: `output_id = Hash(input_feature_string) % bucket_size` for string-type input. An int-type input is first converted to its string representation and then hashed by the same formula.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int hash_bucket_size
An int > 1. The number of buckets.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
HashedCategoricalColumn
A `HashedCategoricalColumn`.
Show Example
keywords = categorical_column_with_hash_bucket("keywords", 10000)
            columns = [keywords,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 

# or
            keywords_embedded = embedding_column(keywords, 16)
            columns = [keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 
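
The id-assignment rule quoted above can be sketched in pure Python. Here `hashlib.md5` is only a stand-in for TensorFlow's internal fingerprint hash, so the concrete ids differ from the real op's; only the hash-then-modulo scheme matches:

```python
import hashlib

def hash_bucket_id(value, hash_bucket_size):
    # Int inputs are hashed via their string representation.
    if isinstance(value, int):
        value = str(value)
    # md5 stands in for TensorFlow's fingerprint hash (illustrative only).
    digest = hashlib.md5(value.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % hash_bucket_size

ids = [hash_bucket_id(w, 100) for w in ["sports", "news", "sports"]]
assert ids[0] == ids[2]                 # equal inputs share a bucket
assert all(0 <= i < 100 for i in ids)   # ids stay inside the bucket range
```

Note that unrelated inputs may collide in the same bucket; a larger `hash_bucket_size` trades memory for fewer collisions.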

HashedCategoricalColumn categorical_column_with_hash_bucket(string key, string hash_bucket_size, ImplicitContainer<T> dtype)

Represents a sparse feature whose ids are set by hashing.

Use this when your sparse features are in string or integer format and you want to distribute the inputs into a finite number of buckets by hashing: `output_id = Hash(input_feature_string) % bucket_size` for string-type input. An int-type input is first converted to its string representation and then hashed by the same formula.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string hash_bucket_size
An int > 1. The number of buckets.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
HashedCategoricalColumn
A `HashedCategoricalColumn`.
Show Example
keywords = categorical_column_with_hash_bucket("keywords", 10000)
            columns = [keywords,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 

# or
            keywords_embedded = embedding_column(keywords, 16)
            columns = [keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

HashedCategoricalColumn categorical_column_with_hash_bucket(IEnumerable<string> key, int hash_bucket_size, ImplicitContainer<T> dtype)

Represents a sparse feature whose ids are set by hashing.

Use this when your sparse features are in string or integer format and you want to distribute the inputs into a finite number of buckets by hashing: `output_id = Hash(input_feature_string) % bucket_size` for string-type input. An int-type input is first converted to its string representation and then hashed by the same formula.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int hash_bucket_size
An int > 1. The number of buckets.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
HashedCategoricalColumn
A `HashedCategoricalColumn`.
Show Example
keywords = categorical_column_with_hash_bucket("keywords", 10000)
            columns = [keywords,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 

# or
            keywords_embedded = embedding_column(keywords, 16)
            columns = [keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

HashedCategoricalColumn categorical_column_with_hash_bucket(IEnumerable<string> key, IEnumerable<object> hash_bucket_size, ImplicitContainer<T> dtype)

Represents a sparse feature whose ids are set by hashing.

Use this when your sparse features are in string or integer format and you want to distribute the inputs into a finite number of buckets by hashing: `output_id = Hash(input_feature_string) % bucket_size` for string-type input. An int-type input is first converted to its string representation and then hashed by the same formula.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<object> hash_bucket_size
An int > 1. The number of buckets.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
HashedCategoricalColumn
A `HashedCategoricalColumn`.
Show Example
keywords = categorical_column_with_hash_bucket("keywords", 10000)
            columns = [keywords,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 

# or
            keywords_embedded = embedding_column(keywords, 16)
            columns = [keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

HashedCategoricalColumn categorical_column_with_hash_bucket(string key, IEnumerable<object> hash_bucket_size, ImplicitContainer<T> dtype)

Represents a sparse feature whose ids are set by hashing.

Use this when your sparse features are in string or integer format and you want to distribute the inputs into a finite number of buckets by hashing: `output_id = Hash(input_feature_string) % bucket_size` for string-type input. An int-type input is first converted to its string representation and then hashed by the same formula.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<object> hash_bucket_size
An int > 1. The number of buckets.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
HashedCategoricalColumn
A `HashedCategoricalColumn`.
Show Example
keywords = categorical_column_with_hash_bucket("keywords", 10000)
            columns = [keywords,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 

# or
            keywords_embedded = embedding_column(keywords, 16)
            columns = [keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

HashedCategoricalColumn categorical_column_with_hash_bucket(IEnumerable<string> key, string hash_bucket_size, ImplicitContainer<T> dtype)

Represents a sparse feature whose ids are set by hashing.

Use this when your sparse features are in string or integer format and you want to distribute the inputs into a finite number of buckets by hashing: `output_id = Hash(input_feature_string) % bucket_size` for string-type input. An int-type input is first converted to its string representation and then hashed by the same formula.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string hash_bucket_size
An int > 1. The number of buckets.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
HashedCategoricalColumn
A `HashedCategoricalColumn`.
Show Example
keywords = categorical_column_with_hash_bucket("keywords", 10000)
            columns = [keywords,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 

# or
            keywords_embedded = embedding_column(keywords, 16)
            columns = [keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

object categorical_column_with_hash_bucket_dyn(object key, object hash_bucket_size, ImplicitContainer<T> dtype)

Represents a sparse feature whose ids are set by hashing.

Use this when your sparse features are in string or integer format and you want to distribute the inputs into a finite number of buckets by hashing: `output_id = Hash(input_feature_string) % bucket_size` for string-type input. An int-type input is first converted to its string representation and then hashed by the same formula.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
object hash_bucket_size
An int > 1. The number of buckets.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
object
A `HashedCategoricalColumn`.
Show Example
keywords = categorical_column_with_hash_bucket("keywords", 10000)
            columns = [keywords,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 

# or
            keywords_embedded = embedding_column(keywords, 16)
            columns = [keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

IdentityCategoricalColumn categorical_column_with_identity(IEnumerable<string> key, string num_buckets, Nullable<int> default_value)

A `CategoricalColumn` that returns identity values.

Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.

Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

In the following examples, each input in the range `[0, 1000000)` is used as its own category ID. All other inputs are assigned the `default_value` of 0; note that a literal 0 in the input maps to that same default ID.

A linear-model example is shown below; for a DNN model, wrap the column in `embedding_column` to produce a dense input.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string num_buckets
Range of inputs and outputs is `[0, num_buckets)`.
Nullable<int> default_value
If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and out-of-range inputs will be replaced with it.
Returns
IdentityCategoricalColumn
A `CategoricalColumn` that returns identity values.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [video_id,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 
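
The identity mapping with the `default_value` fallback described above amounts to this small pure-Python sketch (illustrative only, not the actual op):

```python
def identity_id(value, num_buckets, default_value=None):
    # An in-range input is its own category id.
    if 0 <= value < num_buckets:
        return value
    # Out-of-range inputs fall back to default_value, or fail without one.
    if default_value is None:
        raise ValueError(f"id {value} out of range [0, {num_buckets})")
    return default_value

print([identity_id(v, 1000000, default_value=0)
       for v in [42, -3, 2000000, 0]])
# → [42, 0, 0, 0]
```

The last element shows the caveat from the text: a literal 0 in the input is indistinguishable from an out-of-range input mapped to `default_value=0`.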

IdentityCategoricalColumn categorical_column_with_identity(string key, int num_buckets, Nullable<int> default_value)

A `CategoricalColumn` that returns identity values.

Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.

Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

In the following examples, each input in the range `[0, 1000000)` is used as its own category ID. All other inputs are assigned the `default_value` of 0; note that a literal 0 in the input maps to that same default ID.

A linear-model example is shown below; for a DNN model, wrap the column in `embedding_column` to produce a dense input.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int num_buckets
Range of inputs and outputs is `[0, num_buckets)`.
Nullable<int> default_value
If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and out-of-range inputs will be replaced with it.
Returns
IdentityCategoricalColumn
A `CategoricalColumn` that returns identity values.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [video_id,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

IdentityCategoricalColumn categorical_column_with_identity(string key, string num_buckets, Nullable<int> default_value)

A `CategoricalColumn` that returns identity values.

Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.

Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

In the following examples, each input in the range `[0, 1000000)` is used as its own category ID. All other inputs are assigned the `default_value` of 0; note that a literal 0 in the input maps to that same default ID.

A linear-model example is shown below; for a DNN model, wrap the column in `embedding_column` to produce a dense input.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string num_buckets
Range of inputs and outputs is `[0, num_buckets)`.
Nullable<int> default_value
If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and out-of-range inputs will be replaced with it.
Returns
IdentityCategoricalColumn
A `CategoricalColumn` that returns identity values.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [video_id,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

IdentityCategoricalColumn categorical_column_with_identity(IEnumerable<string> key, int num_buckets, Nullable<int> default_value)

A `CategoricalColumn` that returns identity values.

Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.

Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

In the following examples, each input in the range `[0, 1000000)` is used as its own category ID. All other inputs are assigned the `default_value` of 0; note that a literal 0 in the input maps to that same default ID.

A linear-model example is shown below; for a DNN model, wrap the column in `embedding_column` to produce a dense input.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int num_buckets
Range of inputs and outputs is `[0, num_buckets)`.
Nullable<int> default_value
If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and out-of-range inputs will be replaced with it.
Returns
IdentityCategoricalColumn
A `CategoricalColumn` that returns identity values.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [video_id,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

IdentityCategoricalColumn categorical_column_with_identity(IEnumerable<string> key, IEnumerable<object> num_buckets, Nullable<int> default_value)

A `CategoricalColumn` that returns identity values.

Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.

Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

In the following examples, each input in the range `[0, 1000000)` is used as its own category ID. All other inputs are assigned the `default_value` of 0; note that a literal 0 in the input maps to that same default ID.

A linear-model example is shown below; for a DNN model, wrap the column in `embedding_column` to produce a dense input.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<object> num_buckets
Range of inputs and outputs is `[0, num_buckets)`.
Nullable<int> default_value
If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and out-of-range inputs will be replaced with it.
Returns
IdentityCategoricalColumn
A `CategoricalColumn` that returns identity values.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [video_id,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

IdentityCategoricalColumn categorical_column_with_identity(string key, IEnumerable<object> num_buckets, Nullable<int> default_value)

A `CategoricalColumn` that returns identity values.

Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.

Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

In the following examples, each input in the range `[0, 1000000)` is used as its own category ID. All other inputs are assigned the `default_value` of 0; note that a literal 0 in the input maps to that same default ID.

A linear-model example is shown below; for a DNN model, wrap the column in `embedding_column` to produce a dense input.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<object> num_buckets
Range of inputs and outputs is `[0, num_buckets)`.
Nullable<int> default_value
If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and out-of-range inputs will be replaced with it.
Returns
IdentityCategoricalColumn
A `CategoricalColumn` that returns identity values.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [video_id,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

object categorical_column_with_identity_dyn(object key, object num_buckets, object default_value)

A `CategoricalColumn` that returns identity values.

Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.

Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

In the following examples, each input in the range `[0, 1000000)` is used as its own category ID. All other inputs are assigned the `default_value` of 0; note that a literal 0 in the input maps to that same default ID.

A linear-model example is shown below; for a DNN model, wrap the column in `embedding_column` to produce a dense input.
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
object num_buckets
Range of inputs and outputs is `[0, num_buckets)`.
object default_value
If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and out-of-range inputs will be replaced with it.
Returns
object
A `CategoricalColumn` that returns identity values.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [video_id,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

VocabularyFileCategoricalColumn categorical_column_with_vocabulary_file(IEnumerable<string> key, IEnumerable<object> vocabulary_file, Nullable<int> vocabulary_size, int num_oov_buckets, Nullable<int> default_value, ImplicitContainer<T> dtype)

A `CategoricalColumn` with a vocabulary file.

Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: file '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs with values in that file are assigned an ID 0-49, corresponding to the value's line number. All other values are hashed and assigned an ID 50-54.

Example with `default_value`: file '/us/states.txt' contains 51 lines - the first line is 'XX', and the other 50 each have a 2-character U.S. state abbreviation. Both a literal 'XX' in the input and values missing from the file are assigned ID 0. All other inputs are assigned the corresponding line number 1-50.

An embedding can be built from either column with `embedding_column`.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<object> vocabulary_file
The vocabulary file name.
Nullable<int> vocabulary_size
Number of elements in the vocabulary. This must be no greater than the number of lines in `vocabulary_file`; if it is smaller, later values in the file are ignored. If `None`, it is set to the number of lines in `vocabulary_file`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` can not be specified with `default_value`.
Nullable<int> default_value
The integer ID value to return for out-of-vocabulary feature values, defaults to `-1`. This can not be specified with a positive `num_oov_buckets`.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
VocabularyFileCategoricalColumn
A `CategoricalColumn` with a vocabulary file.
Show Example
states = categorical_column_with_vocabulary_file(
                key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
                num_oov_buckets=5)
            columns = [states,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction = linear_model(features, columns) 
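
The lookup order described above (vocabulary line number, then OOV buckets, then `default_value`) can be sketched in pure Python. The `states` list and Python's built-in `hash` are stand-ins for the vocabulary file and TensorFlow's fingerprint hash, so only the ordering of the ID ranges matches the real op:

```python
def vocab_id(value, vocab, num_oov_buckets=0, default_value=-1):
    # In-vocabulary values get their line number as the id.
    if value in vocab:
        return vocab.index(value)
    # OOV values hash into the extra buckets after the vocabulary...
    if num_oov_buckets > 0:
        return len(vocab) + hash(value) % num_oov_buckets
    # ...or fall back to default_value.
    return default_value

states = ["AL", "AK", "AZ"]  # stands in for the lines of /us/states.txt
assert vocab_id("AK", states) == 1                         # line number
assert vocab_id("ZZ", states) == -1                        # default OOV id
assert 3 <= vocab_id("ZZ", states, num_oov_buckets=5) < 8  # OOV bucket range
```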

VocabularyFileCategoricalColumn categorical_column_with_vocabulary_file(string key, string vocabulary_file, Nullable<int> vocabulary_size, int num_oov_buckets, Nullable<int> default_value, ImplicitContainer<T> dtype)

A `CategoricalColumn` with a vocabulary file.

Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: file '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs with values in that file are assigned an ID 0-49, corresponding to the value's line number. All other values are hashed and assigned an ID 50-54.

Example with `default_value`: file '/us/states.txt' contains 51 lines - the first line is 'XX', and the other 50 each have a 2-character U.S. state abbreviation. Both a literal 'XX' in the input and values missing from the file are assigned ID 0. All other inputs are assigned the corresponding line number 1-50.

An embedding can be built from either column with `embedding_column`.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string vocabulary_file
The vocabulary file name.
Nullable<int> vocabulary_size
Number of elements in the vocabulary. This must be no greater than the number of lines in `vocabulary_file`; if it is smaller, later values are ignored. If `None`, it is set to the number of lines in `vocabulary_file`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Nullable<int> default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
VocabularyFileCategoricalColumn
A `CategoricalColumn` with a vocabulary file.
Show Example
states = categorical_column_with_vocabulary_file(
    key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
    num_oov_buckets=5)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
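The trailing mention of an embedding refers to wrapping the column in `embedding_column`. As a rough plain-Python sketch of what that lookup amounts to (sizes and names here are illustrative, not the library's API), each ID, OOV-bucket IDs included, indexes one trainable row of an embedding table:

```python
import random

vocabulary_size, num_oov_buckets, dim = 50, 5, 4
# one trainable vector per ID, OOV buckets included
table = [[random.random() for _ in range(dim)]
         for _ in range(vocabulary_size + num_oov_buckets)]

ids = [0, 49, 52]                  # two in-vocabulary IDs, one OOV-bucket ID
vectors = [table[i] for i in ids]  # the embedding lookup: plain row indexing
print(len(vectors), len(vectors[0]))  # 3 4
```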

VocabularyFileCategoricalColumn categorical_column_with_vocabulary_file(string key, int vocabulary_file, Nullable<int> vocabulary_size, int num_oov_buckets, Nullable<int> default_value, ImplicitContainer<T> dtype)

A `CategoricalColumn` with a vocabulary file.

Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs whose values appear in that file are assigned an ID 0-49 corresponding to their line number; all other values are hashed and assigned an ID in the range 50-54.

Example with `default_value`: File '/us/states.txt' contains 51 lines: the first line is 'XX', and the remaining 50 each hold a 2-character U.S. state abbreviation. Both a literal 'XX' in the input and values missing from the file are assigned ID 0; all other inputs are assigned their corresponding line number, 1-50.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int vocabulary_file
The vocabulary file name.
Nullable<int> vocabulary_size
Number of elements in the vocabulary. This must be no greater than the number of lines in `vocabulary_file`; if it is smaller, later values are ignored. If `None`, it is set to the number of lines in `vocabulary_file`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Nullable<int> default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
VocabularyFileCategoricalColumn
A `CategoricalColumn` with a vocabulary file.
Show Example
states = categorical_column_with_vocabulary_file(
    key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
    num_oov_buckets=5)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

VocabularyFileCategoricalColumn categorical_column_with_vocabulary_file(string key, IEnumerable<object> vocabulary_file, Nullable<int> vocabulary_size, int num_oov_buckets, Nullable<int> default_value, ImplicitContainer<T> dtype)

A `CategoricalColumn` with a vocabulary file.

Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs whose values appear in that file are assigned an ID 0-49 corresponding to their line number; all other values are hashed and assigned an ID in the range 50-54.

Example with `default_value`: File '/us/states.txt' contains 51 lines: the first line is 'XX', and the remaining 50 each hold a 2-character U.S. state abbreviation. Both a literal 'XX' in the input and values missing from the file are assigned ID 0; all other inputs are assigned their corresponding line number, 1-50.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<object> vocabulary_file
The vocabulary file name.
Nullable<int> vocabulary_size
Number of elements in the vocabulary. This must be no greater than the number of lines in `vocabulary_file`; if it is smaller, later values are ignored. If `None`, it is set to the number of lines in `vocabulary_file`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Nullable<int> default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
VocabularyFileCategoricalColumn
A `CategoricalColumn` with a vocabulary file.
Show Example
states = categorical_column_with_vocabulary_file(
    key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
    num_oov_buckets=5)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

VocabularyFileCategoricalColumn categorical_column_with_vocabulary_file(IEnumerable<string> key, string vocabulary_file, Nullable<int> vocabulary_size, int num_oov_buckets, Nullable<int> default_value, ImplicitContainer<T> dtype)

A `CategoricalColumn` with a vocabulary file.

Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs whose values appear in that file are assigned an ID 0-49 corresponding to their line number; all other values are hashed and assigned an ID in the range 50-54.

Example with `default_value`: File '/us/states.txt' contains 51 lines: the first line is 'XX', and the remaining 50 each hold a 2-character U.S. state abbreviation. Both a literal 'XX' in the input and values missing from the file are assigned ID 0; all other inputs are assigned their corresponding line number, 1-50.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string vocabulary_file
The vocabulary file name.
Nullable<int> vocabulary_size
Number of elements in the vocabulary. This must be no greater than the number of lines in `vocabulary_file`; if it is smaller, later values are ignored. If `None`, it is set to the number of lines in `vocabulary_file`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Nullable<int> default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
VocabularyFileCategoricalColumn
A `CategoricalColumn` with a vocabulary file.
Show Example
states = categorical_column_with_vocabulary_file(
    key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
    num_oov_buckets=5)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

VocabularyFileCategoricalColumn categorical_column_with_vocabulary_file(IEnumerable<string> key, int vocabulary_file, Nullable<int> vocabulary_size, int num_oov_buckets, Nullable<int> default_value, ImplicitContainer<T> dtype)

A `CategoricalColumn` with a vocabulary file.

Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs whose values appear in that file are assigned an ID 0-49 corresponding to their line number; all other values are hashed and assigned an ID in the range 50-54.

Example with `default_value`: File '/us/states.txt' contains 51 lines: the first line is 'XX', and the remaining 50 each hold a 2-character U.S. state abbreviation. Both a literal 'XX' in the input and values missing from the file are assigned ID 0; all other inputs are assigned their corresponding line number, 1-50.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int vocabulary_file
The vocabulary file name.
Nullable<int> vocabulary_size
Number of elements in the vocabulary. This must be no greater than the number of lines in `vocabulary_file`; if it is smaller, later values are ignored. If `None`, it is set to the number of lines in `vocabulary_file`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Nullable<int> default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
ImplicitContainer<T> dtype
The type of features. Only string and integer types are supported.
Returns
VocabularyFileCategoricalColumn
A `CategoricalColumn` with a vocabulary file.
Show Example
states = categorical_column_with_vocabulary_file(
    key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
    num_oov_buckets=5)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(string key, int vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input present in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID of 4 or 5.

Example with `default_value`: with a five-value vocabulary and `default_value=0`, each input present in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned ID 0.

To make an embedding from either column, wrap it in `embedding_column`.
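The two mappings described above can be sketched in plain Python (`vocab_list_lookup` is a hypothetical helper used only to illustrate the rules, not part of the API; the real column performs an equivalent lookup inside the TensorFlow graph):

```python
def vocab_list_lookup(value, vocabulary_list, num_oov_buckets=0, default_value=-1):
    """Map one feature value to an integer ID, mimicking the column's rules."""
    try:
        return vocabulary_list.index(value)   # in-vocabulary: ID = index in the list
    except ValueError:
        if num_oov_buckets > 0:
            # out-of-vocabulary: hashed into one of the extra buckets
            return len(vocabulary_list) + hash(value) % num_oov_buckets
        return default_value

print(vocab_list_lookup('B', ('R', 'G', 'B', 'Y'), num_oov_buckets=2))     # 2
print(vocab_list_lookup('B', ('X', 'R', 'G', 'B', 'Y'), default_value=0))  # 3
print(vocab_list_lookup('Z', ('X', 'R', 'G', 'B', 'Y'), default_value=0))  # 0
```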
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
    key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
    num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(IEnumerable<string> key, ndarray vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input present in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID of 4 or 5.

Example with `default_value`: with a five-value vocabulary and `default_value=0`, each input present in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned ID 0.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ndarray vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
    key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
    num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(IEnumerable<string> key, int vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input present in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID of 4 or 5.

Example with `default_value`: with a five-value vocabulary and `default_value=0`, each input present in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned ID 0.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
    key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
    num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(IEnumerable<string> key, string vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input present in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID of 4 or 5.

Example with `default_value`: with a five-value vocabulary and `default_value=0`, each input present in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned ID 0.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
    key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
    num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(string key, ndarray vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input present in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID of 4 or 5.

Example with `default_value`: with a five-value vocabulary and `default_value=0`, each input present in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned ID 0.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ndarray vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
    key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
    num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(string key, IEnumerable<int> vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input present in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID of 4 or 5.

Example with `default_value`: with a five-value vocabulary and `default_value=0`, each input present in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned ID 0.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<int> vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
    key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
    num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(string key, string vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input present in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID of 4 or 5.

Example with `default_value`: with a five-value vocabulary and `default_value=0`, each input present in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned ID 0.

To make an embedding from either column, wrap it in `embedding_column`.
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
string vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs are assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
    key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
    num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)

VocabularyListCategoricalColumn categorical_column_with_vocabulary_list(IEnumerable<string> key, IEnumerable<int> vocabulary_list, DType dtype, int default_value, int num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input found in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID in the range 4-5. Example with `default_value`: each input found in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned the `default_value` of 0. Either column can then be wrapped in an `embedding_column` to produce embeddings.
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
IEnumerable<int> vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
DType dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
int default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
int num_oov_buckets
A non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
VocabularyListCategoricalColumn
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
                key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
                num_oov_buckets=2)
            columns = [colors,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

object categorical_column_with_vocabulary_list_dyn(object key, object vocabulary_list, object dtype, ImplicitContainer<T> default_value, ImplicitContainer<T> num_oov_buckets)

A `CategoricalColumn` with in-memory vocabulary.

Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.

For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.

Example with `num_oov_buckets`: each input found in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2); all other inputs are hashed and assigned an ID in the range 4-5. Example with `default_value`: each input found in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3); all other inputs are assigned the `default_value` of 0. Either column can then be wrapped in an `embedding_column` to produce embeddings.
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
object vocabulary_list
An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`.
object dtype
The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`.
ImplicitContainer<T> default_value
The integer ID value to return for out-of-vocabulary feature values; defaults to `-1`. This cannot be specified together with a positive `num_oov_buckets`.
ImplicitContainer<T> num_oov_buckets
A non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` cannot be specified together with `default_value`.
Returns
object
A `CategoricalColumn` with in-memory vocabulary.
Show Example
colors = categorical_column_with_vocabulary_list(
                key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
                num_oov_buckets=2)
            columns = [colors,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

CrossedColumn crossed_column(IEnumerable<object> keys, Nullable<int> hash_bucket_size, Nullable<int> hash_key)

Returns a column for performing crosses of categorical features.

Crossed features will be hashed according to `hash_bucket_size`. Conceptually, the transformation can be thought of as: Hash(cartesian product of features) % `hash_bucket_size`

For example, if the input features are:

* a `SparseTensor` referred to by the first key, and
* a `SparseTensor` referred to by the second key,

then the crossed feature contains the hashed combinations of values from both inputs. You can create a linear model directly with crosses of string features, or apply a vocabulary lookup before crossing. If an input feature is of numeric type, make it categorical first with `categorical_column_with_identity` or `bucketized_column`. To use a crossed column in a DNN model, wrap it in an `embedding_column`.
Parameters
IEnumerable<object> keys
An iterable identifying the features to be crossed. Each element can be either: * string: Will use the corresponding feature which must be of string type. * `CategoricalColumn`: Will use the transformed tensor produced by this column. Does not support hashed categorical column.
Nullable<int> hash_bucket_size
An int > 1. The number of buckets.
Nullable<int> hash_key
Specify the hash_key that will be used by the `FingerprintCat64` function to combine the crosses fingerprints on SparseCrossOp (optional).
Returns
CrossedColumn
A `CrossedColumn`.
Show Example
shape = [2, 2]
            {
                [0, 0]: "a"
                [1, 0]: "b"
                [1, 1]: "c"
            } 
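The transformation described above, Hash(cartesian product of features) % `hash_bucket_size`, can be sketched in a few lines of pure Python. The function name `crossed_ids` is illustrative, and plain `hash` stands in for the `FingerprintCat64`-based combination the real `SparseCrossOp` performs:

```python
import itertools

def crossed_ids(feature_values, hash_bucket_size):
    """feature_values: one list of values per feature being crossed.
    Returns a bucket ID for every element of the Cartesian product."""
    ids = []
    for combo in itertools.product(*feature_values):
        # Hash the combined values, then fold into the bucket range.
        ids.append(hash(combo) % hash_bucket_size)
    return ids

# Two features with two values each -> 4 crossed combinations.
buckets = crossed_ids([['a', 'b'], ['X', 'Y']], hash_bucket_size=10)
```

Every crossed value lands in `[0, hash_bucket_size)`, which is why distinct combinations can collide when `hash_bucket_size` is small relative to the number of combinations.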

object crossed_column_dyn(object keys, object hash_bucket_size, object hash_key)

Returns a column for performing crosses of categorical features.

Crossed features will be hashed according to `hash_bucket_size`. Conceptually, the transformation can be thought of as: Hash(cartesian product of features) % `hash_bucket_size`

For example, if the input features are:

* a `SparseTensor` referred to by the first key, and
* a `SparseTensor` referred to by the second key,

then the crossed feature contains the hashed combinations of values from both inputs. You can create a linear model directly with crosses of string features, or apply a vocabulary lookup before crossing. If an input feature is of numeric type, make it categorical first with `categorical_column_with_identity` or `bucketized_column`. To use a crossed column in a DNN model, wrap it in an `embedding_column`.
Parameters
object keys
An iterable identifying the features to be crossed. Each element can be either: * string: Will use the corresponding feature which must be of string type. * `CategoricalColumn`: Will use the transformed tensor produced by this column. Does not support hashed categorical column.
object hash_bucket_size
An int > 1. The number of buckets.
object hash_key
Specify the hash_key that will be used by the `FingerprintCat64` function to combine the crosses fingerprints on SparseCrossOp (optional).
Returns
object
A `CrossedColumn`.
Show Example
shape = [2, 2]
            {
                [0, 0]: "a"
                [1, 0]: "b"
                [1, 1]: "c"
            } 

EmbeddingColumn embedding_column(_CategoricalColumn categorical_column, int dimension, string combiner, PythonFunctionContainer initializer, string ckpt_to_load_from, string tensor_name_in_ckpt, Nullable<double> max_norm, bool trainable)

`DenseColumn` that converts from sparse, categorical input.

Use this when your inputs are sparse, but you want to convert them to a dense representation (e.g., to feed to a DNN).

Inputs must be a `CategoricalColumn` created by any of the `categorical_column_*` functions. The example below shows `embedding_column` used with `DNNClassifier` and with a custom `model_fn`:
Parameters
_CategoricalColumn categorical_column
A `CategoricalColumn` created by a `categorical_column_with_*` function. This column produces the sparse IDs that are inputs to the embedding lookup.
int dimension
An integer specifying dimension of the embedding, must be > 0.
string combiner
A string specifying how to reduce when there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see `tf.embedding_lookup_sparse`.
PythonFunctionContainer initializer
A variable initializer function to be used in embedding variable initialization. If not specified, defaults to `truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dimension)`.
string ckpt_to_load_from
String representing checkpoint name/pattern from which to restore column weights. Required if `tensor_name_in_ckpt` is not `None`.
string tensor_name_in_ckpt
Name of the `Tensor` in `ckpt_to_load_from` from which to restore the column weights. Required if `ckpt_to_load_from` is not `None`.
Nullable<double> max_norm
If not `None`, embedding values are l2-normalized to this value.
bool trainable
Whether or not the embedding is trainable. Default is True.
Returns
EmbeddingColumn
`DenseColumn` that converts from sparse input.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [embedding_column(video_id, 9),...] 

estimator = tf.estimator.DNNClassifier(feature_columns=columns,...)

label_column = ...
def input_fn():
    features = tf.io.parse_example(
        ..., features=make_parse_example_spec(columns + [label_column]))
    labels = features.pop(label_column.name)
    return features, labels

estimator.train(input_fn=input_fn, steps=100)
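To clarify what the lookup and `combiner` do per example, here is a hypothetical pure-Python sketch: the sparse categorical IDs index rows of the embedding matrix, and the combiner reduces those rows to one dense vector (the function name `embed` and the toy matrix are illustrative, not the real API):

```python
def embed(ids, embedding_matrix, combiner='mean'):
    """Reduce the embedding rows selected by `ids` into one dense vector."""
    rows = [embedding_matrix[i] for i in ids]
    summed = [sum(col) for col in zip(*rows)]
    if combiner == 'sum':
        return summed
    if combiner == 'mean':
        # Average over the number of ids in the row.
        return [s / len(rows) for s in summed]
    if combiner == 'sqrtn':
        # Divide by sqrt(number of ids), i.e. the L2 norm of all-ones weights.
        n = len(rows) ** 0.5
        return [s / n for s in summed]
    raise ValueError(combiner)

matrix = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 buckets, dimension=2
vec = embed([0, 2], matrix, combiner='mean')   # -> [3.0, 4.0]
```

During training the real embedding matrix is a trainable variable of shape `(num_buckets, dimension)`, initialized by `initializer` and optionally clipped to `max_norm`.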

EmbeddingColumn embedding_column(_CategoricalColumn categorical_column, int dimension, string combiner, string initializer, string ckpt_to_load_from, string tensor_name_in_ckpt, Nullable<double> max_norm, bool trainable)

`DenseColumn` that converts from sparse, categorical input.

Use this when your inputs are sparse, but you want to convert them to a dense representation (e.g., to feed to a DNN).

Inputs must be a `CategoricalColumn` created by any of the `categorical_column_*` functions. The example below shows `embedding_column` used with `DNNClassifier` and with a custom `model_fn`:
Parameters
_CategoricalColumn categorical_column
A `CategoricalColumn` created by a `categorical_column_with_*` function. This column produces the sparse IDs that are inputs to the embedding lookup.
int dimension
An integer specifying dimension of the embedding, must be > 0.
string combiner
A string specifying how to reduce when there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see `tf.embedding_lookup_sparse`.
string initializer
A variable initializer function to be used in embedding variable initialization. If not specified, defaults to `truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dimension)`.
string ckpt_to_load_from
String representing checkpoint name/pattern from which to restore column weights. Required if `tensor_name_in_ckpt` is not `None`.
string tensor_name_in_ckpt
Name of the `Tensor` in `ckpt_to_load_from` from which to restore the column weights. Required if `ckpt_to_load_from` is not `None`.
Nullable<double> max_norm
If not `None`, embedding values are l2-normalized to this value.
bool trainable
Whether or not the embedding is trainable. Default is True.
Returns
EmbeddingColumn
`DenseColumn` that converts from sparse input.
Show Example
video_id = categorical_column_with_identity(
                key='video_id', num_buckets=1000000, default_value=0)
            columns = [embedding_column(video_id, 9),...] 

estimator = tf.estimator.DNNClassifier(feature_columns=columns,...)

label_column = ...
def input_fn():
    features = tf.io.parse_example(
        ..., features=make_parse_example_spec(columns + [label_column]))
    labels = features.pop(label_column.name)
    return features, labels

estimator.train(input_fn=input_fn, steps=100)

IndicatorColumn indicator_column(_CategoricalColumn categorical_column)

Represents multi-hot representation of given categorical column.

- For a DNN model, `indicator_column` can be used to wrap any `categorical_column_*` (e.g., to feed to a DNN). Consider using `embedding_column` if the number of buckets/unique values is large.

- For a wide (aka linear) model, `indicator_column` is the internal representation used for a categorical column when it is passed directly (as any element of `feature_columns`) to `linear_model`. See `linear_model` for details.
Parameters
_CategoricalColumn categorical_column
A `CategoricalColumn` which is created by `categorical_column_with_*` or `crossed_column` functions.
Returns
IndicatorColumn
An `IndicatorColumn`.
Show Example
name = indicator_column(categorical_column_with_vocabulary_list(
                'name', ['bob', 'george', 'wanda']))
            columns = [name,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

dense_tensor == [[1, 0, 0]]  # If "name" bytes_list is ["bob"]
dense_tensor == [[1, 0, 1]]  # If "name" bytes_list is ["bob", "wanda"]
dense_tensor == [[2, 0, 0]]  # If "name" bytes_list is ["bob", "bob"]
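The multi-hot outputs above can be reproduced with a small pure-Python sketch of the encoding (the function name `multi_hot` is illustrative): each example becomes a vector of per-id counts over the vocabulary, so repeated values accumulate.

```python
def multi_hot(values, vocabulary):
    """Count occurrences of each vocabulary entry in `values`."""
    vec = [0] * len(vocabulary)
    for v in values:
        if v in vocabulary:
            vec[vocabulary.index(v)] += 1  # repeated ids accumulate
    return vec

vocab = ['bob', 'george', 'wanda']
multi_hot(['bob'], vocab)           # -> [1, 0, 0]
multi_hot(['bob', 'wanda'], vocab)  # -> [1, 0, 1]
multi_hot(['bob', 'bob'], vocab)    # -> [2, 0, 0]
```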

object indicator_column_dyn(object categorical_column)

Represents multi-hot representation of given categorical column.

- For DNN model, `indicator_column` can be used to wrap any `categorical_column_*` (e.g., to feed to DNN). Consider to Use `embedding_column` if the number of buckets/unique(values) are large.

- For Wide (aka linear) model, `indicator_column` is the internal representation for categorical column when passing categorical column directly (as any element in feature_columns) to `linear_model`. See `linear_model` for details.
Parameters
object categorical_column
A `CategoricalColumn` which is created by `categorical_column_with_*` or `crossed_column` functions.
Returns
object
An `IndicatorColumn`.
Show Example
name = indicator_column(categorical_column_with_vocabulary_list(
                'name', ['bob', 'george', 'wanda']))
            columns = [name,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

dense_tensor == [[1, 0, 0]]  # If "name" bytes_list is ["bob"]
dense_tensor == [[1, 0, 1]]  # If "name" bytes_list is ["bob", "wanda"]
dense_tensor == [[2, 0, 0]]  # If "name" bytes_list is ["bob", "bob"]

Tensor input_layer(IDictionary<object, object> features, ValueTuple<_EmbeddingColumn> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single `Tensor`.

Example:
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
ValueTuple<_EmbeddingColumn> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `tf.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to list of `Variable`s. For example, after the call, we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from '_FeatureColumn' to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 
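The shape contract of `input_layer` can be illustrated with a pure-Python sketch (the function name `input_layer_sketch` is hypothetical): each dense feature column contributes a `(batch_size, column_dim)` block, and the blocks are concatenated along the last axis into one float tensor, so `first_layer_dimension` is the sum of the per-column dimensions.

```python
def input_layer_sketch(column_outputs):
    """column_outputs: list of per-column batches (lists of rows).
    Returns the rows concatenated feature-wise, per example."""
    batch = len(column_outputs[0])
    return [
        [x for col in column_outputs for x in col[row]]
        for row in range(batch)
    ]

price = [[9.99], [1.5]]                        # numeric_column, dim 1
keywords = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]  # embedding_column, dim 3
dense = input_layer_sketch([price, keywords])
# dense has shape (2, 4): first_layer_dimension = 1 + 3
```

The real `input_layer` additionally performs each column's transformation (parsing, lookup, embedding) before this concatenation step.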

Tensor input_layer(IDictionary<object, object> features, IEnumerable<object> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single `Tensor`.

Example:
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IEnumerable<object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `tf.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to list of `Variable`s. For example, after the call, we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from '_FeatureColumn' to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(IDictionary<object, object> features, IDictionary<string, object> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single `Tensor`.

Example:
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IDictionary<string, object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `tf.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to list of `Variable`s. For example, after the call, we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from '_FeatureColumn' to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(IDictionary<object, object> features, _DenseColumn feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single `Tensor`.

Example:
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
_DenseColumn feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `tf.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to list of `Variable`s. For example, after the call, we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from '_FeatureColumn' to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(IGraphNodeBase features, IDictionary<string, object> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single `Tensor`.

Example:
Parameters
IGraphNodeBase features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IDictionary<string, object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `tf.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to list of `Variable`s. For example, after the call, we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from '_FeatureColumn' to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(IGraphNodeBase features, IEnumerable<object> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single `Tensor`.

Example:
Parameters
IGraphNodeBase features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IEnumerable<object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `tf.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to list of `Variable`s. For example, after the call, we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from '_FeatureColumn' to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 
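Conceptually, `input_layer` asks each dense feature column to produce a `(batch_size, dim)` block and concatenates those blocks into a single `(batch_size, first_layer_dimension)` tensor. The following plain-Python sketch (hypothetical names, not the library's implementation) illustrates that shape arithmetic without TensorFlow:

```python
# Hypothetical sketch of what input_layer does to column-oriented data.
# Each (name, dim) pair stands in for a _DenseColumn; `features` maps each
# column name to per-example value lists.

def sketch_input_layer(features, columns):
    batch_size = len(next(iter(features.values())))
    output = []
    for row in range(batch_size):
        dense_row = []
        for name, dim in columns:
            values = features[name][row]
            assert len(values) == dim, "each column yields a fixed-width block"
            dense_row.extend(values)   # concatenate along the feature axis
        output.append(dense_row)
    return output  # shape: (batch_size, sum of column dims)

features = {"price": [[9.9], [3.5]],
            "keywords_embedded": [[0.1, 0.2], [0.3, 0.4]]}
columns = [("price", 1), ("keywords_embedded", 2)]
dense_tensor = sketch_input_layer(features, columns)
# dense_tensor == [[9.9, 0.1, 0.2], [3.5, 0.3, 0.4]], i.e. shape (2, 3)
```

Here first_layer_dimension is 1 + 2 = 3: the numeric column contributes one value per example and the embedding column contributes its embedding dimension.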

Tensor input_layer(IGraphNodeBase features, IEnumerator<_NumericColumn> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
IGraphNodeBase features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IEnumerator<_NumericColumn> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(IDictionary<object, object> features, IEnumerator<_NumericColumn> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IEnumerator<_NumericColumn> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(IGraphNodeBase features, _DenseColumn feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
IGraphNodeBase features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
_DenseColumn feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(IGraphNodeBase features, ValueTuple<_EmbeddingColumn> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
IGraphNodeBase features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
ValueTuple<_EmbeddingColumn> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(PythonClassContainer features, IEnumerable<object> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
PythonClassContainer features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IEnumerable<object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(PythonClassContainer features, IEnumerator<_NumericColumn> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
PythonClassContainer features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IEnumerator<_NumericColumn> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(PythonClassContainer features, ValueTuple<_EmbeddingColumn> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
PythonClassContainer features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
ValueTuple<_EmbeddingColumn> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(PythonClassContainer features, _DenseColumn feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
PythonClassContainer features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
_DenseColumn feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor input_layer(PythonClassContainer features, IDictionary<string, object> feature_columns, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, object> cols_to_vars, IDictionary<object, object> cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
PythonClassContainer features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
IDictionary<string, object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, object> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
IDictionary<object, object> cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
Tensor
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

object input_layer_dyn(object features, object feature_columns, object weight_collections, ImplicitContainer<T> trainable, object cols_to_vars, object cols_to_output_tensors)

Returns a dense `Tensor` as input layer based on given `feature_columns`.

Generally, a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single `Tensor`.

Example:
Parameters
object features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values can be a `SparseTensor` or a `Tensor`, depending on the corresponding `_FeatureColumn`.
object feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_DenseColumn` such as `numeric_column`, `embedding_column`, `bucketized_column`, `indicator_column`. If you have categorical features, you can wrap them with an `embedding_column` or `indicator_column`.
object weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.
ImplicitContainer<T> trainable
If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
object cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<tf.Variable 'some_variable:0' shape=(5, 10)>, <tf.Variable 'some_variable:1' shape=(5, 10)>]}.
object cols_to_output_tensors
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated output `Tensor`s.
Returns
object
A `Tensor` which represents the input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is `float32`. first_layer_dimension is determined based on the given `feature_columns`.
Show Example
price = numeric_column('price')
            keywords_embedded = embedding_column(
                categorical_column_with_hash_bucket("keywords", 10000), dimension=16)
            columns = [price, keywords_embedded, ...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns)
            for units in [128, 64, 32]:
              dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)
            prediction = tf.compat.v1.layers.dense(dense_tensor, 1) 

Tensor linear_model(IDictionary<object, object> features, string feature_columns, int units, string sparse_combiner, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, IEnumerable<object>> cols_to_vars)

Returns a linear prediction `Tensor` based on given `feature_columns`.

This function generates a weighted sum based on output dimension `units`. The weighted sum refers to logits in classification problems, and to the prediction itself in linear regression problems.

Note on supported columns: `linear_model` treats categorical columns as `indicator_column`s. To be specific, assume the input `SparseTensor` looks like the example below: `linear_model` implicitly assigns weights for the presence of "a", "b", "c", just like `indicator_column`, while `input_layer` explicitly requires wrapping each categorical column with an `embedding_column` or an `indicator_column`.

Example of usage: The `sparse_combiner` argument works as follows. For example, for two features represented as categorical columns, with `sparse_combiner` set to "mean", the linear model outputs are:

``` y_0 = 1.0 / 2.0 * ( w_a + w_b ) + w_d + b y_1 = w_c + 1.0 / 3.0 * ( w_e + 2.0 * w_f ) + b ```

where `y_i` is the output, `b` is the bias, and `w_x` is the weight assigned to the presence of `x` in the input features.
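The combiner arithmetic above can be checked with a small stand-alone sketch (a hypothetical helper, not the library's code path): for one example's row in one categorical column, sum the weights of the ids present, then normalize by the number of sparse entries according to `sparse_combiner`:

```python
import math

def combine_row(ids, weights, combiner="sum"):
    """Reduce one example's entries of one multivalent categorical column."""
    total = sum(weights[i] for i in ids)  # w_x summed over present ids
    n = len(ids)                          # number of sparse entries in the row
    if combiner == "sum":
        return total                      # no normalization
    if combiner == "mean":
        return total / n                  # L1-style normalization
    if combiner == "sqrtn":
        return total / math.sqrt(n)       # L2-style normalization
    raise ValueError("unsupported combiner: %s" % combiner)

# Mirroring the formulas above: one row holds entries {"a", "b"}; another
# holds {"e", "f", "f"} (note "f" appears twice, hence the 2.0 * w_f term).
w = {"a": 1.0, "b": 3.0, "e": 2.0, "f": 5.0}
term_y0 = combine_row(["a", "b"], w, "mean")       # (w_a + w_b) / 2 = 2.0
term_y1 = combine_row(["e", "f", "f"], w, "mean")  # (w_e + 2*w_f) / 3 = 4.0
```

With `combiner="sum"` the same rows would simply yield `w_a + w_b` and `w_e + 2*w_f`, matching the default behaviour described for the `sparse_combiner` parameter.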
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example, `numeric_column('price')` will look at the 'price' key in this dict. Values are `Tensor` or `SparseTensor` depending on the corresponding `_FeatureColumn`.
string feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_FeatureColumn`.
int units
An integer, dimensionality of the output space. Default value is 1.
string sparse_combiner
A string specifying how to reduce if a categorical column is multivalent. Except `numeric_column`, almost all columns passed to `linear_model` are considered as categorical columns. It combines each categorical column independently. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default for linear model. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns. * "sum": do not normalize features in the column * "mean": do l1 normalization on features in the column * "sqrtn": do l2 normalization on features in the column
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that, variables will also be added to collections tf.GraphKeys.GLOBAL_VARIABLES and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, IEnumerable<object>> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_NumericColumn(key='numeric_feature1', shape=(1,)): [], 'bias': [], _NumericColumn(key='numeric_feature2', shape=(2,)): []}. If a column creates no variables, its value will be an empty list. Note that cols_to_vars will also contain a string key 'bias' that maps to a list of Variables.
Returns
Tensor
A `Tensor` which represents predictions/logits of a linear model. Its shape is (batch_size, units) and its dtype is `float32`.
Show Example
shape = [2, 2]
            {
                [0, 0]: "a"
                [1, 0]: "b"
                [1, 1]: "c"
            } 
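The note above says `linear_model` scores the presence of "a", "b", and "c" the way `indicator_column` does; expanding that `SparseTensor` into a dense multi-hot matrix makes this concrete (the vocabulary order is assumed for illustration):

```python
# The SparseTensor above, shape [2, 2], as (index -> value) pairs.
sparse = {(0, 0): "a", (1, 0): "b", (1, 1): "c"}
vocab = ["a", "b", "c"]
batch_size = 2

# indicator_column-style encoding: one multi-hot row per example,
# marking which vocabulary ids are present.
multi_hot = [[0.0] * len(vocab) for _ in range(batch_size)]
for (row, _col), value in sparse.items():
    multi_hot[row][vocab.index(value)] = 1.0
# A linear model then learns one weight per vocabulary id plus a bias.
```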

Tensor linear_model(IDictionary<object, object> features, ValueTuple<_CrossedColumn> feature_columns, int units, string sparse_combiner, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, IEnumerable<object>> cols_to_vars)

Returns a linear prediction `Tensor` based on given `feature_columns`.

This function generates a weighted sum based on output dimension `units`. Weighted sum refers to logits in classification problems. It refers to the prediction itself for linear regression problems.

Note on supported columns: `linear_model` treats categorical columns as `indicator_column`s. Specifically, assuming the input `SparseTensor` looks like the one in the example below, `linear_model` implicitly assigns weights for the presence of "a", "b", and "c", just like `indicator_column` would, whereas `input_layer` requires each categorical column to be explicitly wrapped in an `embedding_column` or an `indicator_column`.

Example of usage: the `sparse_combiner` argument works as follows. For example, given two features represented as categorical columns, with `sparse_combiner` set to "mean" the linear model outputs are:

```
y_0 = 1.0 / 2.0 * (w_a + w_b) + w_d + b
y_1 = w_c + 1.0 / 3.0 * (w_e + 2.0 * w_f) + b
```

where `y_i` is the output, `b` is the bias, and `w_x` is the weight assigned to the presence of `x` in the input features.
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example `numeric_column('price')` will look at 'price' key in this dict. Values are `Tensor` or `SparseTensor` depending on corresponding `_FeatureColumn`.
ValueTuple<_CrossedColumn> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_FeatureColumn`s.
int units
An integer, dimensionality of the output space. Default value is 1.
string sparse_combiner
A string specifying how to reduce a categorical column when it is multivalent. Except for `numeric_column`, almost all columns passed to `linear_model` are treated as categorical columns. Each categorical column is combined independently. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default for the linear model. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns.
* "sum": do not normalize features in the column
* "mean": do L1 normalization on features in the column
* "sqrtn": do L2 normalization on features in the column
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections tf.GraphKeys.GLOBAL_VARIABLES and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, IEnumerable<object>> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_NumericColumn(key='numeric_feature1', shape=(1,)): [], 'bias': [], _NumericColumn(key='numeric_feature2', shape=(2,)): []}. If a column creates no variables, its value will be an empty list. Note that cols_to_vars will also contain a string key 'bias' that maps to a list of Variables.
Returns
Tensor
A `Tensor` which represents predictions/logits of a linear model. Its shape is (batch_size, units) and its dtype is `float32`.
Show Example
shape = [2, 2]
            {
                [0, 0]: "a"
                [1, 0]: "b"
                [1, 1]: "c"
            } 

Tensor linear_model(IDictionary<object, object> features, IDictionary<string, object> feature_columns, int units, string sparse_combiner, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, IEnumerable<object>> cols_to_vars)

Returns a linear prediction `Tensor` based on given `feature_columns`.

This function generates a weighted sum based on output dimension `units`. Weighted sum refers to logits in classification problems. It refers to the prediction itself for linear regression problems.

Note on supported columns: `linear_model` treats categorical columns as `indicator_column`s. Specifically, assuming the input `SparseTensor` looks like the one in the example below, `linear_model` implicitly assigns weights for the presence of "a", "b", and "c", just like `indicator_column` would, whereas `input_layer` requires each categorical column to be explicitly wrapped in an `embedding_column` or an `indicator_column`.

Example of usage: the `sparse_combiner` argument works as follows. For example, given two features represented as categorical columns, with `sparse_combiner` set to "mean" the linear model outputs are:

```
y_0 = 1.0 / 2.0 * (w_a + w_b) + w_d + b
y_1 = w_c + 1.0 / 3.0 * (w_e + 2.0 * w_f) + b
```

where `y_i` is the output, `b` is the bias, and `w_x` is the weight assigned to the presence of `x` in the input features.
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example `numeric_column('price')` will look at 'price' key in this dict. Values are `Tensor` or `SparseTensor` depending on corresponding `_FeatureColumn`.
IDictionary<string, object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_FeatureColumn`s.
int units
An integer, dimensionality of the output space. Default value is 1.
string sparse_combiner
A string specifying how to reduce a categorical column when it is multivalent. Except for `numeric_column`, almost all columns passed to `linear_model` are treated as categorical columns. Each categorical column is combined independently. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default for the linear model. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns.
* "sum": do not normalize features in the column
* "mean": do L1 normalization on features in the column
* "sqrtn": do L2 normalization on features in the column
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections tf.GraphKeys.GLOBAL_VARIABLES and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, IEnumerable<object>> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_NumericColumn(key='numeric_feature1', shape=(1,)): [], 'bias': [], _NumericColumn(key='numeric_feature2', shape=(2,)): []}. If a column creates no variables, its value will be an empty list. Note that cols_to_vars will also contain a string key 'bias' that maps to a list of Variables.
Returns
Tensor
A `Tensor` which represents predictions/logits of a linear model. Its shape is (batch_size, units) and its dtype is `float32`.
Show Example
shape = [2, 2]
            {
                [0, 0]: "a"
                [1, 0]: "b"
                [1, 1]: "c"
            } 

Tensor linear_model(IDictionary<object, object> features, IEnumerable<object> feature_columns, int units, string sparse_combiner, IEnumerable<string> weight_collections, bool trainable, IDictionary<object, IEnumerable<object>> cols_to_vars)

Returns a linear prediction `Tensor` based on given `feature_columns`.

This function generates a weighted sum based on output dimension `units`. Weighted sum refers to logits in classification problems. It refers to the prediction itself for linear regression problems.

Note on supported columns: `linear_model` treats categorical columns as `indicator_column`s. Specifically, assuming the input `SparseTensor` looks like the one in the example below, `linear_model` implicitly assigns weights for the presence of "a", "b", and "c", just like `indicator_column` would, whereas `input_layer` requires each categorical column to be explicitly wrapped in an `embedding_column` or an `indicator_column`.

Example of usage: the `sparse_combiner` argument works as follows. For example, given two features represented as categorical columns, with `sparse_combiner` set to "mean" the linear model outputs are:

```
y_0 = 1.0 / 2.0 * (w_a + w_b) + w_d + b
y_1 = w_c + 1.0 / 3.0 * (w_e + 2.0 * w_f) + b
```

where `y_i` is the output, `b` is the bias, and `w_x` is the weight assigned to the presence of `x` in the input features.
Parameters
IDictionary<object, object> features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example `numeric_column('price')` will look at 'price' key in this dict. Values are `Tensor` or `SparseTensor` depending on corresponding `_FeatureColumn`.
IEnumerable<object> feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_FeatureColumn`s.
int units
An integer, dimensionality of the output space. Default value is 1.
string sparse_combiner
A string specifying how to reduce a categorical column when it is multivalent. Except for `numeric_column`, almost all columns passed to `linear_model` are treated as categorical columns. Each categorical column is combined independently. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default for the linear model. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns.
* "sum": do not normalize features in the column
* "mean": do L1 normalization on features in the column
* "sqrtn": do L2 normalization on features in the column
IEnumerable<string> weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections tf.GraphKeys.GLOBAL_VARIABLES and `ops.GraphKeys.MODEL_VARIABLES`.
bool trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
IDictionary<object, IEnumerable<object>> cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_NumericColumn(key='numeric_feature1', shape=(1,)): [], 'bias': [], _NumericColumn(key='numeric_feature2', shape=(2,)): []}. If a column creates no variables, its value will be an empty list. Note that cols_to_vars will also contain a string key 'bias' that maps to a list of Variables.
Returns
Tensor
A `Tensor` which represents predictions/logits of a linear model. Its shape is (batch_size, units) and its dtype is `float32`.
Show Example
shape = [2, 2]
            {
                [0, 0]: "a"
                [1, 0]: "b"
                [1, 1]: "c"
            } 

object linear_model_dyn(object features, object feature_columns, ImplicitContainer<T> units, ImplicitContainer<T> sparse_combiner, object weight_collections, ImplicitContainer<T> trainable, object cols_to_vars)

Returns a linear prediction `Tensor` based on given `feature_columns`.

This function generates a weighted sum based on output dimension `units`. Weighted sum refers to logits in classification problems. It refers to the prediction itself for linear regression problems.

Note on supported columns: `linear_model` treats categorical columns as `indicator_column`s. Specifically, assuming the input `SparseTensor` looks like the one in the example below, `linear_model` implicitly assigns weights for the presence of "a", "b", and "c", just like `indicator_column` would, whereas `input_layer` requires each categorical column to be explicitly wrapped in an `embedding_column` or an `indicator_column`.

Example of usage: the `sparse_combiner` argument works as follows. For example, given two features represented as categorical columns, with `sparse_combiner` set to "mean" the linear model outputs are:

```
y_0 = 1.0 / 2.0 * (w_a + w_b) + w_d + b
y_1 = w_c + 1.0 / 3.0 * (w_e + 2.0 * w_f) + b
```

where `y_i` is the output, `b` is the bias, and `w_x` is the weight assigned to the presence of `x` in the input features.
Parameters
object features
A mapping from key to tensors. `_FeatureColumn`s look up via these keys. For example `numeric_column('price')` will look at 'price' key in this dict. Values are `Tensor` or `SparseTensor` depending on corresponding `_FeatureColumn`.
object feature_columns
An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from `_FeatureColumn`s.
ImplicitContainer<T> units
An integer, dimensionality of the output space. Default value is 1.
ImplicitContainer<T> sparse_combiner
A string specifying how to reduce a categorical column when it is multivalent. Except for `numeric_column`, almost all columns passed to `linear_model` are treated as categorical columns. Each categorical column is combined independently. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default for the linear model. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns.
* "sum": do not normalize features in the column
* "mean": do L1 normalization on features in the column
* "sqrtn": do L2 normalization on features in the column
object weight_collections
A list of collection names to which the Variable will be added. Note that variables will also be added to the collections tf.GraphKeys.GLOBAL_VARIABLES and `ops.GraphKeys.MODEL_VARIABLES`.
ImplicitContainer<T> trainable
If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
object cols_to_vars
If not `None`, must be a dictionary that will be filled with a mapping from `_FeatureColumn` to the associated list of `Variable`s. For example, after the call we might have cols_to_vars = {_NumericColumn(key='numeric_feature1', shape=(1,)): [], 'bias': [], _NumericColumn(key='numeric_feature2', shape=(2,)): []}. If a column creates no variables, its value will be an empty list. Note that cols_to_vars will also contain a string key 'bias' that maps to a list of Variables.
Returns
object
A `Tensor` which represents predictions/logits of a linear model. Its shape is (batch_size, units) and its dtype is `float32`.
Show Example
shape = [2, 2]
            {
                [0, 0]: "a"
                [1, 0]: "b"
                [1, 1]: "c"
            } 

NumericColumn numeric_column(IEnumerable<string> key, ImplicitContainer<T> shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
double default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
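The `normalizer_fn` contract described above (it runs after `default_value` substitution) can be sketched without TensorFlow; `parse_column` here is a hypothetical stand-in for the parsing step, not part of the API:

```python
# Hypothetical stand-in for tf.Example parsing of one numeric column:
# missing entries get default_value first, then normalizer_fn runs.
def parse_column(raw_values, default_value, normalizer_fn=None):
    filled = [default_value if v is None else v for v in raw_values]
    if normalizer_fn is not None:
        filled = [normalizer_fn(v) for v in filled]
    return filled

# Same normalizer shape as the doc's example: lambda x: (x - 3.0) / 4.2
normalized = parse_column([7.2, None, 3.0], default_value=3.0,
                          normalizer_fn=lambda x: (x - 3.0) / 4.2)
```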

NumericColumn numeric_column(_DenseColumn key, int shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(IEnumerable<string> key, int shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
ndarray default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
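The single-value versus iterable rule for `default_value` described above can be sketched as follows; `resolve_default` is a hypothetical helper for illustration, not the library's API:

```python
# Hypothetical helper illustrating the default_value rule:
# a scalar is broadcast to every cell of `shape`, while an iterable
# must match the flattened size of `shape` exactly.
def resolve_default(default_value, shape):
    size = 1
    for dim in shape:
        size *= dim
    if isinstance(default_value, (int, float)):
        return [float(default_value)] * size   # same value for every item
    values = [float(v) for v in default_value]
    if len(values) != size:
        raise ValueError("default_value shape must equal the given shape")
    return values

scalar_filled = resolve_default(0.0, shape=(2,))          # broadcast scalar
iterable_filled = resolve_default([1.0, 2.0], shape=(2,)) # must match shape
```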

NumericColumn numeric_column(IEnumerable<string> key, int shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(IEnumerable<string> key, TensorShape shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
ndarray default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(IEnumerable<string> key, int shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
double default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(IEnumerable<string> key, ImplicitContainer<T> shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(IEnumerable<string> key, TensorShape shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a single-dimension `Tensor` of the given width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. Normalizer function takes the input `Tensor` as its argument, and returns the output `Tensor`. (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of Tensorflow transformations.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
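The rule for `default_value` (a scalar is broadcast to every element, while an iterable must match `shape` exactly) can be illustrated with a small hypothetical helper; `resolve_default` is not part of the library:

```python
import math

# Hypothetical helper illustrating the default_value rule: a single scalar is
# broadcast across the whole shape, while an iterable must match it exactly.
def resolve_default(default_value, shape):
    n = math.prod(shape)
    if isinstance(default_value, (int, float)):
        return [float(default_value)] * n
    flat = [float(v) for v in default_value]
    if len(flat) != n:
        raise ValueError("default_value shape must equal the given shape")
    return flat

print(resolve_default(0.0, (2, 2)))      # scalar broadcast to 4 elements
print(resolve_default([1, 2, 3], (3,)))  # iterable matching shape (3,)
```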

NumericColumn numeric_column(_DenseColumn key, TensorShape shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(IEnumerable<string> key, ImplicitContainer<T> shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
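The `bucketized_column` used in the example assigns each value to a half-open bucket: the left boundary is included and the right boundary excluded. The assignment can be sketched with the standard-library `bisect` module (illustrative only, not the library's kernel):

```python
import bisect

# With boundaries=[0, 10, 100] the buckets are
# (-inf, 0), [0, 10), [10, 100), [100, +inf) -> bucket indices 0..3.
def bucketize(value, boundaries):
    # bisect_right places a value equal to a boundary into the bucket
    # to its right, matching "include left boundary, exclude right".
    return bisect.bisect_right(boundaries, value)

boundaries = [0, 10, 100]
print([bucketize(v, boundaries) for v in (-5, 0, 10, 99, 100, 10000)])
# → [0, 1, 2, 2, 3, 3]
```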

NumericColumn numeric_column(object key, TensorShape shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(_DenseColumn key, TensorShape shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, TensorShape shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, TensorShape shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, ImplicitContainer<T> shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, ImplicitContainer<T> shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, ImplicitContainer<T> shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, int shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, int shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(object key, int shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(string key, TensorShape shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(string key, TensorShape shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given, meaning a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape `[batch_size] + shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A `default_value` of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, it is used as the default for every item. If an iterable of values is provided, the shape of `default_value` must equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor's value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, this function can apply any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
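As a plain-Python sketch (not the LostTech.TensorFlow API), the documented interplay of `default_value` and `normalizer_fn` can be illustrated like this: missing entries take the default first, then the normalizer runs over the completed values. The helper `parse_numeric` below is hypothetical and only mimics the described behavior.

```python
# Hypothetical helper mimicking the documented semantics: substitute
# `default_value` for missing entries, then apply `normalizer_fn`.
def parse_numeric(values, default_value, normalizer_fn=None):
    # Substitute the default for missing (None) entries.
    filled = [default_value if v is None else v for v in values]
    # The normalizer, if any, runs only after defaults are in place.
    return [normalizer_fn(v) for v in filled] if normalizer_fn else filled

raw = [7.2, None, 11.4]
out = parse_numeric(raw, default_value=3.0,
                    normalizer_fn=lambda x: (x - 3.0) / 4.2)
# The missing entry becomes 3.0 and then normalizes to 0.0.
```

Note that because the default is applied before normalization, the normalized default is `(3.0 - 3.0) / 4.2 = 0.0`, not `3.0`.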

NumericColumn numeric_column(string key, TensorShape shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(string key, ImplicitContainer<T> shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(_DenseColumn key, TensorShape shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(string key, ImplicitContainer<T> shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(string key, int shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
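The `shape` semantics described above (an integer means a one-dimensional column of that width, and the feature tensor's full shape is [batch_size] + `shape`) can be sketched in plain Python; `column_shape` is an illustrative helper, not part of the API.

```python
# Illustrative sketch of the `shape` argument semantics: a bare int is
# treated as a 1-D column of that width; the full feature tensor shape
# is the batch dimension prepended to `shape`.
def column_shape(shape, batch_size):
    dims = (shape,) if isinstance(shape, int) else tuple(shape)
    return (batch_size,) + dims

column_shape(3, 32)       # -> (32, 3)
column_shape((2, 2), 32)  # -> (32, 2, 2)
```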

NumericColumn numeric_column(string key, ImplicitContainer<T> shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(_DenseColumn key, ImplicitContainer<T> shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(_DenseColumn key, ImplicitContainer<T> shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(_DenseColumn key, ImplicitContainer<T> shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(string key, int shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(_DenseColumn key, int shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(_DenseColumn key, int shape, ndarray default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
_DenseColumn key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
ndarray default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(IEnumerable<string> key, TensorShape shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
IEnumerable<string> key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
TensorShape shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
double default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

NumericColumn numeric_column(string key, int shape, IEnumerable<object> default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
string key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
int shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may be given instead, denoting a one-dimensional `Tensor` of that width. The `Tensor` representing the column will have the shape [batch_size] + `shape`.
IEnumerable<object> default_value
A single value compatible with `dtype`, or an iterable of values compatible with `dtype`, which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause `tf.io.parse_example` to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every item. If an iterable of values is provided, the shape of `default_value` should equal the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. The default is `tf.float32`. Must be a non-quantized, real integer or floating-point type.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case is normalization, the function can be used for any kind of TensorFlow transformation.
Returns
NumericColumn
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

object numeric_column_dyn(object key, ImplicitContainer<T> shape, object default_value, ImplicitContainer<T> dtype, object normalizer_fn)

Represents real-valued or numerical features.

Example:
Parameters
object key
A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns.
ImplicitContainer<T> shape
An iterable of integers specifying the shape of the `Tensor`. A single integer may also be given, meaning a one-dimensional `Tensor` with that width. The `Tensor` representing the column will have shape `[batch_size] + shape`.
object default_value
A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`.
ImplicitContainer<T> dtype
Defines the type of values. Defaults to tf.float32. Must be a non-quantized, real integer or floating-point type.
object normalizer_fn
If not `None`, a function used to normalize the tensor value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
object
A `NumericColumn`.
Show Example
price = numeric_column('price')
            columns = [price,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            dense_tensor = input_layer(features, columns) 

# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price,...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)

SequenceNumericColumn sequence_numeric_column(string key, int shape, double default_value, ImplicitContainer<T> dtype, string normalizer_fn)

Returns a feature column that represents sequences of numeric data.

Example:
Parameters
string key
A unique string identifying the input features.
int shape
The shape of the input data per sequence id. E.g. if `shape=(2,)`, each example must contain `2 * sequence_length` values.
double default_value
A single value compatible with `dtype` that is used for padding the sparse data into a dense `Tensor`.
ImplicitContainer<T> dtype
The type of values.
string normalizer_fn
If not `None`, a function used to normalize the tensor value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
SequenceNumericColumn
A `SequenceNumericColumn`.
Show Example
temperature = sequence_numeric_column('temperature')
            columns = [temperature] 

features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)

rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
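The relationship between `shape`, sequence length, and `default_value` padding described above can be sketched in plain Python. This is a conceptual illustration only; the function name `densify_sequence` is hypothetical and not part of the API.

```python
# Conceptual sketch: with shape=(2,), a sequence of length L is stored as
# 2*L flat values; when densified into a batch, shorter sequences are
# padded out to the batch's max length with default_value.

def densify_sequence(flat_values, shape, max_len, default_value=0.0):
    step = 1
    for dim in shape:          # values consumed per sequence step
        step *= dim
    seq_len = len(flat_values) // step
    rows = [flat_values[i * step:(i + 1) * step] for i in range(seq_len)]
    rows += [[default_value] * step for _ in range(max_len - seq_len)]
    return rows, seq_len

rows, length = densify_sequence([1., 2., 3., 4.], shape=(2,), max_len=3)
# rows -> [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]], length -> 2
```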

SequenceNumericColumn sequence_numeric_column(string key, int shape, int default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Returns a feature column that represents sequences of numeric data.

Example:
Parameters
string key
A unique string identifying the input features.
int shape
The shape of the input data per sequence id. E.g. if `shape=(2,)`, each example must contain `2 * sequence_length` values.
int default_value
A single value compatible with `dtype` that is used for padding the sparse data into a dense `Tensor`.
ImplicitContainer<T> dtype
The type of values.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
SequenceNumericColumn
A `SequenceNumericColumn`.
Show Example
temperature = sequence_numeric_column('temperature')
            columns = [temperature] 

features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)

rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)

SequenceNumericColumn sequence_numeric_column(string key, ImplicitContainer<T> shape, int default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Returns a feature column that represents sequences of numeric data.

Example:
Parameters
string key
A unique string identifying the input features.
ImplicitContainer<T> shape
The shape of the input data per sequence id. E.g. if `shape=(2,)`, each example must contain `2 * sequence_length` values.
int default_value
A single value compatible with `dtype` that is used for padding the sparse data into a dense `Tensor`.
ImplicitContainer<T> dtype
The type of values.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
SequenceNumericColumn
A `SequenceNumericColumn`.
Show Example
temperature = sequence_numeric_column('temperature')
            columns = [temperature] 

features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)

rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)

SequenceNumericColumn sequence_numeric_column(string key, int shape, int default_value, ImplicitContainer<T> dtype, string normalizer_fn)

Returns a feature column that represents sequences of numeric data.

Example:
Parameters
string key
A unique string identifying the input features.
int shape
The shape of the input data per sequence id. E.g. if `shape=(2,)`, each example must contain `2 * sequence_length` values.
int default_value
A single value compatible with `dtype` that is used for padding the sparse data into a dense `Tensor`.
ImplicitContainer<T> dtype
The type of values.
string normalizer_fn
If not `None`, a function used to normalize the tensor value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
SequenceNumericColumn
A `SequenceNumericColumn`.
Show Example
temperature = sequence_numeric_column('temperature')
            columns = [temperature] 

features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)

rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)

SequenceNumericColumn sequence_numeric_column(string key, int shape, double default_value, ImplicitContainer<T> dtype, PythonFunctionContainer normalizer_fn)

Returns a feature column that represents sequences of numeric data.

Example:
Parameters
string key
A unique string identifying the input features.
int shape
The shape of the input data per sequence id. E.g. if `shape=(2,)`, each example must contain `2 * sequence_length` values.
double default_value
A single value compatible with `dtype` that is used for padding the sparse data into a dense `Tensor`.
ImplicitContainer<T> dtype
The type of values.
PythonFunctionContainer normalizer_fn
If not `None`, a function used to normalize the tensor value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
SequenceNumericColumn
A `SequenceNumericColumn`.
Show Example
temperature = sequence_numeric_column('temperature')
            columns = [temperature] 

features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)

rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)

SequenceNumericColumn sequence_numeric_column(string key, ImplicitContainer<T> shape, int default_value, ImplicitContainer<T> dtype, string normalizer_fn)

Returns a feature column that represents sequences of numeric data.

Example:
Parameters
string key
A unique string identifying the input features.
ImplicitContainer<T> shape
The shape of the input data per sequence id. E.g. if `shape=(2,)`, each example must contain `2 * sequence_length` values.
int default_value
A single value compatible with `dtype` that is used for padding the sparse data into a dense `Tensor`.
ImplicitContainer<T> dtype
The type of values.
string normalizer_fn
If not `None`, a function used to normalize the tensor value after `default_value` is applied during parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. `lambda x: (x - 3.0) / 4.2`). Note that although the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns
SequenceNumericColumn
A `SequenceNumericColumn`.
Show Example
temperature = sequence_numeric_column('temperature')
            columns = [temperature] 

features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)

rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)

IList<_SharedEmbeddingColumn> shared_embedding_columns(IEnumerable<_IdentityCategoricalColumn> categorical_columns, int dimension, string combiner, PythonFunctionContainer initializer, string shared_embedding_collection_name, string ckpt_to_load_from, string tensor_name_in_ckpt, Nullable<double> max_norm, bool trainable)

List of dense columns that convert from sparse, categorical input.

This is similar to `embedding_column`, except that it produces a list of embedding columns that share the same embedding weights.

Use this when your inputs are sparse and of the same type (e.g. watched and impression video IDs that share the same vocabulary), and you want to convert them to a dense representation (e.g., to feed to a DNN).

Inputs must be a list of categorical columns created by any of the `categorical_column_*` functions. They must all be of the same type and have the same arguments except `key`. E.g. they could all be `categorical_column_with_vocabulary_file` with the same `vocabulary_file`. Some or all columns could also be `weighted_categorical_column`.

Here is an example embedding of two features for a DNNClassifier model: Here is an example using `shared_embedding_columns` with model_fn:
Parameters
IEnumerable<_IdentityCategoricalColumn> categorical_columns
List of categorical columns created by a `categorical_column_with_*` function. These columns produce the sparse IDs that are inputs to the embedding lookup. All columns must be of the same type and have the same arguments except `key`. E.g. they can be categorical_column_with_vocabulary_file with the same vocabulary_file. Some or all columns could also be weighted_categorical_column.
int dimension
An integer specifying the dimension of the embedding; must be > 0.
string combiner
A string specifying how to reduce when there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see `tf.embedding_lookup_sparse`.
PythonFunctionContainer initializer
A variable initializer function to be used in embedding variable initialization. If not specified, defaults to `truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dimension)`.
string shared_embedding_collection_name
Optional name of the collection where shared embedding weights are added. If not given, a reasonable name will be chosen based on the names of `categorical_columns`. This is also used in `variable_scope` when creating shared embedding weights.
string ckpt_to_load_from
String representing checkpoint name/pattern from which to restore column weights. Required if `tensor_name_in_ckpt` is not `None`.
string tensor_name_in_ckpt
Name of the `Tensor` in `ckpt_to_load_from` from which to restore the column weights. Required if `ckpt_to_load_from` is not `None`.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
bool trainable
Whether or not the embedding is trainable. Default is True.
Returns
IList<_SharedEmbeddingColumn>
A list of dense columns that convert from sparse input. The order of results follows the ordering of `categorical_columns`.
Show Example
watched_video_id = categorical_column_with_vocabulary_file(
                'watched_video_id', video_vocabulary_file, video_vocabulary_size)
            impression_video_id = categorical_column_with_vocabulary_file(
                'impression_video_id', video_vocabulary_file, video_vocabulary_size)
            columns = shared_embedding_columns(
                [watched_video_id, impression_video_id], dimension=10) 

estimator = tf.estimator.DNNClassifier(feature_columns=columns,...)

label_column = ...
def input_fn():
    features = tf.io.parse_example(
        ..., features=make_parse_example_spec(columns + [label_column]))
    labels = features.pop(label_column.name)
    return features, labels

estimator.train(input_fn=input_fn, steps=100)
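The `combiner` reduction described in the parameters above can be sketched in plain Python: when a row contains multiple sparse IDs, the looked-up embedding vectors are summed and then scaled according to the combiner. This is a conceptual sketch, not the library's implementation, and it assumes unit weights (for weighted columns, 'sqrtn' divides by the square root of the sum of squared weights).

```python
import math

# Conceptual sketch of the 'sum' / 'mean' / 'sqrtn' combiners applied to
# the embedding vectors looked up for one row's sparse IDs.

def combine(vectors, combiner="mean"):
    dim = len(vectors[0])
    total = [sum(v[d] for v in vectors) for d in range(dim)]
    n = len(vectors)  # unit weights assumed
    if combiner == "sum":
        return total
    if combiner == "mean":
        return [t / n for t in total]
    if combiner == "sqrtn":
        return [t / math.sqrt(n) for t in total]
    raise ValueError("unsupported combiner: " + combiner)

combine([[1.0, 2.0], [3.0, 4.0]], "sum")   # -> [4.0, 6.0]
combine([[1.0, 2.0], [3.0, 4.0]], "mean")  # -> [2.0, 3.0]
```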

IList<_SharedEmbeddingColumn> shared_embedding_columns(IEnumerable<_IdentityCategoricalColumn> categorical_columns, int dimension, string combiner, string initializer, string shared_embedding_collection_name, string ckpt_to_load_from, string tensor_name_in_ckpt, Nullable<double> max_norm, bool trainable)

List of dense columns that convert from sparse, categorical input.

This is similar to `embedding_column`, except that it produces a list of embedding columns that share the same embedding weights.

Use this when your inputs are sparse and of the same type (e.g. watched and impression video IDs that share the same vocabulary), and you want to convert them to a dense representation (e.g., to feed to a DNN).

Inputs must be a list of categorical columns created by any of the `categorical_column_*` functions. They must all be of the same type and have the same arguments except `key`. E.g. they could all be `categorical_column_with_vocabulary_file` with the same `vocabulary_file`. Some or all columns could also be `weighted_categorical_column`.

Here is an example embedding of two features for a DNNClassifier model: Here is an example using `shared_embedding_columns` with model_fn:
Parameters
IEnumerable<_IdentityCategoricalColumn> categorical_columns
List of categorical columns created by a `categorical_column_with_*` function. These columns produce the sparse IDs that are inputs to the embedding lookup. All columns must be of the same type and have the same arguments except `key`. E.g. they can be categorical_column_with_vocabulary_file with the same vocabulary_file. Some or all columns could also be weighted_categorical_column.
int dimension
An integer specifying the dimension of the embedding; must be > 0.
string combiner
A string specifying how to reduce when there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see `tf.embedding_lookup_sparse`.
string initializer
A variable initializer function to be used in embedding variable initialization. If not specified, defaults to `truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dimension)`.
string shared_embedding_collection_name
Optional name of the collection where shared embedding weights are added. If not given, a reasonable name will be chosen based on the names of `categorical_columns`. This is also used in `variable_scope` when creating shared embedding weights.
string ckpt_to_load_from
String representing checkpoint name/pattern from which to restore column weights. Required if `tensor_name_in_ckpt` is not `None`.
string tensor_name_in_ckpt
Name of the `Tensor` in `ckpt_to_load_from` from which to restore the column weights. Required if `ckpt_to_load_from` is not `None`.
Nullable<double> max_norm
If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
bool trainable
Whether or not the embedding is trainable. Default is True.
Returns
IList<_SharedEmbeddingColumn>
A list of dense columns that convert from sparse input. The order of results follows the ordering of `categorical_columns`.
Show Example
watched_video_id = categorical_column_with_vocabulary_file(
                'watched_video_id', video_vocabulary_file, video_vocabulary_size)
            impression_video_id = categorical_column_with_vocabulary_file(
                'impression_video_id', video_vocabulary_file, video_vocabulary_size)
            columns = shared_embedding_columns(
                [watched_video_id, impression_video_id], dimension=10) 

estimator = tf.estimator.DNNClassifier(feature_columns=columns,...)

label_column = ...
def input_fn():
    features = tf.io.parse_example(
        ..., features=make_parse_example_spec(columns + [label_column]))
    labels = features.pop(label_column.name)
    return features, labels

estimator.train(input_fn=input_fn, steps=100)

IList<object> shared_embeddings(IEnumerable<IdentityCategoricalColumn> categorical_columns, int dimension, string combiner, PythonFunctionContainer initializer, string shared_embedding_collection_name, string ckpt_to_load_from, string tensor_name_in_ckpt, Nullable<double> max_norm, bool trainable)

IList<object> shared_embeddings(IEnumerable<IdentityCategoricalColumn> categorical_columns, int dimension, string combiner, string initializer, string shared_embedding_collection_name, string ckpt_to_load_from, string tensor_name_in_ckpt, Nullable<double> max_norm, bool trainable)

object shared_embeddings_dyn(object categorical_columns, object dimension, ImplicitContainer<T> combiner, object initializer, object shared_embedding_collection_name, object ckpt_to_load_from, object tensor_name_in_ckpt, object max_norm, ImplicitContainer<T> trainable)

WeightedCategoricalColumn weighted_categorical_column(_CategoricalColumn categorical_column, string weight_feature_key, ImplicitContainer<T> dtype)

Applies weight values to a `CategoricalColumn`.

Use this when each of your sparse inputs has both an ID and a value. For example, if you're representing text documents as a collection of word frequencies, you can provide 2 parallel sparse input features ('terms' and 'frequencies' below).

Example:

Input `tf.Example` objects:

```proto
[
  features {
    feature { key: "terms" value { bytes_list { value: "very" value: "model" } } }
    feature { key: "frequencies" value { float_list { value: 0.3 value: 0.1 } } }
  },
  features {
    feature { key: "terms" value { bytes_list { value: "when" value: "course" value: "human" } } }
    feature { key: "frequencies" value { float_list { value: 0.4 value: 0.1 value: 0.2 } } }
  }
]
```

This assumes the input dictionary contains a `SparseTensor` for key 'terms', and a `SparseTensor` for key 'frequencies'. These 2 tensors must have the same indices and dense shape.
Parameters
_CategoricalColumn categorical_column
A `CategoricalColumn` created by `categorical_column_with_*` functions.
string weight_feature_key
String key for weight values.
ImplicitContainer<T> dtype
Type of weights, such as tf.float32. Only float and integer weights are supported.
Returns
WeightedCategoricalColumn
A `CategoricalColumn` composed of two sparse features: one represents id, the other represents weight (value) of the id feature in that example.
Show Example
categorical_column = categorical_column_with_hash_bucket(
                column_name='terms', hash_bucket_size=1000)
            weighted_column = weighted_categorical_column(
                categorical_column=categorical_column, weight_feature_key='frequencies')
            columns = [weighted_column,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 
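The pairing of the two parallel sparse features can be sketched in plain Python: each ID from the categorical column is matched positionally with a weight from the weight feature (which is why the two tensors must share indices), and that weight scales the ID's contribution. This is a conceptual illustration of a toy linear score, not the library's implementation; `weighted_score` and `term_weights` (per-category model coefficients) are hypothetical.

```python
# Conceptual sketch of weighted_categorical_column: 'terms' and
# 'frequencies' are parallel sparse features; each frequency scales its
# term's contribution to a toy linear prediction.

def weighted_score(terms, frequencies, term_weights):
    assert len(terms) == len(frequencies), "parallel features must align"
    return sum(frequencies[i] * term_weights.get(t, 0.0)
               for i, t in enumerate(terms))

score = weighted_score(["very", "model"], [0.3, 0.1],
                       term_weights={"very": 1.0, "model": 2.0})
# score is 0.3*1.0 + 0.1*2.0, i.e. approximately 0.5
```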

object weighted_categorical_column_dyn(object categorical_column, object weight_feature_key, ImplicitContainer<T> dtype)

Applies weight values to a `CategoricalColumn`.

Use this when each of your sparse inputs has both an ID and a value. For example, if you're representing text documents as a collection of word frequencies, you can provide 2 parallel sparse input features ('terms' and 'frequencies' below).

Example:

Input `tf.Example` objects:

```proto
[
  features {
    feature { key: "terms" value { bytes_list { value: "very" value: "model" } } }
    feature { key: "frequencies" value { float_list { value: 0.3 value: 0.1 } } }
  },
  features {
    feature { key: "terms" value { bytes_list { value: "when" value: "course" value: "human" } } }
    feature { key: "frequencies" value { float_list { value: 0.4 value: 0.1 value: 0.2 } } }
  }
]
```

This assumes the input dictionary contains a `SparseTensor` for key 'terms', and a `SparseTensor` for key 'frequencies'. These 2 tensors must have the same indices and dense shape.
Parameters
object categorical_column
A `CategoricalColumn` created by `categorical_column_with_*` functions.
object weight_feature_key
String key for weight values.
ImplicitContainer<T> dtype
Type of weights, such as tf.float32. Only float and integer weights are supported.
Returns
object
A `CategoricalColumn` composed of two sparse features: one represents id, the other represents weight (value) of the id feature in that example.
Show Example
categorical_column = categorical_column_with_hash_bucket(
                column_name='terms', hash_bucket_size=1000)
            weighted_column = weighted_categorical_column(
                categorical_column=categorical_column, weight_feature_key='frequencies')
            columns = [weighted_column,...]
            features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
            linear_prediction, _, _ = linear_model(features, columns) 

Public properties

PythonFunctionContainer bucketized_column_fn get;

PythonFunctionContainer categorical_column_with_hash_bucket_fn get;

PythonFunctionContainer categorical_column_with_identity_fn get;

PythonFunctionContainer categorical_column_with_vocabulary_file_fn get;

PythonFunctionContainer categorical_column_with_vocabulary_list_fn get;

PythonFunctionContainer crossed_column_fn get;

PythonFunctionContainer embedding_column_fn get;

PythonFunctionContainer indicator_column_fn get;

PythonFunctionContainer input_layer_fn get;

PythonFunctionContainer linear_model_fn get;

PythonFunctionContainer make_parse_example_spec_fn get;

PythonFunctionContainer numeric_column_fn get;

PythonFunctionContainer sequence_categorical_column_with_hash_bucket_fn get;

PythonFunctionContainer sequence_categorical_column_with_identity_fn get;

PythonFunctionContainer sequence_categorical_column_with_vocabulary_file_fn get;

PythonFunctionContainer sequence_categorical_column_with_vocabulary_list_fn get;

PythonFunctionContainer sequence_numeric_column_fn get;

PythonFunctionContainer shared_embedding_columns_fn get;

PythonFunctionContainer shared_embeddings_fn get;

PythonFunctionContainer weighted_categorical_column_fn get;