Type tf.layers
Namespace tensorflow
Methods
- average_pooling1d
- average_pooling1d_dyn
- average_pooling2d
- average_pooling2d_dyn
- average_pooling3d
- average_pooling3d_dyn
- batch_normalization
- batch_normalization
- batch_normalization
- batch_normalization
- batch_normalization
- batch_normalization
- batch_normalization
- batch_normalization
- batch_normalization_dyn
- conv1d
- conv1d
- conv1d_dyn
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d
- conv2d_dyn
- conv2d_transpose
- conv2d_transpose
- conv2d_transpose
- conv2d_transpose
- conv2d_transpose_dyn
- conv3d
- conv3d
- conv3d_dyn
- conv3d_transpose
- conv3d_transpose
- conv3d_transpose
- conv3d_transpose
- conv3d_transpose_dyn
- dense
- dense_dyn
- dropout
- dropout
- dropout
- dropout
- dropout_dyn
- flatten_dyn
- max_pooling1d
- max_pooling1d_dyn
- max_pooling2d
- max_pooling2d
- max_pooling2d
- max_pooling2d
- max_pooling2d
- max_pooling2d
- max_pooling2d
- max_pooling2d
- max_pooling2d_dyn
- max_pooling3d
- max_pooling3d
- max_pooling3d
- max_pooling3d
- max_pooling3d_dyn
- separable_conv1d
- separable_conv1d
- separable_conv1d
- separable_conv1d
- separable_conv1d_dyn
- separable_conv2d
- separable_conv2d
- separable_conv2d
- separable_conv2d
- separable_conv2d_dyn
Properties
Public static methods
object average_pooling1d(object inputs, object pool_size, object strides, string padding, string data_format, string name)
Average Pooling layer for 1D inputs. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.AveragePooling1D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 3.
-
object
pool_size - An integer or tuple/list of a single integer, representing the size of the pooling window.
-
object
strides - An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- The output tensor, of rank 3.
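A minimal usage sketch (not part of the original documentation), written against the underlying tf.compat.v1 Python API that these bindings mirror; the input tensor `x` and its (batch, length, channels) shape are illustrative assumptions.
Show Example
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# Rank-3 input: (batch, length, channels)
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 32, 8))
# A window of 2 with stride 2 halves the length dimension.
y = tf.compat.v1.layers.average_pooling1d(x, pool_size=2, strides=2, padding='valid')
# y has static shape (None, 16, 8)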
object average_pooling1d_dyn(object inputs, object pool_size, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name)
Average Pooling layer for 1D inputs. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.AveragePooling1D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 3.
-
object
pool_size - An integer or tuple/list of a single integer, representing the size of the pooling window.
-
object
strides - An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
-
ImplicitContainer<T>
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
object
name - A string, the name of the layer.
Returns
-
object
- The output tensor, of rank 3.
object average_pooling2d(object inputs, object pool_size, object strides, string padding, string data_format, string name)
Average pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.AveragePooling2D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 4.
-
object
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
object
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
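The same pattern for the 2D case, again sketched against the underlying tf.compat.v1 Python API (the image shape below is an illustrative assumption).
Show Example
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# Rank-4 input: (batch, height, width, channels)
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 28, 28, 3))
# 2x2 average pooling with stride 2 halves both spatial dimensions.
y = tf.compat.v1.layers.average_pooling2d(x, pool_size=(2, 2), strides=(2, 2), padding='same')
# y has static shape (None, 14, 14, 3)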
object average_pooling2d_dyn(object inputs, object pool_size, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name)
Average pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.AveragePooling2D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 4.
-
object
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
object
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
ImplicitContainer<T>
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
object
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object average_pooling3d(object inputs, object pool_size, object strides, string padding, string data_format, string name)
Average pooling layer for 3D inputs (e.g. volumes). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.AveragePooling3D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 5.
-
object
pool_size - An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
object
strides - An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
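For volumetric inputs, a hedged sketch against the underlying tf.compat.v1 Python API (the volume shape is an illustrative assumption).
Show Example
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# Rank-5 input: (batch, depth, height, width, channels)
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 16, 32, 32, 1))
# A 2x2x2 window with stride 2 halves depth, height and width.
y = tf.compat.v1.layers.average_pooling3d(x, pool_size=(2, 2, 2), strides=(2, 2, 2), padding='valid')
# y has static shape (None, 8, 16, 16, 1)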
object average_pooling3d_dyn(object inputs, object pool_size, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name)
Average pooling layer for 3D inputs (e.g. volumes). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.AveragePooling3D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 5.
-
object
pool_size - An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
object
strides - An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
ImplicitContainer<T>
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
object
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object batch_normalization(IEnumerable<IGraphNodeBase> inputs, int axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, IGraphNodeBase training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
axis - An `int`, the axis that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
IGraphNodeBase
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
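Because the deprecation notice recommends keras.layers.BatchNormalization, here is a hedged migration sketch (the toy model and layer sizes are illustrative assumptions, not part of the original example); the Keras layer manages its own update ops, so no manual tf.GraphKeys.UPDATE_OPS handling is needed.
Show Example
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(16,)),
    tf.keras.layers.BatchNormalization(momentum=0.99, epsilon=1e-3),
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
# model.fit(...) normalizes with batch statistics and updates the moving
# mean/variance; model.predict(...) uses the moving statistics.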
object batch_normalization(IEnumerable<IGraphNodeBase> inputs, int axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, bool training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
axis - An `int`, the axis that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
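Since `training` may also be a boolean scalar tensor, a hedged sketch of driving it with a placeholder (the feature width and placeholder names are illustrative assumptions).
Show Example
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 16))
# A scalar boolean placeholder lets the same graph serve training and inference.
is_training = tf.compat.v1.placeholder_with_default(False, shape=())
x_norm = tf.compat.v1.layers.batch_normalization(x, training=is_training)
# Feed {is_training: True} during training steps; the default covers inference.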
object batch_normalization(IGraphNodeBase inputs, int axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, IGraphNodeBase training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
axis - An `int`, the axis that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
IGraphNodeBase
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
object batch_normalization(IGraphNodeBase inputs, IEnumerable<int> axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, IGraphNodeBase training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
IEnumerable<int>
axis - An `int` or list of `int`s, the axis or axes that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
IGraphNodeBase
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
object batch_normalization(IGraphNodeBase inputs, int axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, bool training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
axis - An `int`, the axis that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
object batch_normalization(IGraphNodeBase inputs, IEnumerable<int> axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, bool training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
IEnumerable<int>
axis - An `int` or list of `int`s, the axis or axes that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
object batch_normalization(IEnumerable<IGraphNodeBase> inputs, IEnumerable<int> axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, bool training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
IEnumerable<int>
axis - An `int` or list of `int`s, the axis or axes that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
object batch_normalization(IEnumerable<IGraphNodeBase> inputs, IEnumerable<int> axis, double momentum, double epsilon, bool center, bool scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, IGraphNodeBase training, bool trainable, string name, Nullable<bool> reuse, bool renorm, object renorm_clipping, double renorm_momentum, object fused, Nullable<int> virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
IEnumerable<int>
axis - An `int`, the axis that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
double
momentum - Momentum for the moving average.
-
double
epsilon - Small float added to variance to avoid dividing by zero.
-
bool
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
bool
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
IGraphNodeBase
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
bool
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
double
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system-recommended implementation.
-
Nullable<int>
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
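The deprecation notice above points to the Keras layer. As a minimal, illustrative Python sketch (not taken from this reference; `x` and `is_training` are placeholder names), the Keras replacement avoids the manual update_ops bookkeeping:
bn = tf.keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=1e-3)
y = bn(x, training=is_training)  # under TF 2.x eager execution the moving mean/variance are updated automatically when training=True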
object batch_normalization_dyn(object inputs, ImplicitContainer<T> axis, ImplicitContainer<T> momentum, ImplicitContainer<T> epsilon, ImplicitContainer<T> center, ImplicitContainer<T> scale, ImplicitContainer<T> beta_initializer, ImplicitContainer<T> gamma_initializer, ImplicitContainer<T> moving_mean_initializer, ImplicitContainer<T> moving_variance_initializer, object beta_regularizer, object gamma_regularizer, object beta_constraint, object gamma_constraint, ImplicitContainer<T> training, ImplicitContainer<T> trainable, object name, object reuse, ImplicitContainer<T> renorm, object renorm_clipping, ImplicitContainer<T> renorm_momentum, object fused, object virtual_batch_size, object adjustment)
Functional interface for the batch normalization layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.BatchNormalization` documentation).
Reference: http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe and Christian Szegedy.
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be executed alongside the `train_op`. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
Parameters
-
object
inputs - Tensor input.
-
ImplicitContainer<T>
axis - An `int`, the axis that should be normalized (typically the features axis). For instance, after a `Convolution2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
-
ImplicitContainer<T>
momentum - Momentum for the moving average.
-
ImplicitContainer<T>
epsilon - Small float added to variance to avoid dividing by zero.
-
ImplicitContainer<T>
center - If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored.
-
ImplicitContainer<T>
scale - If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.
-
ImplicitContainer<T>
beta_initializer - Initializer for the beta weight.
-
ImplicitContainer<T>
gamma_initializer - Initializer for the gamma weight.
-
ImplicitContainer<T>
moving_mean_initializer - Initializer for the moving mean.
-
ImplicitContainer<T>
moving_variance_initializer - Initializer for the moving variance.
-
object
beta_regularizer - Optional regularizer for the beta weight.
-
object
gamma_regularizer - Optional regularizer for the gamma weight.
-
object
beta_constraint - An optional projection function to be applied to the `beta` weight after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
gamma_constraint - An optional projection function to be applied to the `gamma` weight after being updated by an `Optimizer`.
-
ImplicitContainer<T>
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - String, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
-
ImplicitContainer<T>
renorm - Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
-
object
renorm_clipping - A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar `Tensors` used to clip the renorm correction. The correction `(r, d)` is used as `corrected_value = normalized_value * r + d`, with `r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
-
ImplicitContainer<T>
renorm_momentum - Momentum used to update the moving means and standard deviations with renorm. Unlike `momentum`, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that `momentum` is still applied to get the means and variances for inference.
-
object
fused - If `None` or `True`, use a faster, fused implementation if possible. If `False`, use the system-recommended implementation.
-
object
virtual_batch_size - An `int`. By default, `virtual_batch_size` is `None`, which means batch normalization is performed across the whole batch. When `virtual_batch_size` is not `None`, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
-
object
adjustment - A function taking the `Tensor` containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, `adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If `None`, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns
-
object
- Output tensor.
Show Example
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
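For the `virtual_batch_size` argument described above, a rough sketch of "Ghost Batch Normalization" with this deprecated interface (placeholder names; the actual batch size must be divisible by the virtual one):
y = tf.compat.v1.layers.batch_normalization(
    x, training=is_training, virtual_batch_size=16)  # each virtual sub-batch of 16 examples is normalized separately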
object conv1d(IEnumerable<IGraphNodeBase> inputs, int filters, int kernel_size, int strides, string padding, string data_format, int dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, object reuse)
Functional interface for 1D convolution layer (e.g. temporal convolution). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv1D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
-
int
strides - An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
int
dilation_rate - An integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any `strides` value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
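As a minimal, illustrative sketch of this functional interface (placeholder input of shape (batch, length, channels); the values are arbitrary, not taken from this reference):
# x: float32 tensor of shape (32, 100, 8)
y = tf.compat.v1.layers.conv1d(x, filters=16, kernel_size=3, strides=1,
                               padding='same', activation=tf.nn.relu)
# with padding='same' and stride 1, y has shape (32, 100, 16)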
object conv1d(IGraphNodeBase inputs, int filters, int kernel_size, int strides, string padding, string data_format, int dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, object reuse)
Functional interface for 1D convolution layer (e.g. temporal convolution). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv1D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
-
int
strides - An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
int
dilation_rate - An integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any `strides` value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv1d_dyn(object inputs, object filters, object kernel_size, ImplicitContainer<T> strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilation_rate, object activation, ImplicitContainer<T> use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for 1D convolution layer (e.g. temporal convolution). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv1D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
object
inputs - Tensor input.
-
object
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
object
kernel_size - An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
-
ImplicitContainer<T>
strides - An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
ImplicitContainer<T>
padding - One of `"valid"` or `"same"` (case-insensitive).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
ImplicitContainer<T>
dilation_rate - An integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any `strides` value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, ImplicitContainer<T> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
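A minimal, illustrative sketch of the 2D functional interface (placeholder `channels_last` input; the values are arbitrary, not taken from this reference):
# x: float32 tensor of shape (32, 28, 28, 1)
y = tf.compat.v1.layers.conv2d(x, filters=32, kernel_size=(3, 3), strides=(2, 2),
                               padding='valid', activation=tf.nn.relu)
# with padding='valid' and stride 2, y has shape (32, 13, 13, 32)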
object conv2d(IGraphNodeBase inputs, int filters, int kernel_size, int strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IGraphNodeBase inputs, int filters, int kernel_size, ImplicitContainer<T> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, int strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, ValueTuple<int, object> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IGraphNodeBase inputs, int filters, int kernel_size, ValueTuple<int, object> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, int kernel_size, ImplicitContainer<T> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, int kernel_size, ValueTuple<int, object> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, IEnumerable<int> kernel_size, int strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, IEnumerable<int> kernel_size, ImplicitContainer<T> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
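The sketch below is a minimal illustration of what this call computes, written against the equivalent Python function (`tf.compat.v1.layers.conv2d`) that the binding mirrors; the input shape, filter count, and other hyperparameters are arbitrary example values, not defaults of this overload.
```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# NHWC (`channels_last`) input: batch of 8 RGB images, 32x32 (example values).
inputs = tf.placeholder(tf.float32, shape=[8, 32, 32, 3])

# 2D convolution: 16 filters, 3x3 kernel, stride 1, 'same' padding,
# bias added (use_bias defaults to True) and ReLU applied to the result.
outputs = tf.layers.conv2d(
    inputs,
    filters=16,
    kernel_size=(3, 3),
    strides=(1, 1),
    padding="same",
    activation=tf.nn.relu,
    name="conv1")

print(outputs.shape)  # (8, 32, 32, 16): 'same' padding keeps the spatial size
```
With `padding="same"` and stride 1 the spatial dimensions are preserved, and the last dimension becomes `filters`.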
object conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, IEnumerable<int> kernel_size, ValueTuple<int, object> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, int kernel_size, int strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d_dyn(object inputs, object filters, object kernel_size, ImplicitContainer<T> strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilation_rate, object activation, ImplicitContainer<T> use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for the 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
object
inputs - Tensor input.
-
object
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
object
kernel_size - An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
ImplicitContainer<T>
padding - One of `"valid"` or `"same"` (case-insensitive).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ImplicitContainer<T>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d_transpose(IGraphNodeBase inputs, int filters, int kernel_size, ValueTuple<int, object> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2DTranspose
instead. The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
object
activation - Activation function. Set it to `None` to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If `None`, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
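To make the upsampling behaviour concrete, here is a minimal sketch using the equivalent Python function (`tf.compat.v1.layers.conv2d_transpose`); all shapes and hyperparameters are illustrative values only.
```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Feature map to be mapped back toward an "input-shaped" spatial size.
x = tf.placeholder(tf.float32, shape=[4, 8, 8, 64])

# Transposed 2D convolution: stride 2 doubles height and width with 'same' padding.
y = tf.layers.conv2d_transpose(
    x,
    filters=32,
    kernel_size=(3, 3),
    strides=(2, 2),
    padding="same",
    name="deconv1")

print(y.shape)  # (4, 16, 16, 32)
```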
object conv2d_transpose(IGraphNodeBase inputs, int filters, int kernel_size, IEnumerable<int> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2DTranspose
instead. The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
IEnumerable<int>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
object
activation - Activation function. Set it to `None` to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If `None`, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d_transpose(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, ValueTuple<int, object> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2DTranspose
instead. The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
object
activation - Activation function. Set it to `None` to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If `None`, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d_transpose(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, IEnumerable<int> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2DTranspose
instead. The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
IEnumerable<int>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
object
activation - Activation function. Set it to `None` to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If `None`, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv2d_transpose_dyn(object inputs, object filters, object kernel_size, ImplicitContainer<T> strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object activation, ImplicitContainer<T> use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for transposed 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv2DTranspose
instead. The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
Parameters
-
object
inputs - Input tensor.
-
object
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
object
kernel_size - A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
padding - one of `"valid"` or `"same"` (case-insensitive).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
object
activation - Activation function. Set it to `None` to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If `None`, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv3d(IGraphNodeBase inputs, int filters, int kernel_size, ValueTuple<int, object, object> strides, string padding, string data_format, ValueTuple<int, object, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, object reuse)
Functional interface for the 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object, object>
strides - An integer or tuple/list of 3 integers, specifying the strides of the convolution along the depth, height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
ValueTuple<int, object, object>
dilation_rate - An integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
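A minimal sketch of the corresponding Python call (`tf.compat.v1.layers.conv3d`), assuming an NDHWC (`channels_last`) input; the volume size and filter count are example values only.
```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# NDHWC input: batch of 2 volumes, 16x16x16, single channel (example values).
volumes = tf.placeholder(tf.float32, shape=[2, 16, 16, 16, 1])

# 3D convolution over (depth, height, width): 8 filters, 3x3x3 kernel.
features = tf.layers.conv3d(
    volumes,
    filters=8,
    kernel_size=(3, 3, 3),
    strides=(1, 1, 1),
    padding="valid",
    activation=tf.nn.relu,
    name="conv3d_1")

print(features.shape)  # (2, 14, 14, 14, 8): 'valid' padding trims the borders
```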
object conv3d(IGraphNodeBase inputs, int filters, int kernel_size, int strides, string padding, string data_format, ValueTuple<int, object, object> dilation_rate, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, object reuse)
Functional interface for the 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 3 integers, specifying the strides of the convolution along the depth, height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
ValueTuple<int, object, object>
dilation_rate - An integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv3d_dyn(object inputs, object filters, object kernel_size, ImplicitContainer<T> strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilation_rate, object activation, ImplicitContainer<T> use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for the 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3D
instead. This layer creates a convolution kernel that is convolved
(actually cross-correlated) with the layer input to produce a tensor of
outputs. If `use_bias` is True (and a `bias_initializer` is provided),
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
Parameters
-
object
inputs - Tensor input.
-
object
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
object
kernel_size - An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - An integer or tuple/list of 3 integers, specifying the strides of the convolution along the depth, height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
ImplicitContainer<T>
padding - One of `"valid"` or `"same"` (case-insensitive).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
ImplicitContainer<T>
dilation_rate - An integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv3d_transpose(IGraphNodeBase inputs, int filters, int kernel_size, ValueTuple<int, object, object> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3DTranspose
instead.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - A tuple or list of 3 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object, object>
strides - A tuple or list of 3 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
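As with the 2D case, a minimal Python sketch (`tf.compat.v1.layers.conv3d_transpose`) illustrates how a stride of 2 upsamples each spatial dimension; all shapes here are example values.
```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Low-resolution volumetric features to upsample.
x = tf.placeholder(tf.float32, shape=[2, 4, 4, 4, 32])

# Transposed 3D convolution: stride 2 doubles depth, height and width.
y = tf.layers.conv3d_transpose(
    x,
    filters=16,
    kernel_size=(3, 3, 3),
    strides=(2, 2, 2),
    padding="same",
    name="deconv3d_1")

print(y.shape)  # (2, 8, 8, 8, 16)
```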
object conv3d_transpose(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, ValueTuple<int, object, object> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3DTranspose
instead.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - A tuple or list of 3 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object, object>
strides - A tuple or list of 3 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv3d_transpose(IGraphNodeBase inputs, int filters, int kernel_size, Nullable<int> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3DTranspose
instead.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - A tuple or list of 3 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
Nullable<int>
strides - A tuple or list of 3 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv3d_transpose(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, Nullable<int> strides, string padding, string data_format, object activation, bool use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for transposed 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3DTranspose
instead.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - A tuple or list of 3 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
Nullable<int>
strides - A tuple or list of 3 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - one of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object conv3d_transpose_dyn(object inputs, object filters, object kernel_size, ImplicitContainer<T> strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object activation, ImplicitContainer<T> use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for transposed 3D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use
tf.keras.layers.Conv3DTranspose
instead.
Parameters
-
object
inputs - Input tensor.
-
object
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
object
kernel_size - A tuple or list of 3 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - A tuple or list of 3 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
padding - one of `"valid"` or `"same"` (case-insensitive).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - An initializer for the convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
kernel_regularizer - Optional regularizer for the convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
kernel_constraint - Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object dense(IGraphNodeBase inputs, int units, object activation, bool use_bias, Initializer kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the densely-connected layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead. This layer implements the operation:
`outputs = activation(inputs * kernel + bias)`
where `activation` is the activation function passed as the `activation`
argument (if not `None`), `kernel` is a weights matrix created by the layer,
and `bias` is a bias vector created by the layer
(only if `use_bias` is `True`).
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
int
units - Integer or Long, dimensionality of the output space.
-
object
activation - Activation function (callable). Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
Initializer
kernel_initializer - Initializer function for the weight matrix. If `None` (default), weights are initialized using the default initializer used by `tf.compat.v1.get_variable`.
-
ImplicitContainer<T>
bias_initializer - Initializer function for the bias.
-
object
kernel_regularizer - Regularizer function for the weight matrix.
-
object
bias_regularizer - Regularizer function for the bias.
-
object
activity_regularizer - Regularizer function for the output.
-
object
kernel_constraint - An optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - An optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - String, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor the same shape as `inputs` except the last dimension is of size `units`.
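The operation described above is easiest to see in the Python `tf.compat.v1` API that these bindings mirror. The following is a minimal illustrative sketch, not part of this reference; the input shape, unit count, and layer name are assumptions.
```
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Illustrative placeholder: a batch of examples with 3 features each.
x = tf.placeholder(tf.float32, shape=(None, 3))

# Creates a (3, 4) `kernel` and a (4,) `bias`, then computes
# activation(inputs * kernel + bias) as described above.
y = tf.layers.dense(x, units=4, activation=tf.nn.relu, name="fc")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.ones((2, 3), np.float32)})
    print(out.shape)  # (2, 4): same as the input except the last dimension is `units`
```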
object dense_dyn(object inputs, object units, object activation, ImplicitContainer<T> use_bias, object kernel_initializer, ImplicitContainer<T> bias_initializer, object kernel_regularizer, object bias_regularizer, object activity_regularizer, object kernel_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for the densely-connected layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead. This layer implements the operation:
`outputs = activation(inputs * kernel + bias)`
where `activation` is the activation function passed as the `activation`
argument (if not `None`), `kernel` is a weights matrix created by the layer,
and `bias` is a bias vector created by the layer
(only if `use_bias` is `True`).
Parameters
-
object
inputs - Tensor input.
-
object
units - Integer or Long, dimensionality of the output space.
-
object
activation - Activation function (callable). Set it to None to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
kernel_initializer - Initializer function for the weight matrix. If `None` (default), weights are initialized using the default initializer used by `tf.compat.v1.get_variable`.
-
ImplicitContainer<T>
bias_initializer - Initializer function for the bias.
-
object
kernel_regularizer - Regularizer function for the weight matrix.
-
object
bias_regularizer - Regularizer function for the bias.
-
object
activity_regularizer - Regularizer function for the output.
-
object
kernel_constraint - An optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
bias_constraint - An optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - String, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor the same shape as `inputs` except the last dimension is of size `units`.
object dropout(IGraphNodeBase inputs, double rate, bool noise_shape, Nullable<int> seed, bool training, string name)
Applies Dropout to the input. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead. Dropout consists in randomly setting a fraction `rate` of input units to 0
at each update during training time, which helps prevent overfitting.
The units that are kept are scaled by `1 / (1 - rate)`, so that their
sum is unchanged at training time and inference time.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
double
rate - The dropout rate, between 0 and 1. E.g. "rate=0.1" would drop out 10% of input units.
-
bool
noise_shape - 1D tensor of type `int32` representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape `(batch_size, timesteps, features)`, and you want the dropout mask to be the same for all timesteps, you can use `noise_shape=[batch_size, 1, features]`.
-
Nullable<int>
seed - A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
-
string
name - The name of the layer (string).
Returns
-
object
- Output tensor.
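A minimal sketch of the `noise_shape` behaviour and the `1 / (1 - rate)` scaling described above, assuming the Python `tf.compat.v1` API that these bindings mirror; the concrete shapes and rate are illustrative.
```
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Illustrative input: (batch_size=2, timesteps=5, features=8).
x = tf.placeholder(tf.float32, shape=(2, 5, 8))

# noise_shape=[2, 1, 8] broadcasts one dropout mask over all timesteps,
# so a dropped feature is zeroed at every timestep of an example.
y = tf.layers.dropout(x, rate=0.25, noise_shape=[2, 1, 8], training=True)

with tf.Session() as sess:
    out = sess.run(y, feed_dict={x: np.ones((2, 5, 8), np.float32)})
    # Kept units are scaled by 1 / (1 - 0.25) ~= 1.333; dropped features are 0
    # at every timestep, so these two timestep rows match.
    print(out[0, 0])
    print(out[0, 1])
```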
object dropout(IGraphNodeBase inputs, double rate, IGraphNodeBase noise_shape, Nullable<int> seed, bool training, string name)
Applies Dropout to the input. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead. Dropout consists in randomly setting a fraction `rate` of input units to 0
at each update during training time, which helps prevent overfitting.
The units that are kept are scaled by `1 / (1 - rate)`, so that their
sum is unchanged at training time and inference time.
Parameters
-
IGraphNodeBase
inputs - Tensor input.
-
double
rate - The dropout rate, between 0 and 1. E.g. "rate=0.1" would drop out 10% of input units.
-
IGraphNodeBase
noise_shape - 1D tensor of type `int32` representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape `(batch_size, timesteps, features)`, and you want the dropout mask to be the same for all timesteps, you can use `noise_shape=[batch_size, 1, features]`.
-
Nullable<int>
seed - A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
-
string
name - The name of the layer (string).
Returns
-
object
- Output tensor.
object dropout(IEnumerable<IGraphNodeBase> inputs, double rate, IGraphNodeBase noise_shape, Nullable<int> seed, bool training, string name)
Applies Dropout to the input. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead. Dropout consists in randomly setting a fraction `rate` of input units to 0
at each update during training time, which helps prevent overfitting.
The units that are kept are scaled by `1 / (1 - rate)`, so that their
sum is unchanged at training time and inference time.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
double
rate - The dropout rate, between 0 and 1. E.g. "rate=0.1" would drop out 10% of input units.
-
IGraphNodeBase
noise_shape - 1D tensor of type `int32` representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape `(batch_size, timesteps, features)`, and you want the dropout mask to be the same for all timesteps, you can use `noise_shape=[batch_size, 1, features]`.
-
Nullable<int>
seed - A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
-
string
name - The name of the layer (string).
Returns
-
object
- Output tensor.
object dropout(IEnumerable<IGraphNodeBase> inputs, double rate, bool noise_shape, Nullable<int> seed, bool training, string name)
Applies Dropout to the input. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead. Dropout consists in randomly setting a fraction `rate` of input units to 0
at each update during training time, which helps prevent overfitting.
The units that are kept are scaled by `1 / (1 - rate)`, so that their
sum is unchanged at training time and inference time.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Tensor input.
-
double
rate - The dropout rate, between 0 and 1. E.g. "rate=0.1" would drop out 10% of input units.
-
bool
noise_shape - 1D tensor of type `int32` representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape `(batch_size, timesteps, features)`, and you want the dropout mask to be the same for all timesteps, you can use `noise_shape=[batch_size, 1, features]`.
-
Nullable<int>
seed - A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
-
bool
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
-
string
name - The name of the layer (string).
Returns
-
object
- Output tensor.
object dropout_dyn(object inputs, ImplicitContainer<T> rate, object noise_shape, object seed, ImplicitContainer<T> training, object name)
Applies Dropout to the input. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead. Dropout consists in randomly setting a fraction `rate` of input units to 0
at each update during training time, which helps prevent overfitting.
The units that are kept are scaled by `1 / (1 - rate)`, so that their
sum is unchanged at training time and inference time.
Parameters
-
object
inputs - Tensor input.
-
ImplicitContainer<T>
rate - The dropout rate, between 0 and 1. E.g. "rate=0.1" would drop out 10% of input units.
-
object
noise_shape - 1D tensor of type `int32` representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape `(batch_size, timesteps, features)`, and you want the dropout mask to be the same for all timesteps, you can use `noise_shape=[batch_size, 1, features]`.
-
object
seed - A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior.
-
ImplicitContainer<T>
training - Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
-
object
name - The name of the layer (string).
Returns
-
object
- Output tensor.
object flatten_dyn(object inputs, object name, ImplicitContainer<T> data_format)
Flattens an input tensor while preserving the batch axis (axis 0). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.flatten instead.
Parameters
-
object
inputs - Tensor input.
-
object
name - The name of the layer (string).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
Returns
-
object
- Reshaped tensor. Examples:
```
x = tf.compat.v1.placeholder(shape=(None, 4, 4), dtype='float32')
y = flatten(x)  # now `y` has shape `(None, 16)`

x = tf.compat.v1.placeholder(shape=(None, 3, None), dtype='float32')
y = flatten(x)  # now `y` has shape `(None, None)`
```
object max_pooling1d(object inputs, object pool_size, object strides, string padding, string data_format, string name)
Max Pooling layer for 1D inputs. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling1D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 3.
-
object
pool_size - An integer or tuple/list of a single integer, representing the size of the pooling window.
-
object
strides - An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- The output tensor, of rank 3.
object max_pooling1d_dyn(object inputs, object pool_size, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name)
Max Pooling layer for 1D inputs. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling1D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 3.
-
object
pool_size - An integer or tuple/list of a single integer, representing the size of the pooling window.
-
object
strides - An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
-
ImplicitContainer<T>
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
object
name - A string, the name of the layer.
Returns
-
object
- The output tensor, of rank 3.
object max_pooling2d(IGraphNodeBase inputs, int pool_size, int strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 4.
-
int
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
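A minimal sketch of how `pool_size`, `strides`, and `padding` determine the output shape, assuming the Python `tf.compat.v1` API that these bindings mirror; the input dimensions are illustrative.
```
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Illustrative channels_last input: (batch, height=28, width=28, channels=3).
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 3))

# A 2x2 window with stride 2 halves the spatial dimensions; 28 divides evenly,
# so 'valid' and 'same' agree here. With an odd input size, 'same' would round
# the output size up while 'valid' rounds down.
y_valid = tf.layers.max_pooling2d(x, pool_size=2, strides=2, padding='valid')
y_same = tf.layers.max_pooling2d(x, pool_size=(2, 2), strides=(2, 2), padding='same')

print(y_valid.shape)  # (?, 14, 14, 3)
print(y_same.shape)   # (?, 14, 14, 3)
```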
object max_pooling2d(IEnumerable<IGraphNodeBase> inputs, IEnumerable<int> pool_size, IEnumerable<int> strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - The tensor over which to pool. Must have rank 4.
-
IEnumerable<int>
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
IEnumerable<int>
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling2d(IEnumerable<IGraphNodeBase> inputs, IEnumerable<int> pool_size, int strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - The tensor over which to pool. Must have rank 4.
-
IEnumerable<int>
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling2d(IEnumerable<IGraphNodeBase> inputs, int pool_size, IEnumerable<int> strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - The tensor over which to pool. Must have rank 4.
-
int
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
IEnumerable<int>
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling2d(IEnumerable<IGraphNodeBase> inputs, int pool_size, int strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - The tensor over which to pool. Must have rank 4.
-
int
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling2d(IGraphNodeBase inputs, IEnumerable<int> pool_size, IEnumerable<int> strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 4.
-
IEnumerable<int>
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
IEnumerable<int>
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling2d(IGraphNodeBase inputs, IEnumerable<int> pool_size, int strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 4.
-
IEnumerable<int>
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling2d(IGraphNodeBase inputs, int pool_size, IEnumerable<int> strides, string padding, string data_format, string name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 4.
-
int
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
IEnumerable<int>
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling2d_dyn(object inputs, object pool_size, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name)
Max pooling layer for 2D inputs (e.g. images). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 4.
-
object
pool_size - An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
object
strides - An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
ImplicitContainer<T>
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
object
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling3d(IGraphNodeBase inputs, int pool_size, int strides, string padding, string data_format, string name)
Max pooling layer for 3D inputs (e.g. volumes). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling3D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 5.
-
int
pool_size - An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling3d(IGraphNodeBase inputs, ValueTuple<int, object, int> pool_size, ValueTuple<int, object, object> strides, string padding, string data_format, string name)
Max pooling layer for 3D inputs (e.g. volumes). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling3D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 5.
-
ValueTuple<int, object, int>
pool_size - An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object, object>
strides - An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling3d(IGraphNodeBase inputs, ValueTuple<int, object, int> pool_size, int strides, string padding, string data_format, string name)
Max pooling layer for 3D inputs (e.g. volumes). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling3D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 5.
-
ValueTuple<int, object, int>
pool_size - An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
int
strides - An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling3d(IGraphNodeBase inputs, int pool_size, ValueTuple<int, object, object> strides, string padding, string data_format, string name)
Max pooling layer for 3D inputs (e.g. volumes). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling3D instead.
Parameters
-
IGraphNodeBase
inputs - The tensor over which to pool. Must have rank 5.
-
int
pool_size - An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
ValueTuple<int, object, object>
strides - An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
string
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
string
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
string
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object max_pooling3d_dyn(object inputs, object pool_size, object strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, object name)
Max pooling layer for 3D inputs (e.g. volumes). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling3D instead.
Parameters
-
object
inputs - The tensor over which to pool. Must have rank 5.
-
object
pool_size - An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
-
object
strides - An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
padding - A string. The padding method, either 'valid' or 'same'. Case-insensitive.
-
ImplicitContainer<T>
data_format - A string. The ordering of the dimensions in the inputs. `channels_last` (default) and `channels_first` are supported. `channels_last` corresponds to inputs with shape `(batch, depth, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, depth, height, width)`.
-
object
name - A string, the name of the layer.
Returns
-
object
- Output tensor.
object separable_conv1d(IGraphNodeBase inputs, int filters, ValueTuple<int, int> kernel_size, ValueTuple<int, int> strides, string padding, string data_format, int dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 1D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv1D instead. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.
If `use_bias` is True and a bias initializer is provided,
it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
ValueTuple<int, int>
kernel_size - A single integer specifying the spatial dimensions of the filters.
-
ValueTuple<int, int>
strides - A single integer specifying the strides of the convolution. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
int
dilation_rate - A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
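A minimal sketch of the depthwise-then-pointwise behaviour and of `depth_multiplier`, assuming the Python `tf.compat.v1` API that these bindings mirror; the shapes and filter counts are illustrative.
```
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Illustrative channels_last input: (batch, length=100, channels=16).
x = tf.placeholder(tf.float32, shape=(None, 100, 16))

# Depthwise stage: each of the 16 input channels is filtered independently with
# a width-3 kernel, producing 16 * depth_multiplier = 32 intermediate channels.
# Pointwise stage: a width-1 convolution mixes those 32 channels into 24 filters.
y = tf.layers.separable_conv1d(x, filters=24, kernel_size=3, strides=1,
                               padding='same', depth_multiplier=2)

print(y.shape)  # (?, 100, 24)
```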
object separable_conv1d(IGraphNodeBase inputs, int filters, ValueTuple<int, int> kernel_size, Nullable<int> strides, string padding, string data_format, int dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 1D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv1D instead. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.
If `use_bias` is True and a bias initializer is provided,
it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
ValueTuple<int, int>
kernel_size - A single integer specifying the spatial dimensions of the filters.
-
Nullable<int>
strides - A single integer specifying the strides of the convolution. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
int
dilation_rate - A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object separable_conv1d(IGraphNodeBase inputs, int filters, Nullable<int> kernel_size, Nullable<int> strides, string padding, string data_format, int dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 1D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv1D instead. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.
If `use_bias` is True and a bias initializer is provided,
it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
Nullable<int>
kernel_size - A single integer specifying the spatial dimensions of the filters.
-
Nullable<int>
strides - A single integer specifying the strides of the convolution. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
int
dilation_rate - A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object separable_conv1d(IGraphNodeBase inputs, int filters, Nullable<int> kernel_size, ValueTuple<int, int> strides, string padding, string data_format, int dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 1D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv1D instead. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.
If `use_bias` is True and a bias initializer is provided,
it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
Nullable<int>
kernel_size - A single integer specifying the spatial dimensions of the filters.
-
ValueTuple<int, int>
strides - A single integer specifying the strides of the convolution. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
int
dilation_rate - A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object separable_conv1d_dyn(object inputs, object filters, object kernel_size, ImplicitContainer<T> strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilation_rate, ImplicitContainer<T> depth_multiplier, object activation, ImplicitContainer<T> use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for the depthwise separable 1D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv1D instead. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.
If `use_bias` is True and a bias initializer is provided,
it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
Parameters
-
object
inputs - Input tensor.
-
object
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
object
kernel_size - A single integer specifying the spatial dimensions of the filters.
-
ImplicitContainer<T>
strides - A single integer specifying the strides of the convolution. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
ImplicitContainer<T>
padding - One of `"valid"` or `"same"` (case-insensitive).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`.
-
ImplicitContainer<T>
dilation_rate - A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
ImplicitContainer<T>
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object separable_conv2d(IGraphNodeBase inputs, int filters, int kernel_size, Nullable<ValueTuple<int, object>> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv2D instead. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.
If `use_bias` is True and a bias initializer is provided,
it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - A tuple or list of 2 integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
Nullable<ValueTuple<int, object>>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object separable_conv2d(IGraphNodeBase inputs, int filters, IEnumerable<int> kernel_size, Nullable<ValueTuple<int, object>> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv2D instead.
This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If `use_bias` is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output.
Parameters
-
IGraphNodeBase
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - A tuple or list of 2 integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
Nullable<ValueTuple<int, object>>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object separable_conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, int kernel_size, Nullable<ValueTuple<int, object>> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv2D instead.
This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If `use_bias` is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
int
kernel_size - A tuple or list of 2 integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
Nullable<ValueTuple<int, object>>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
object separable_conv2d(IEnumerable<IGraphNodeBase> inputs, int filters, IEnumerable<int> kernel_size, Nullable<ValueTuple<int, object>> strides, string padding, string data_format, ValueTuple<int, object> dilation_rate, int depth_multiplier, object activation, bool use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, bool trainable, string name, Nullable<bool> reuse)
Functional interface for the depthwise separable 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv2D instead.
This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If `use_bias` is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output.
Parameters
-
IEnumerable<IGraphNodeBase>
inputs - Input tensor.
-
int
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
IEnumerable<int>
kernel_size - A tuple or list of 2 integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
Nullable<ValueTuple<int, object>>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
string
padding - One of `"valid"` or `"same"` (case-insensitive).
-
string
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ValueTuple<int, object>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
int
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
bool
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
bool
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
string
name - A string, the name of the layer.
-
Nullable<bool>
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
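For new code, the deprecation notice points to `tf.keras.layers.SeparableConv2D`. Below is a hedged sketch of that replacement, again using the Python API the binding mirrors; the constraint object shown (`MaxNorm`) is just one example of the projection functions the `*_constraint` parameters accept, and the shapes are illustrative assumptions.

```python
import tensorflow as tf

# Keras replacement for the deprecated functional interface: same core arguments
# (filters, kernel_size, strides, padding, depth_multiplier, activation, constraints).
layer = tf.keras.layers.SeparableConv2D(
    filters=64,
    kernel_size=(3, 3),
    strides=(1, 1),
    padding="same",
    depth_multiplier=2,
    activation="relu",
    depthwise_constraint=tf.keras.constraints.MaxNorm(2.0),  # example norm constraint
    name="sep_conv2d")

x = tf.random.normal([8, 32, 32, 3])   # batch, height, width, channels
y = layer(x)
print(y.shape)                          # (8, 32, 32, 64)
```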
object separable_conv2d_dyn(object inputs, object filters, object kernel_size, ImplicitContainer<T> strides, ImplicitContainer<T> padding, ImplicitContainer<T> data_format, ImplicitContainer<T> dilation_rate, ImplicitContainer<T> depth_multiplier, object activation, ImplicitContainer<T> use_bias, object depthwise_initializer, object pointwise_initializer, ImplicitContainer<T> bias_initializer, object depthwise_regularizer, object pointwise_regularizer, object bias_regularizer, object activity_regularizer, object depthwise_constraint, object pointwise_constraint, object bias_constraint, ImplicitContainer<T> trainable, object name, object reuse)
Functional interface for the depthwise separable 2D convolution layer. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.SeparableConv2D instead.
This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If `use_bias` is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output.
Parameters
-
object
inputs - Input tensor.
-
object
filters - Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
-
object
kernel_size - A tuple or list of 2 integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
-
ImplicitContainer<T>
strides - A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `dilation_rate` value != 1.
-
ImplicitContainer<T>
padding - One of `"valid"` or `"same"` (case-insensitive).
-
ImplicitContainer<T>
data_format - A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`.
-
ImplicitContainer<T>
dilation_rate - An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any `dilation_rate` value != 1 is incompatible with specifying any stride value != 1.
-
ImplicitContainer<T>
depth_multiplier - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to `num_filters_in * depth_multiplier`.
-
object
activation - Activation function. Set it to None to maintain a linear activation.
-
ImplicitContainer<T>
use_bias - Boolean, whether the layer uses a bias.
-
object
depthwise_initializer - An initializer for the depthwise convolution kernel.
-
object
pointwise_initializer - An initializer for the pointwise convolution kernel.
-
ImplicitContainer<T>
bias_initializer - An initializer for the bias vector. If None, the default initializer will be used.
-
object
depthwise_regularizer - Optional regularizer for the depthwise convolution kernel.
-
object
pointwise_regularizer - Optional regularizer for the pointwise convolution kernel.
-
object
bias_regularizer - Optional regularizer for the bias vector.
-
object
activity_regularizer - Optional regularizer function for the output.
-
object
depthwise_constraint - Optional projection function to be applied to the depthwise kernel after being updated by an `Optimizer` (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
-
object
pointwise_constraint - Optional projection function to be applied to the pointwise kernel after being updated by an `Optimizer`.
-
object
bias_constraint - Optional projection function to be applied to the bias after being updated by an `Optimizer`.
-
ImplicitContainer<T>
trainable - Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-
object
name - A string, the name of the layer.
-
object
reuse - Boolean, whether to reuse the weights of a previous layer by the same name.
Returns
-
object
- Output tensor.
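To make the `depth_multiplier` arithmetic concrete, a separable convolution can be viewed as a depthwise stage followed by a 1x1 pointwise stage. The sketch below (Python, Keras layers, not part of this binding's documented surface) shows that the depthwise stage emits `num_filters_in * depth_multiplier` channels before the pointwise convolution mixes them down to `filters`; the shapes are illustrative assumptions.

```python
import tensorflow as tf

x = tf.random.normal([1, 16, 16, 3])    # 3 input channels

# Depthwise stage: one set of 3x3 filters per input channel, depth_multiplier copies each.
depthwise = tf.keras.layers.DepthwiseConv2D(
    kernel_size=3, depth_multiplier=2, padding="same")

# Pointwise stage: 1x1 convolution that mixes the depthwise channels into `filters`.
pointwise = tf.keras.layers.Conv2D(filters=64, kernel_size=1)

mid = depthwise(x)
out = pointwise(mid)
print(mid.shape)   # (1, 16, 16, 6)  -> num_filters_in * depth_multiplier = 3 * 2
print(out.shape)   # (1, 16, 16, 64)
```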