Type: Conv2DTranspose
Namespace: tensorflow.keras.layers
Parent: Conv2D
Interfaces: IConv2DTranspose
Transposed convolution layer (sometimes called deconvolution).

The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e. from something that has the shape of the output of some convolution to something that has the shape of its input, while maintaining a connectivity pattern that is compatible with said convolution.

When using this layer as the first layer in a model, provide the keyword argument `input_shape` (a tuple of integers that does not include the sample axis), e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures with `data_format="channels_last"`.
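As a usage illustration, below is a minimal sketch written against the Python `tf.keras` API (this page documents a language binding of that API, so the call syntax in your environment may differ slightly). The filter count, kernel size, and strides are illustrative choices, not defaults.

```python
import tensorflow as tf

# Minimal sketch: upsample 128x128 RGB images with a transposed convolution.
# filters=16, kernel_size=(3, 3), and strides=(2, 2) are illustrative values.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2DTranspose(
        filters=16,
        kernel_size=(3, 3),
        strides=(2, 2),
        padding="same",
        activation="relu",
        input_shape=(128, 128, 3),   # sample axis is not included
        data_format="channels_last",
    ),
])

model.summary()  # output shape: (None, 256, 256, 16)
```

With `strides=(2, 2)` and `padding="same"`, the spatial dimensions are doubled, so a 128x128 input yields a 256x256 output whose channel count equals `filters`.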
Methods
Properties
- activation
- activity_regularizer
- activity_regularizer_dyn
- bias
- bias_constraint
- bias_initializer
- bias_regularizer
- built
- data_format
- dilation_rate
- dtype
- dtype_dyn
- dynamic
- dynamic_dyn
- filters
- inbound_nodes
- inbound_nodes_dyn
- input
- input_dyn
- input_mask
- input_mask_dyn
- input_shape
- input_shape_dyn
- input_spec
- input_spec_dyn
- kernel
- kernel_constraint
- kernel_initializer
- kernel_regularizer
- kernel_size
- losses
- losses_dyn
- metrics
- metrics_dyn
- name
- name_dyn
- name_scope
- name_scope_dyn
- non_trainable_variables
- non_trainable_variables_dyn
- non_trainable_weights
- non_trainable_weights_dyn
- outbound_nodes
- outbound_nodes_dyn
- output
- output_dyn
- output_mask
- output_mask_dyn
- output_padding
- output_shape
- output_shape_dyn
- padding
- PythonObject
- rank
- stateful
- strides
- submodules
- submodules_dyn
- supports_masking
- trainable
- trainable_dyn
- trainable_variables
- trainable_variables_dyn
- trainable_weights
- trainable_weights_dyn
- updates
- updates_dyn
- use_bias
- variables
- variables_dyn
- weights
- weights_dyn