Convolutional layers are the core building blocks of convolutional neural networks (CNNs).
There are three dimensionalities of convolutional layer: Conv1D, Conv2D, and Conv3D.
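A minimal sketch constructing one layer of each dimensionality; the argument values (filters=32, kernel_size=3) are hypothetical and chosen only for illustration:

import tensorflow as tf

conv1d = tf.keras.layers.Conv1D(filters=32, kernel_size=3)  # e.g. sequences / time series
conv2d = tf.keras.layers.Conv2D(filters=32, kernel_size=3)  # e.g. images
conv3d = tf.keras.layers.Conv3D(filters=32, kernel_size=3)  # e.g. volumes / video

The full Conv2D signature is: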
tf.keras.layers.Conv2D(
filters,
kernel_size,
strides=(1, 1),
padding="valid",
data_format=None,
dilation_rate=(1, 1),
groups=1,
activation=None,
use_bias=True,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs
)
With data_format="channels_last" (the default), the layer takes (batch_size, height, width, channels) as input; with data_format="channels_first", it takes (batch_size, channels, height, width) as input.
Input Shape: batch_shape + (channels, rows, cols) if data format is channels first, otherwise batch_shape + (rows, cols, channels).
Output Shape: batch_shape + (filters, new_rows, new_cols) if data format is channels first, otherwise batch_shape + (new_rows, new_cols, filters). Rows and columns may change due to padding.
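As a rough shape check (the tensor sizes here are made up for illustration), a channels_last input of shape (batch_size, height, width, channels) = (4, 28, 28, 3) passed through a Conv2D with 16 filters:

import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))  # channels_last input
conv = tf.keras.layers.Conv2D(filters=16, kernel_size=3, padding="valid")
print(conv(x).shape)  # (4, 26, 26, 16)

With padding="valid", the 3x3 kernel trims one pixel off each border (28 -> 26), and the channel dimension becomes filters=16.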
SeparableConv1D / SeparableConv2D: similar to the Conv*D layers, but the channels are kept separate at first (a depthwise step) and only mixed at the end (a pointwise step). The 2D version is similar to an extreme version of an Inception block.
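A minimal sketch of the 2D version (shapes and argument values are illustrative only):

import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
sep = tf.keras.layers.SeparableConv2D(filters=16, kernel_size=3, padding="same")
print(sep(x).shape)  # (4, 28, 28, 16): a depthwise pass per channel, then a pointwise mix into 16 filters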
DepthwiseConv2D: performs the first half of SeparableConv2D, in which the channels are kept separate (no pointwise mixing step).
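A matching sketch (again with illustrative values) showing that the depthwise-only layer leaves the channel count unchanged:

import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
dw = tf.keras.layers.DepthwiseConv2D(kernel_size=3, padding="same")
print(dw(x).shape)  # (4, 28, 28, 3): each input channel is filtered independently, so channels stay at 3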
"Undoes" a convolutional layer. Is generally used to increase the dimensionality (rows and columns) while decreasing the channel number.
tf.keras.layers.Conv2DTranspose(
filters,
kernel_size,
strides=(1, 1),
padding="valid",
output_padding=None,
data_format=None,
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs
)
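A small upsampling sketch (the shapes are illustrative): with strides=2 and padding="same", Conv2DTranspose roughly doubles the rows and columns, while filters=32 brings the channels down from 64 to 32.

import tensorflow as tf

x = tf.random.normal((4, 7, 7, 64))
up = tf.keras.layers.Conv2DTranspose(filters=32, kernel_size=3, strides=2, padding="same")
print(up(x).shape)  # (4, 14, 14, 32)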