Used to instantiate a Keras tensor.
tf.keras.Input(
    shape=None,
    batch_size=None,
    name=None,
    dtype=None,
    sparse=False,
    tensor=None,
    ragged=False,
    **kwargs
)
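A minimal sketch of how Input is typically used (the shape and name below are made up for illustration):

import tensorflow as tf

# A batch of 784-dimensional vectors; the batch size is left unspecified.
inputs = tf.keras.Input(shape=(784,), name="digits")
print(inputs.shape)  # (None, 784)
print(inputs.dtype)  # float32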
The most common layer type: a layer that is fully connected to the previous layer.
tf.keras.layers.Dense(
    units,
    activation=None,
    use_bias=True,
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    **kwargs
)
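A quick sketch of stacking Dense layers in a functional-API model (the layer sizes are arbitrary):

import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)  # units is the only required argument
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
model.summary()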
Applies an activation function to the output of the previous layer.
tf.keras.layers.Activation(activation, **kwargs)
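A minimal sketch showing Activation as a separate layer (sizes are arbitrary); the two layer calls below are equivalent to Dense(64, activation="relu"):

import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
x = tf.keras.layers.Dense(64)(inputs)
x = tf.keras.layers.Activation("relu")(x)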
This site does a great job of explaining what an embedding layer does: "an embedding tries to find the optimal mapping of each of the unique words to a vector of real numbers. The size of that vector is equal to the output_dim". An embedding layer maps integer indices drawn from the vocabulary to dense feature vectors. Must be the first layer of a model.
tf.keras.layers.Embedding(
    input_dim,
    output_dim,
    embeddings_initializer="uniform",
    embeddings_regularizer=None,
    activity_regularizer=None,
    embeddings_constraint=None,
    mask_zero=False,
    input_length=None,
    **kwargs
)
An Embedding is often followed by a Flatten and then a Dense layer later on.
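A rough sketch of an Embedding followed by Flatten and Dense, as noted above (the vocabulary size, output_dim, and sequence length are made up):

import numpy as np
import tensorflow as tf

# Map each of 1000 possible word indices to a 64-dimensional vector;
# inputs are sequences of 10 indices.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=1000, output_dim=64, input_length=10),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
])

batch = np.random.randint(0, 1000, size=(32, 10))  # 32 integer sequences
print(model(batch).shape)  # (32, 1)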
Used primarily in RNNs: skips timesteps whose values all equal mask_value. Good for skipping padding when using an LSTM.
tf.keras.layers.Masking(mask_value=0.0, **kwargs)
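A small sketch of Masking in front of an LSTM (the data is made up; all-zero timesteps represent padding):

import numpy as np
import tensorflow as tf

# 2 sequences, 3 timesteps, 2 features; timesteps of all zeros are padding.
x = np.array([[[1., 2.], [0., 0.], [3., 4.]],
              [[5., 6.], [7., 8.], [0., 0.]]])

masked = tf.keras.layers.Masking(mask_value=0.0)(x)
out = tf.keras.layers.LSTM(4)(masked)  # the LSTM skips the masked timesteps
print(out.shape)  # (2, 4)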
Wraps an arbitrary expression as a layer.
tf.keras.layers.Lambda(
    function, output_shape=None, mask=None, arguments=None, **kwargs
)
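A one-line sketch of wrapping an expression as a layer (the doubling function is just an example):

import tensorflow as tf

double = tf.keras.layers.Lambda(lambda t: t * 2)
print(double(tf.constant([1.0, 2.0, 3.0])))  # tf.Tensor([2. 4. 6.], ...)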