Applies Alpha Dropout to the input.
Alpha Dropout is a dropout variant that keeps the mean and variance of its inputs at their original values, in order to preserve the self-normalizing property even after this dropout is applied.
layer_alpha_dropout(
  object,
  rate,
  noise_shape = NULL,
  seed = NULL,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
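A minimal sketch of adding this layer to a sequential model via the pipe, assuming the keras R package is attached; the layer widths and the 0.1 rate are illustrative choices, not prescribed values:

library(keras)

model <- keras_model_sequential() %>%
  # SELU activation, since Alpha Dropout is designed for self-normalizing nets
  layer_dense(units = 64, activation = "selu", input_shape = c(784)) %>%
  layer_alpha_dropout(rate = 0.1) %>%
  layer_dense(units = 10, activation = "softmax")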
Arguments:

object: Model or layer object.

rate: float, drop probability (as with layer_dropout()). The multiplicative noise will have standard deviation sqrt(rate / (1 - rate)).

noise_shape: Noise shape.

seed: An integer to use as random seed.

input_shape: Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model.

batch_input_shape: Shapes, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors.

batch_size: Fixed batch size for layer.

dtype: The data type expected by the input, as a string ("float32", "float64", "int32", ...).

name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.

trainable: Whether the layer weights will be updated during training.

weights: Initial weights for layer.
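As a quick check of the rate description above, the implied standard deviation of the multiplicative noise can be computed directly (the 0.1 value is just an illustrative choice):

# Standard deviation of the multiplicative noise for a given drop probability
rate <- 0.1
sqrt(rate / (1 - rate))  # ~0.333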
Alpha Dropout fits well with Scaled Exponential Linear Units (SELU) by randomly setting activations to the negative saturation value rather than to zero.
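Since Alpha Dropout is paired with SELU, a hedged sketch of a self-normalizing feed-forward stack follows the common SELU recipe, where lecun_normal initialization is the usual companion; the widths and rates here are illustrative, not prescribed:

library(keras)

model <- keras_model_sequential() %>%
  # lecun_normal initialization keeps activations self-normalizing with SELU
  layer_dense(units = 128, activation = "selu",
              kernel_initializer = "lecun_normal",
              input_shape = c(100)) %>%
  layer_alpha_dropout(rate = 0.05) %>%
  layer_dense(units = 128, activation = "selu",
              kernel_initializer = "lecun_normal") %>%
  layer_alpha_dropout(rate = 0.05) %>%
  layer_dense(units = 1, activation = "sigmoid")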
Input shape: Arbitrary. Use the keyword argument input_shape (list of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Same shape as input.
Other noise layers: layer_gaussian_dropout(), layer_gaussian_noise()