Stochastic gradient descent optimizer
Stochastic gradient descent optimizer with support for momentum, learning rate decay, and Nesterov momentum.
optimizer_sgd(
  lr = 0.01,
  momentum = 0,
  decay = 0,
  nesterov = FALSE,
  clipnorm = NULL,
  clipvalue = NULL
)
Arguments:

lr          float >= 0. Learning rate.

momentum    float >= 0. Parameter that accelerates SGD in the relevant
            direction and dampens oscillations.

decay       float >= 0. Learning rate decay over each update.

nesterov    boolean. Whether to apply Nesterov momentum.

clipnorm    Gradients will be clipped when their L2 norm exceeds this value.

clipvalue   Gradients will be clipped when their absolute value exceeds
            this value.
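For illustration, a minimal sketch of constructing this optimizer (the argument values below are arbitrary examples, not defaults):

library(keras)

# SGD with momentum, Nesterov acceleration, and gradient norm clipping;
# the specific values here are illustrative only.
# decay, if set, lowers the effective learning rate over updates
# (in legacy Keras, roughly lr / (1 + decay * iterations)).
opt <- optimizer_sgd(
  lr = 0.01,
  momentum = 0.9,
  nesterov = TRUE,
  clipnorm = 1.0
)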
Optimizer for use with compile.keras.engine.training.Model.
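For example, the returned optimizer can be passed to compile() on a keras model (a minimal sketch; the model architecture below is hypothetical):

library(keras)

# a small illustrative model; layer sizes are arbitrary
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1)

# compile with SGD as the optimizer
model %>% compile(
  optimizer = optimizer_sgd(lr = 0.01, momentum = 0.9, nesterov = TRUE),
  loss = "mse"
)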
Other optimizers: optimizer_adadelta(), optimizer_adagrad(), optimizer_adamax(), optimizer_adam(), optimizer_nadam(), optimizer_rmsprop()