Reduce learning rate when a metric has stopped improving.
Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity, and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
callback_reduce_lr_on_plateau(
  monitor = "val_loss",
  factor = 0.1,
  patience = 10,
  verbose = 0,
  mode = c("auto", "min", "max"),
  min_delta = 1e-04,
  cooldown = 0,
  min_lr = 0
)
monitor: quantity to be monitored.

factor: factor by which the learning rate will be reduced. new_lr = lr * factor.

patience: number of epochs with no improvement after which the learning rate will be reduced.

verbose: int. 0: quiet, 1: update messages.

mode: one of "auto", "min", "max". In "min" mode, the learning rate is reduced when the monitored quantity has stopped decreasing; in "max" mode, it is reduced when the monitored quantity has stopped increasing; in "auto" mode, the direction is inferred automatically from the name of the monitored quantity.

min_delta: threshold for measuring the new optimum, to focus only on significant changes.

cooldown: number of epochs to wait before resuming normal operation after the learning rate has been reduced.

min_lr: lower bound on the learning rate.
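A minimal sketch of attaching this callback during training. Here `model`, `x_train`, and `y_train` are hypothetical placeholders for a compiled keras model and training data; the factor, patience, and min_lr values are illustrative choices, not defaults.

library(keras)

# Halve the learning rate after 5 epochs without improvement
# in validation loss, never dropping below 1e-6.
lr_cb <- callback_reduce_lr_on_plateau(
  monitor = "val_loss",
  factor = 0.5,
  patience = 5,
  min_lr = 1e-6,
  verbose = 1
)

# `model`, `x_train`, and `y_train` are assumed to already exist.
history <- model %>% fit(
  x_train, y_train,
  epochs = 50,
  validation_split = 0.2,
  callbacks = list(lr_cb)
)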