Configuration for the eval component of train_and_evaluate
EvalSpec
combines the details of evaluating the trained model with those of its
export. Evaluation consists of computing metrics to judge the performance of
the trained model. Export writes the trained model out to external
storage.
eval_spec(input_fn, steps = 100, name = NULL, hooks = NULL, exporters = NULL, start_delay_secs = 120, throttle_secs = 600)
input_fn |
Evaluation input function returning a tuple of features and labels, where each element is a tensor or a named list of tensors. |
steps |
Positive number of steps for which to evaluate the model.
If `NULL`, evaluation continues until `input_fn` raises an end-of-input exception. |
name |
Name of the evaluation, for users who need to run multiple evaluations on different data sets. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard. |
hooks |
List of session run hooks to run during evaluation. |
exporters |
List of exporters (or a single exporter, or `NULL`) used to write out the trained model after each evaluation. |
start_delay_secs |
Start evaluating after waiting for this many seconds. |
throttle_secs |
Do not re-evaluate unless the last evaluation was started at least this many seconds ago. Evaluation also does not occur if no new checkpoints are available; hence this is a minimum interval, not a guarantee. |
Other training methods: train_and_evaluate.tf_estimator, train_spec
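A minimal sketch of how `eval_spec` is typically paired with `train_spec` in a `train_and_evaluate()` call. The estimator and the input functions (`model`, `train_input`, `eval_input`) are illustrative placeholders, not part of this page; they are assumed to have been defined elsewhere with the tfestimators package.

```r
library(tfestimators)

# Evaluation configuration: run 100 evaluation steps, waiting 2 minutes
# before the first evaluation and at least 10 minutes between evaluations.
spec <- eval_spec(
  input_fn = eval_input,
  steps = 100,
  start_delay_secs = 120,
  throttle_secs = 600
)

# Train and periodically evaluate using the spec above.
train_and_evaluate(
  model,
  train_spec = train_spec(input_fn = train_input, max_steps = 1000),
  eval_spec = spec
)
```

Because `throttle_secs` only sets a lower bound, actual evaluation timing also depends on when new checkpoints are written during training.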