
dnn_linear_combined_estimators

Linear Combined Deep Neural Networks


Description

Also known as wide-n-deep estimators, these are estimators for TensorFlow joined Linear and DNN models, for both regression and classification.

Usage

dnn_linear_combined_regressor(model_dir = NULL,
  linear_feature_columns = NULL, linear_optimizer = "Ftrl",
  dnn_feature_columns = NULL, dnn_optimizer = "Adagrad",
  dnn_hidden_units = NULL, dnn_activation_fn = "relu",
  dnn_dropout = NULL, label_dimension = 1L, weight_column = NULL,
  input_layer_partitioner = NULL, config = NULL)

dnn_linear_combined_classifier(model_dir = NULL,
  linear_feature_columns = NULL, linear_optimizer = "Ftrl",
  dnn_feature_columns = NULL, dnn_optimizer = "Adagrad",
  dnn_hidden_units = NULL, dnn_activation_fn = "relu",
  dnn_dropout = NULL, n_classes = 2L, weight_column = NULL,
  label_vocabulary = NULL, input_layer_partitioner = NULL,
  config = NULL)

Arguments

model_dir

Directory to save the model parameters, graph, and so on. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.

linear_feature_columns

The feature columns used by the linear (wide) part of the model.

linear_optimizer

Either the name of the optimizer to be used when training the model, or a TensorFlow optimizer instance. Defaults to the FTRL optimizer.

dnn_feature_columns

The feature columns used by the neural network (deep) part in the model.

dnn_optimizer

Either the name of the optimizer to be used when training the model, or a TensorFlow optimizer instance. Defaults to the Adagrad optimizer.

dnn_hidden_units

An integer vector, indicating the number of hidden units in each layer. All layers are fully connected. For example, c(64, 32) means the first layer has 64 nodes, and the second layer has 32 nodes.

dnn_activation_fn

The activation function to apply to each layer. This can either be an actual activation function (e.g. tf$nn$relu), or the name of an activation function (e.g. "relu"). Defaults to the "relu" activation function. See https://www.tensorflow.org/api_guides/python/nn#Activation_Functions for documentation related to the set of activation functions available in TensorFlow.

dnn_dropout

When not NULL, the probability that a given coordinate will be dropped out.

label_dimension

Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]).

weight_column

A string, or a numeric column created by column_numeric(), defining the feature column representing weights. It is used to down-weight or boost examples during training, and is multiplied by the loss of the example. If it is a string, it is used as a key to fetch the weight tensor from the features argument. If it is a numeric column, the raw tensor is fetched by the key weight_column$key, and weight_column$normalizer_fn is then applied to it to obtain the weight tensor.
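As a hedged sketch of the two forms (the feature name "wt", the column "x", and the constructor arguments shown are purely illustrative):

```r
library(tfestimators)

# Hypothetical feature columns for illustration.
cols <- feature_columns(column_numeric("x"))

# Form 1: a string key -- the "wt" feature is fetched from the input
# features and used directly as the per-example weight.
regressor_string <- dnn_linear_combined_regressor(
  linear_feature_columns = cols,
  dnn_feature_columns = cols,
  dnn_hidden_units = c(32L),
  weight_column = "wt"
)

# Form 2: a numeric column -- the raw "wt" tensor is fetched by the key
# "wt" and normalizer_fn is applied before it is used as the weight.
wt_col <- column_numeric("wt", normalizer_fn = function(x) x / 100)
regressor_column <- dnn_linear_combined_regressor(
  linear_feature_columns = cols,
  dnn_feature_columns = cols,
  dnn_hidden_units = c(32L),
  weight_column = wt_col
)
```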

input_layer_partitioner

An optional partitioner for the input layer. Defaults to min_max_variable_partitioner with min_slice_size 64 << 20 (i.e. 64 MB).

config

A run configuration created by run_config(), used to configure the runtime settings.

n_classes

The number of label classes.

label_vocabulary

A list of strings representing possible label values. If given, labels must be of string type and take values in label_vocabulary. If not given, labels are assumed to be already encoded: as integers or floats within [0, 1] when n_classes == 2, and as integer values in {0, 1, ..., n_classes - 1} when n_classes > 2. An error is raised if no vocabulary is provided and the labels are strings.
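A hedged end-to-end sketch of a wide-n-deep classifier follows; the data frame df, the feature names ("city", "age", "label"), and the vocabulary are all hypothetical, and a working TensorFlow installation is assumed:

```r
library(tfestimators)

# A sparse categorical feature shared by both parts of the model.
city_col <- column_categorical_with_vocabulary_list(
  "city", vocabulary_list = c("NYC", "SF", "LA")
)

# The wide (linear) part consumes the sparse feature directly; the deep
# (DNN) part sees dense numerics plus an embedding of the same feature.
wide_columns <- feature_columns(city_col)
deep_columns <- feature_columns(
  column_numeric("age"),
  column_embedding(city_col, dimension = 8L)
)

model <- dnn_linear_combined_classifier(
  linear_feature_columns = wide_columns,
  dnn_feature_columns = deep_columns,
  dnn_hidden_units = c(64L, 32L),
  dnn_dropout = 0.5,
  n_classes = 2L
)

# Train from a data frame via an input function, then generate predictions.
model %>% train(input_fn(df, features = c(city, age), response = label))
predictions <- model %>% predict(input_fn(df, features = c(city, age)))
```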

See Also


tfestimators

Interface to 'TensorFlow' Estimators

v1.9.1
Apache License 2.0
Authors
JJ Allaire [aut], Yuan Tang [aut] (<https://orcid.org/0000-0001-5243-233X>), Kevin Ushey [aut], Kevin Kuo [aut, cre] (<https://orcid.org/0000-0001-7803-7901>), Daniel Falbel [ctb, cph], RStudio [cph, fnd], Google Inc. [cph]
