Subsetting operator for Spark dataframe
Subsetting operator for a Spark dataframe, allowing a subset of column(s) to be selected using syntax similar to that supported by R data frames
Usage

## S3 method for class 'tbl_spark'
x[i]
Arguments

x
The Spark dataframe

i
Expression specifying a subset of column(s) to include in or exclude from the result (e.g., ["col1"], [c("col1", "col2")], [1:10], [-1], [NULL], or [])
Examples

## Not run: 
library(sparklyr)

sc <- spark_connect(master = "spark://HOST:PORT")
example_sdf <- copy_to(sc, tibble::tibble(a = 1, b = 2))
example_sdf["a"] %>% print()

## End(Not run)
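As a further illustration, the sketch below exercises the other forms of i listed above. The local master and the mtcars_sdf name are assumptions made for this example, not part of the documented interface; [NULL] and [] are also accepted, as described under i.

## Not run: 
library(sparklyr)

# Assumes a local Spark installation is available for illustration
sc <- spark_connect(master = "local")
mtcars_sdf <- copy_to(sc, mtcars)

mtcars_sdf[c("mpg", "cyl")]   # subset of columns by name
mtcars_sdf[1:3]               # subset of columns by position
mtcars_sdf[-1]                # exclude a column by negative index

spark_disconnect(sc)

## End(Not run)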