
spark_write_avro

Serialize a Spark DataFrame into Apache Avro format


Description

Serialize a Spark DataFrame into Apache Avro format. Note that this functionality requires the Spark connection sc to be instantiated with either an explicitly specified Spark version (i.e., spark_connect(..., version = <version>, packages = c("avro", <other package(s)>), ...)) or a specific version of the spark-avro package to use (e.g., spark_connect(..., packages = c("org.apache.spark:spark-avro_2.12:3.0.0", <other package(s)>), ...)).
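
A minimal sketch of the two connection styles described above. The local master, Spark version 3.0.0, and the spark-avro artifact coordinates are illustrative assumptions; pick the artifact matching your Spark and Scala versions.

```r
library(sparklyr)

# Option 1: give spark_connect() an explicit Spark version so it can
# resolve a compatible avro package on its own (version is an assumption)
sc <- spark_connect(master = "local", version = "3.0.0", packages = "avro")

# Option 2: pin a specific spark-avro artifact yourself
# (coordinates below assume Spark 3.0.0 built against Scala 2.12)
# sc <- spark_connect(
#   master = "local",
#   packages = "org.apache.spark:spark-avro_2.12:3.0.0"
# )
```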

Usage

spark_write_avro(
  x,
  path,
  avro_schema = NULL,
  record_name = "topLevelRecord",
  record_namespace = "",
  compression = "snappy",
  partition_by = NULL
)

Arguments

x

A Spark DataFrame or dplyr operation

path

The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3a://" and "file://" protocols.

avro_schema

Optional Avro schema in JSON format

record_name

Optional top level record name in write result (default: "topLevelRecord")

record_namespace

Record namespace in write result (default: "")

compression

Compression codec to use (default: "snappy")

partition_by

A character vector. Partitions the output by the given columns on the file system.
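
A hedged end-to-end sketch of the write call, assuming a Spark connection sc already instantiated with avro support as the Description requires. The dataset, output path, and partition column are illustrative choices, not part of the API.

```r
library(sparklyr)

# Copy a small local data frame into Spark for demonstration
# (copy_to() replaces "." in column names with "_")
iris_tbl <- copy_to(sc, iris, overwrite = TRUE)

# Write it out as Avro, snappy-compressed, partitioned by Species;
# the "file://" path assumes a local cluster with a writable /tmp
spark_write_avro(
  iris_tbl,
  path = "file:///tmp/iris_avro",
  compression = "snappy",
  partition_by = "Species"
)
```

Pass avro_schema a JSON schema string only when you need to override the schema Spark infers from the DataFrame.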

sparklyr

R Interface to Apache Spark

v1.6.2
Apache License 2.0 | file LICENSE
Authors
Javier Luraschi [aut], Kevin Kuo [aut] (<https://orcid.org/0000-0001-7803-7901>), Kevin Ushey [aut], JJ Allaire [aut], Samuel Macedo [ctb], Hossein Falaki [aut], Lu Wang [aut], Andy Zhang [aut], Yitao Li [aut, cre] (<https://orcid.org/0000-0002-1261-905X>), Jozef Hajnala [ctb], Maciej Szymkiewicz [ctb] (<https://orcid.org/0000-0003-1469-9396>), Wil Davis [ctb], RStudio [cph], The Apache Software Foundation [aut, cph]
