open_dataset

Open a multi-file dataset


Description

Arrow Datasets allow you to query against data that has been split across multiple files. This sharding of data may indicate partitioning, which can accelerate queries that only touch some partitions (files). Call open_dataset() to point to a directory of data files and return a Dataset, then use dplyr methods to query it.
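A minimal sketch of that workflow (the directory path and column names here are hypothetical):

library(arrow)
library(dplyr)

# Open a directory of Parquet files; nothing is read into memory yet
ds <- open_dataset("path/to/data")

# Build a query lazily with dplyr verbs, then collect() the result into R
ds %>%
  filter(year == 2019, month == 1) %>%
  select(year, month, value) %>%
  collect()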

Usage

open_dataset(
  sources,
  schema = NULL,
  partitioning = hive_partition(),
  unify_schemas = NULL,
  ...
)

Arguments

sources

One of:

  • a string path or URI to a directory containing data files

  • a string path or URI to a single file

  • a character vector of paths or URIs to individual data files

  • a list of Dataset objects as created by this function

  • a list of DatasetFactory objects as created by dataset_factory().

When sources is a vector of file URIs, they must all use the same protocol, point to files located in the same file system, and have the same format.
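
For illustration (all paths below are hypothetical), a few of the accepted forms of sources:

# A single directory of data files
ds_dir <- open_dataset("data/")

# A character vector of individual files (same file system, same format)
ds_files <- open_dataset(c("data/part-0.parquet", "data/part-1.parquet"))

# A list of Datasets, e.g. to combine sources stored in different formats
ds_combined <- open_dataset(list(
  open_dataset("data_parquet/", format = "parquet"),
  open_dataset("data_csv/", format = "csv")
))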

schema

Schema for the Dataset. If NULL (the default), the schema will be inferred from the data sources.

partitioning

When sources is a directory path/URI, one of:

  • a Schema, in which case the file paths relative to sources will be parsed, and path segments will be matched with the schema fields. For example, schema(year = int16(), month = int8()) would create partitions for file paths like "2019/01/file.parquet", "2019/02/file.parquet", etc.

  • a character vector that defines the field names corresponding to those path segments (that is, you're providing the names that would correspond to a Schema but the types will be autodetected)

  • a HivePartitioning or HivePartitioningFactory, as returned by hive_partition(), which parses explicit or autodetected fields from Hive-style path segments

  • NULL for no partitioning

The default is to autodetect Hive-style partitions. When sources is not a directory path/URI, partitioning is ignored.
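
A sketch of these forms, assuming hypothetical directory layouts:

# Files laid out as "2019/01/file.parquet": supply the partition schema
ds <- open_dataset("data/", partitioning = schema(year = int16(), month = int8()))

# Or give only the field names and let the types be autodetected
ds <- open_dataset("data/", partitioning = c("year", "month"))

# Hive-style paths like "year=2019/month=01/file.parquet" need no argument:
# the default hive_partition() detects field names and types from the paths
ds <- open_dataset("data/")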

unify_schemas

logical: should all data fragments (files, Datasets) be scanned in order to create a unified schema from them? If FALSE, only the first fragment will be inspected for its schema. Use this fast path when you know and trust that all fragments have an identical schema. The default is FALSE when creating a dataset from a directory path/URI or vector of file paths/URIs (because there may be many files and scanning may be slow) but TRUE when sources is a list of Datasets (because there should be few Datasets in the list and their Schemas are already in memory).
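
For example (hypothetical path), to force a full scan so that the Dataset's schema is the union of all fragment schemas:

ds <- open_dataset("data/", unify_schemas = TRUE)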

...

additional arguments passed to dataset_factory() when sources is a directory path/URI or vector of file paths/URIs, otherwise ignored. These may include format to indicate the file format, or other format-specific options.
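
For example (hypothetical path), to read a directory of CSV files rather than the default Parquet:

ds <- open_dataset("data_csv/", format = "csv")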

Value

A Dataset R6 object. Use dplyr methods on it to query the data, or call $NewScan() to construct a query directly.
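
A brief sketch of both styles of use (the path is hypothetical, and the $Finish()/$ToTable() chain assumes the package's Scanner API):

ds <- open_dataset("data/")
ds$schema                               # inspect the inferred Schema
tab <- ds$NewScan()$Finish()$ToTable()  # construct and run a scan directly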

See Also

vignette("dataset", package = "arrow")


arrow: Integration to 'Apache' 'Arrow', version 4.0.0.1
Apache License (>= 2.0)
Authors: Neal Richardson [aut, cre], Ian Cook [aut], Jonathan Keane [aut], Romain François [aut] (<https://orcid.org/0000-0002-2444-4226>), Jeroen Ooms [aut], Javier Luraschi [ctb], Jeffrey Wong [ctb], Apache Arrow [aut, cph]
