Simple async client
An async client to work with many URLs, all with the same HTTP method.
See HttpClient() for information on parameters.
Returns a list of objects of class HttpResponse().
Responses are returned in the order the URLs are passed in; when printed, only the first 10 are shown.
HTTP requests mostly fail in ways that you are probably familiar with, including when the request itself is bad (a 4xx status code, e.g., 404 when the URL is not found) and when the server makes a mistake (a 5xx status code).
But requests can sometimes fail with no HTTP status code at all, and there is no agreed upon way to handle that other than to fail immediately.
When a request fails using synchronous requests (see HttpClient) you get an error that stops your code immediately, saying for example:
"Could not resolve host: https://foo.com"
"Failed to connect to foo.com"
"Resolving timed out after 10 milliseconds"
For async requests, however, we don't want to fail immediately because that would stop the subsequent requests from running. Thus, when a request fails for one of the reasons above we give back an HttpResponse object just like any other response, and:
capture the error message and put it in the content slot of the response object (so calls to content and parse() work correctly)
give back a 0 HTTP status code, which is handled specially when testing whether the request was successful, e.g., with the success() method (see the sketch below)
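A brief sketch (not run) of what this looks like in practice; the first URL below is a deliberately unresolvable host, mirroring the failure-behavior example in the Examples section:
conn <- Async$new(urls = c("http://stuffthings.gvb", "https://httpbin.org/get"))
res <- conn$get()
res[[1]]$success()      # FALSE: the request never got an HTTP response
res[[1]]$status_code    # 0
res[[1]]$parse("UTF-8") # the curl error message captured in the content slot
res[[2]]$success()      # TRUE for the request that succeeded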
urls
(character) one or more URLs
opts
any curl options
proxies
an object of class proxy, as returned by proxy()
auth
an object of class auth
headers
named list of headers
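As a rough sketch (not run) of how these arguments fit together at construction time, assuming the package's auth() and proxy() helpers; the proxy address and credentials below are hypothetical:
cc <- Async$new(
  urls = c('https://httpbin.org/get?a=5', 'https://httpbin.org/get?foo=bar'),
  opts = list(timeout_ms = 5000, verbose = TRUE),
  proxies = proxy("http://97.77.104.22:3128"),  # hypothetical proxy
  auth = auth(user = "foo", pwd = "bar"),       # hypothetical credentials
  headers = list(`X-Client` = "my-client")      # any named list of headers
)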
print()
print method for Async objects
Async$print(x, ...)
x
self
...
ignored
new()
Create a new Async object
Async$new(urls, opts, proxies, auth, headers)
A new Async object.
get()
execute the GET
HTTP verb for the urls
Async$get(path = NULL, query = list(), disk = NULL, stream = NULL, ...)
path
(character) URL path, appended to the base URL
query
(list) query terms, as a named list
disk
a path to write to; if NULL (default), memory is used.
See curl::curl_fetch_disk() for help.
stream
an R function to determine how to stream data; if NULL (default),
memory is used. See curl::curl_fetch_stream() for help.
...
curl options, only those in the acceptable set from
curl::curl_options()
except the following: httpget, httppost, post,
postfields, postfieldsize, and customrequest
\dontrun{
(cc <- Async$new(urls = c(
  'https://httpbin.org/',
  'https://httpbin.org/get?a=5',
  'https://httpbin.org/get?foo=bar'
)))
(res <- cc$get())
}
post()
execute the POST
HTTP verb for the urls
Async$post( path = NULL, query = list(), body = NULL, encode = "multipart", disk = NULL, stream = NULL, ... )
path
(character) URL path, appended to the base URL
query
(list) query terms, as a named list
body
body as an R list
encode
one of form, multipart, json, or raw
disk
a path to write to; if NULL (default), memory is used.
See curl::curl_fetch_disk() for help.
stream
an R function to determine how to stream data; if NULL (default),
memory is used. See curl::curl_fetch_stream() for help.
...
curl options, only those in the acceptable set from
curl::curl_options()
except the following: httpget, httppost, post,
postfields, postfieldsize, and customrequest
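A brief sketch (not run) of an async POST; it assumes httpbin.org's /post endpoint, which simply echoes the request body:
cc <- Async$new(urls = rep('https://httpbin.org/post', 2))
res <- cc$post(body = list(hello = "world"), encode = "json")
vapply(res, function(z) z$status_code, double(1))
lapply(res, function(z) z$parse("UTF-8"))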
put()
execute the PUT
HTTP verb for the urls
Async$put( path = NULL, query = list(), body = NULL, encode = "multipart", disk = NULL, stream = NULL, ... )
path
(character) URL path, appended to the base URL
query
(list) query terms, as a named list
body
body as an R list
encode
one of form, multipart, json, or raw
disk
a path to write to; if NULL (default), memory is used.
See curl::curl_fetch_disk() for help.
stream
an R function to determine how to stream data; if NULL (default),
memory is used. See curl::curl_fetch_stream() for help.
...
curl options, only those in the acceptable set from
curl::curl_options()
except the following: httpget, httppost, post,
postfields, postfieldsize, and customrequest
patch()
execute the PATCH
HTTP verb for the urls
Async$patch( path = NULL, query = list(), body = NULL, encode = "multipart", disk = NULL, stream = NULL, ... )
path
(character) URL path, appended to the base URL
query
(list) query terms, as a named list
body
body as an R list
encode
one of form, multipart, json, or raw
disk
a path to write to; if NULL (default), memory is used.
See curl::curl_fetch_disk() for help.
stream
an R function to determine how to stream data; if NULL (default),
memory is used. See curl::curl_fetch_stream() for help.
...
curl options, only those in the acceptable set from
curl::curl_options()
except the following: httpget, httppost, post,
postfields, postfieldsize, and customrequest
delete()
execute the DELETE
HTTP verb for the urls
Async$delete( path = NULL, query = list(), body = NULL, encode = "multipart", disk = NULL, stream = NULL, ... )
path
(character) URL path, appended to the base URL
query
(list) query terms, as a named list
body
body as an R list
encode
one of form, multipart, json, or raw
disk
a path to write to; if NULL (default), memory is used.
See curl::curl_fetch_disk() for help.
stream
an R function to determine how to stream data; if NULL (default),
memory is used. See curl::curl_fetch_stream() for help.
...
curl options, only those in the acceptable set from
curl::curl_options()
except the following: httpget, httppost, post,
postfields, postfieldsize, and customrequest
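put(), patch(), and delete() accept the same arguments as post(); a brief combined sketch (not run), assuming httpbin.org's /put, /patch, and /delete echo endpoints:
res_put    <- Async$new(urls = rep('https://httpbin.org/put', 2))$put(body = list(a = 1))
res_patch  <- Async$new(urls = rep('https://httpbin.org/patch', 2))$patch(body = list(a = 2))
res_delete <- Async$new(urls = rep('https://httpbin.org/delete', 2))$delete()
vapply(c(res_put, res_patch, res_delete), function(z) z$success(), logical(1))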
head()
execute the HEAD
HTTP verb for the urls
Async$head(path = NULL, ...)
path
(character) URL path, appended to the base URL
...
curl options, only those in the acceptable set from
curl::curl_options()
except the following: httpget, httppost, post,
postfields, postfieldsize, and customrequest
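A brief sketch (not run) of an async HEAD request, handy for checking status and headers without downloading response bodies:
cc <- Async$new(urls = c('https://httpbin.org/get', 'https://httpbin.org/'))
res <- cc$head()
vapply(res, function(z) z$success(), logical(1))
lapply(res, function(z) z$response_headers)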
verb()
execute any supported HTTP verb
Async$verb(verb, ...)
verb
(character) a supported HTTP verb: get, post, put, patch, delete, head.
...
curl options, only those in the acceptable set from
curl::curl_options()
except the following: httpget, httppost, post,
postfields, postfieldsize, and customrequest
\dontrun{
cc <- Async$new(
  urls = c(
    'https://httpbin.org/',
    'https://httpbin.org/get?a=5',
    'https://httpbin.org/get?foo=bar'
  )
)
(res <- cc$verb('get'))
lapply(res, function(z) z$parse("UTF-8"))
}
clone()
The objects of this class are cloneable with this method.
Async$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other async:
AsyncQueue, AsyncVaried, HttpRequest
## Not run:
cc <- Async$new(
  urls = c(
    'https://httpbin.org/',
    'https://httpbin.org/get?a=5',
    'https://httpbin.org/get?foo=bar'
  )
)
cc
(res <- cc$get())
res[[1]]
res[[1]]$url
res[[1]]$success()
res[[1]]$status_http()
res[[1]]$response_headers
res[[1]]$method
res[[1]]$content
res[[1]]$parse("UTF-8")
lapply(res, function(z) z$parse("UTF-8"))

# curl options/headers with async
urls = c(
  'https://httpbin.org/',
  'https://httpbin.org/get?a=5',
  'https://httpbin.org/get?foo=bar'
)
cc <- Async$new(urls = urls,
  opts = list(verbose = TRUE),
  headers = list(foo = "bar")
)
cc
(res <- cc$get())

# using auth with async
dd <- Async$new(
  urls = rep('https://httpbin.org/basic-auth/user/passwd', 3),
  auth = auth(user = "foo", pwd = "passwd"),
  opts = list(verbose = TRUE)
)
dd
res <- dd$get()
res
vapply(res, function(z) z$status_code, double(1))
vapply(res, function(z) z$success(), logical(1))
lapply(res, function(z) z$parse("UTF-8"))

# failure behavior
## e.g. when a URL doesn't exist, a timeout, etc.
urls <- c("http://stuffthings.gvb", "https://foo.com", "https://httpbin.org/get")
conn <- Async$new(urls = urls)
res <- conn$get()
res[[1]]$parse("UTF-8") # a failure
res[[2]]$parse("UTF-8") # a failure
res[[3]]$parse("UTF-8") # a success

## End(Not run)

## ------------------------------------------------
## Method `Async$get`
## ------------------------------------------------

## Not run:
(cc <- Async$new(urls = c(
  'https://httpbin.org/',
  'https://httpbin.org/get?a=5',
  'https://httpbin.org/get?foo=bar'
)))
(res <- cc$get())

## End(Not run)

## ------------------------------------------------
## Method `Async$verb`
## ------------------------------------------------

## Not run:
cc <- Async$new(
  urls = c(
    'https://httpbin.org/',
    'https://httpbin.org/get?a=5',
    'https://httpbin.org/get?foo=bar'
  )
)
(res <- cc$verb('get'))
lapply(res, function(z) z$parse("UTF-8"))

## End(Not run)