Async Multi Download
AJAX style concurrent requests, possibly using HTTP/2 multiplexing. Results are only available via callback functions. Advanced use only!
Usage

multi_add(handle, done = NULL, fail = NULL, data = NULL, pool = NULL)
multi_run(timeout = Inf, poll = FALSE, pool = NULL)
multi_set(total_con = 50, host_con = 6, multiplex = TRUE, pool = NULL)
multi_list(pool = NULL)
multi_cancel(handle)
new_pool(total_con = 100, host_con = 6, multiplex = TRUE)
multi_fdset(pool = NULL)
Arguments

handle: a curl handle with preconfigured url option.

done: callback function for completed request. Single argument with response data in same structure as curl_fetch_memory.

fail: callback function called on failed request. Argument contains error message.

data: (advanced) callback function, file path, or connection object for writing incoming data. This callback should only be used for streaming applications, where small pieces of incoming data get written before the request has completed. The signature for the callback function is function(data, final = FALSE).

pool: a multi handle created by new_pool. Default uses a global pool. (See the sketch after this list.)

timeout: max time in seconds to wait for results. Use 0 to poll for results without waiting at all.

poll: if TRUE then return as soon as the first request has completed, instead of waiting for all requests to finish.

total_con: max total concurrent connections.

host_con: max concurrent connections per host.

multiplex: enable HTTP/2 multiplexing if supported by host and client.
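As an illustration, here is a minimal sketch of routing requests through a dedicated pool instead of the global one; the URL and connection limits are arbitrary examples:

library(curl)

# A hypothetical dedicated pool with conservative limits
pool <- new_pool(total_con = 10, host_con = 2, multiplex = TRUE)

h <- new_handle(url = "https://eu.httpbin.org/get")
multi_add(h,
  done = function(res) cat("HTTP", res$status_code, "\n"),
  fail = function(msg) message("failed: ", msg),
  pool = pool)

# Settings of an existing pool can be adjusted later
multi_set(total_con = 10, host_con = 1, multiplex = TRUE, pool = pool)

multi_run(pool = pool)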
Details

Requests are created in the usual way using a curl handle and added to the scheduler with multi_add. This function returns immediately and does not perform the request yet. The user needs to call multi_run, which performs all scheduled requests concurrently. It returns when all requests have completed, or in case of a timeout or SIGINT (e.g. if the user presses ESC or CTRL+C in the console). In the latter case, simply call multi_run again to resume pending requests.
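For example, a small sketch of the add-then-run workflow, including resuming after a timeout (the URL is illustrative):

library(curl)

h <- new_handle(url = "https://eu.httpbin.org/delay/3")
multi_add(h,
  done = function(res) cat("completed:", res$url, "\n"),
  fail = function(msg) message("failed: ", msg))

# Returns after about one second, with the request still pending
multi_run(timeout = 1)

# Call again to resume and wait for the remaining requests
multi_run()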
When the request succeeds, the done callback gets triggered with the response data. The structure of this data is identical to that of curl_fetch_memory. When the request fails, the fail callback is triggered with an error message. Note that failure here means something went wrong in performing the request, such as a connection failure; it does not check the HTTP status code. Just like with curl_fetch_memory, the user has to implement that application logic.
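As a sketch of such application logic, the done callback below checks the status code itself (the URL is illustrative):

library(curl)

h <- new_handle(url = "https://eu.httpbin.org/status/418")
multi_add(h,
  done = function(res) {
    # 'done' also fires for HTTP error codes such as 404 or 500
    if (res$status_code >= 400) {
      message("server returned HTTP ", res$status_code)
    } else {
      cat(rawToChar(res$content))
    }
  },
  fail = function(msg) message("transport error: ", msg))
multi_run()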
Raising an error within a callback function stops execution of that function but does not affect other requests.
A single handle cannot be used for multiple simultaneous requests. However, it is possible to add new requests to a pool while it is running, so you can re-use a handle within the callback of a request from that same handle. It is up to the user to make sure the same handle is not used in concurrent requests.
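For example, a hypothetical polling pattern that re-adds the same handle from within its own done callback; this is safe because the previous request has completed by the time the callback runs:

library(curl)

h <- new_handle(url = "https://eu.httpbin.org/get")
count <- 0

on_done <- function(res) {
  count <<- count + 1
  cat("response", count, "- HTTP", res$status_code, "\n")
  # The handle is free again, so it may be scheduled once more
  if (count < 3)
    multi_add(h, done = on_done, fail = function(msg) message(msg))
}

multi_add(h, done = on_done, fail = function(msg) message(msg))
multi_run()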
The multi_cancel function can be used to cancel a pending request. It has no effect if the request was already completed or canceled.
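A small sketch (the URL is illustrative):

library(curl)

h <- new_handle(url = "https://eu.httpbin.org/delay/10")
multi_add(h, done = function(res) cat("completed\n"))

multi_list()     # the pending handle shows up in the pool
multi_cancel(h)  # remove it again; no effect if it already finished
multi_list()     # the pool no longer contains the handle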
The multi_fdset function returns the file descriptors that curl is currently polling, along with a timeout parameter: the maximum number of milliseconds an application should wait before proceeding. It is equivalent to the curl_multi_fdset and curl_multi_timeout calls. It is handy for applications that expect input (or write output) through both curl and other file descriptors.
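A small sketch, assuming a transfer has already been kicked off so that curl has descriptors to report:

library(curl)

multi_add(new_handle(url = "https://eu.httpbin.org/delay/3"))

# Start the transfer without blocking, then ask what to wait for
multi_run(timeout = 0)
fds <- multi_fdset()
str(fds)  # descriptors to watch, plus the max wait time in milliseconds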
Examples

results <- list()
success <- function(x){
  results <<- append(results, list(x))
}
failure <- function(str){
  cat(paste("Failed request:", str), file = stderr())
}

# This handle will take longest (3sec)
h1 <- new_handle(url = "https://eu.httpbin.org/delay/3")
multi_add(h1, done = success, fail = failure)

# This handle writes data to a file
con <- file("output.txt")
h2 <- new_handle(url = "https://eu.httpbin.org/post", postfields = "bla bla")
multi_add(h2, done = success, fail = failure, data = con)

# This handle raises an error
h3 <- new_handle(url = "https://urldoesnotexist.xyz")
multi_add(h3, done = success, fail = failure)

# Actually perform the requests
multi_run(timeout = 2)
multi_run()

# Check the file
readLines("output.txt")
unlink("output.txt")