Conversation

@boshek (Contributor) commented May 18, 2022

This PR enables reading/writing compressed data streams over S3 and locally, and adds tests covering some of those round trips. For the filesystem path I had to do a little regex on the string for compression detection, but any feedback on alternative approaches is very welcome. Previously, supplying a file with a compression extension wrote out an uncompressed file. Here is a reprex of the updated writing behaviour:

library(arrow, warn.conflicts = FALSE)
## local
file <- tempfile(fileext = ".csv")
comp_file <- tempfile(fileext = ".csv.gz")
write_csv_arrow(mtcars, file = file)
write_csv_arrow(mtcars, file = comp_file)
file.size(file)
#> [1] 1303
file.size(comp_file)
#> [1] 567

## or with s3
dir <- tempfile()
dir.create(dir)
subdir <- file.path(dir, "bucket")
dir.create(subdir)

minio_server <- processx::process$new("minio", args = c("server", dir), supervise = TRUE)
Sys.sleep(2)
stopifnot(minio_server$is_alive())

s3_uri <- "s3://minioadmin:minioadmin@?scheme=http&endpoint_override=localhost%3A9000"
bucket <- s3_bucket(s3_uri)

write_csv_arrow(mtcars, bucket$path("bucket/data.csv.gz"))
write_csv_arrow(mtcars, bucket$path("bucket/data.csv"))

file.size(file.path(subdir, "data.csv.gz"))
#> [1] 567
file.size(file.path(subdir, "data.csv"))
#> [1] 1303
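For reference, the extension-to-codec detection this behaviour relies on can be sketched in base R. This is a minimal illustration mirroring the `switch()` used in `r/R/io.R`, not the PR's exact code:

```r
# Map a file extension to an Arrow codec name; anything unrecognized
# (including "csv" or "snappy") falls through to "uncompressed".
detect_compression <- function(path) {
  switch(tools::file_ext(path),
    bz2 = "bz2",
    gz = "gzip",
    lz4 = "lz4",
    zst = "zstd",
    "uncompressed"
  )
}

detect_compression("data.csv.gz")
#> [1] "gzip"
detect_compression("data.csv")
#> [1] "uncompressed"
```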

@github-actions

Thanks for opening a pull request!

If this is not a minor PR, could you open an issue for this pull request on JIRA? https://issues.apache.org/jira/browse/ARROW

Opening JIRAs ahead of time contributes to the Openness of the Apache Arrow project.

Then could you also rename pull request title in the following format?

ARROW-${JIRA_ID}: [${COMPONENT}] ${SUMMARY}

or

MINOR: [${COMPONENT}] ${SUMMARY}


@boshek boshek changed the title Arrow 16144: [R] Write compressed data streams (particularly over S3) ARROW-16144: [R] Write compressed data streams (particularly over S3) May 18, 2022
@github-actions

⚠️ Ticket has not been started in JIRA, please click 'Start Progress'.

@boshek boshek marked this pull request as ready for review May 18, 2022 04:57
@nealrichardson (Member) left a comment

This is a nice addition

 }

-make_output_stream <- function(x, filesystem = NULL) {
+make_output_stream <- function(x, filesystem = NULL, compression = NULL) {
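For context, with the new `compression` argument the raw output stream can be wrapped in a `CompressedOutputStream` before writing. A hedged sketch of that wrapping, assuming arrow's `CompressedOutputStream$create()` and not reproducing the PR's exact code:

```r
library(arrow, warn.conflicts = FALSE)

tf <- tempfile(fileext = ".csv.gz")
sink <- FileOutputStream$create(tf)
# Wrap the raw stream so everything written through it is gzip-compressed
gz_sink <- CompressedOutputStream$create(sink, codec = "gzip")
write_csv_arrow(mtcars, gz_sink)
gz_sink$close()
```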
@nealrichardson (Member)

One thing to watch out for here: sometimes people name their Parquet files something.parquet.snappy, but you wouldn't use a CompressedOutputStream for that; you'd pass the compression option to the Parquet writer itself. I would guess that the make_readable_file() path handles this already, so maybe that can be a model (or maybe it doesn't and needs to).
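To illustrate the distinction (a hedged sketch assuming the arrow package): for Parquet, snappy compression is applied inside the file format itself, so it goes through the writer's `compression` argument rather than a stream wrapper:

```r
library(arrow, warn.conflicts = FALSE)

tf <- tempfile(fileext = ".parquet")
# Compression is a property of the Parquet writer, not of the sink,
# so no CompressedOutputStream is involved here.
write_parquet(data.frame(x = 1:5), tf, compression = "snappy")
read_parquet(tf)
```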

@boshek (Contributor, Author)

So for the .parquet.snappy (or even .snappy.parquet) case, I think it works because "snappy" isn't included here:

arrow/r/R/io.R

Lines 325 to 330 in 3df2e05

switch(tools::file_ext(path),
  bz2 = "bz2",
  gz = "gzip",
  lz4 = "lz4",
  zst = "zstd",
  "uncompressed"
)

But if someone tries something like this, we get an error that isn't super informative. I think this is outside the scope of this PR, so could the resolution here be to open another ticket for this specifically?

library(arrow, warn.conflicts = FALSE)
tf <- tempfile(fileext = ".parquet.gz")
write_parquet(data.frame(x = 1:5), tf, compression = "gzip", compression_level = 5)
read_parquet(tf)
#> Error: file must be a "RandomAccessFile"
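The fallback behaviour above follows from `tools::file_ext()` only looking at the final extension, which is easy to check in base R:

```r
# Only the final extension is inspected, so "snappy" (absent from the
# switch) falls through to the "uncompressed" default, while "gz" maps
# to gzip and triggers the compressed-stream path.
tools::file_ext("data.parquet.snappy")
#> [1] "snappy"
tools::file_ext("data.parquet.gz")
#> [1] "gz"
```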

@nealrichardson (Member) commented May 18, 2022

Sure, this fails on master too, so it's OK to make a separate JIRA (please link to it here when you make it).

@ursabot

ursabot commented May 19, 2022

Benchmark runs are scheduled for baseline = ce4dcbd and contender = d2cbe9e. d2cbe9e is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Conbench compare runs links:
[Finished ⬇️0.0% ⬆️0.0%] ec2-t3-xlarge-us-east-2
[Failed ⬇️1.29% ⬆️0.0%] test-mac-arm
[Failed ⬇️0.0% ⬆️0.0%] ursa-i9-9960x
[Finished ⬇️0.28% ⬆️0.08%] ursa-thinkcentre-m75q
Buildkite builds:
[Finished] d2cbe9e0 ec2-t3-xlarge-us-east-2
[Failed] d2cbe9e0 test-mac-arm
[Failed] d2cbe9e0 ursa-i9-9960x
[Finished] d2cbe9e0 ursa-thinkcentre-m75q
[Finished] ce4dcbdf ec2-t3-xlarge-us-east-2
[Failed] ce4dcbdf test-mac-arm
[Failed] ce4dcbdf ursa-i9-9960x
[Finished] ce4dcbdf ursa-thinkcentre-m75q
Supported benchmarks:
ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
test-mac-arm: Supported benchmark langs: C++, Python, R
ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java

kou pushed a commit that referenced this pull request Feb 20, 2023
…Hub issue numbers (#34260)

Rewrite the JIRA issue numbers to the GitHub issue numbers, so that pkgdown's auto-linking feature automatically links them to the issues.

Issue numbers have been rewritten based on the following correspondence.
The pkgdown settings have also been updated to link to GitHub.

I generated the Changelog page using the `pkgdown::build_news()` function and verified that the links work correctly.

---
ARROW-6338	#5198
ARROW-6364	#5201
ARROW-6323	#5169
ARROW-6278	#5141
ARROW-6360	#5329
ARROW-6533	#5450
ARROW-6348	#5223
ARROW-6337	#5399
ARROW-10850	#9128
ARROW-10624	#9092
ARROW-10386	#8549
ARROW-6994	#23308
ARROW-12774	#10320
ARROW-12670	#10287
ARROW-16828	#13484
ARROW-14989	#13482
ARROW-16977	#13514
ARROW-13404	#10999
ARROW-16887	#13601
ARROW-15906	#13206
ARROW-15280	#13171
ARROW-16144	#13183
ARROW-16511	#13105
ARROW-16085	#13088
ARROW-16715	#13555
ARROW-16268	#13550
ARROW-16700	#13518
ARROW-16807	#13583
ARROW-16871	#13517
ARROW-16415	#13190
ARROW-14821	#12154
ARROW-16439	#13174
ARROW-16394	#13118
ARROW-16516	#13163
ARROW-16395	#13627
ARROW-14848	#12589
ARROW-16407	#13196
ARROW-16653	#13506
ARROW-14575	#13160
ARROW-15271	#13170
ARROW-16703	#13650
ARROW-16444	#13397
ARROW-15016	#13541
ARROW-16776	#13563
ARROW-15622	#13090
ARROW-18131	#14484
ARROW-18305	#14581
ARROW-18285	#14615
* Closes: #33631

Authored-by: SHIMA Tatsuya <[email protected]>
Signed-off-by: Sutou Kouhei <[email protected]>
