Describe the bug
When copying a large path from an S3-based cache, the nix binary appears to buffer the entire download into memory.
When the path is large enough (above approximately 3.5 GB), this also reliably causes Nix to segfault.
I can only reproduce this with an S3-based Nix cache. If I use a plain HTTP cache, memory usage stays low and constant (see the comparison sketch after the reproduction steps).
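A rough way to observe the buffering is to watch the peak resident set size of the copy. This is only a sketch: it assumes GNU time is installed at /usr/bin/time, and <the s3 cache> is a placeholder for the real store URL.
# With the S3 cache, peak RSS grows to roughly the size of the path being copied;
# against an HTTP cache it stays small and roughly constant.
/usr/bin/time -v nix copy --to local --from <the s3 cache> $path 2>&1 | grep 'Maximum resident set size'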
Steps To Reproduce
cd $(mktemp -d)
dd if=/dev/urandom of=./random_4g.bin bs=1M count=4096
path=$(nix store add-path . --name large-random)
# copy path to s3 store
nix copy --from local --to <the s3 cache> $path
# delete it from the local store
nix store delete $path
# attempt to copy from s3 store
nix copy --to local --from <the s3 cache> $path
# experience segfault
# 74861 segmentation fault nix copy --to local /nix/store/rv559vmhs7751xizmfnxk5bwyjhfizpa-large-random
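For comparison, the same path served from a plain HTTP binary cache copies without the memory growth. A minimal sketch of that comparison, with an arbitrary local cache path and port:
# populate a file-based cache and serve it over plain HTTP
nix copy --to file:///tmp/http-cache $path
python3 -m http.server 8080 --directory /tmp/http-cache &
nix store delete $path
# copying from the HTTP cache keeps memory usage low and constant
nix copy --from http://localhost:8080 $path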
Expected behavior
Nix uses a fixed amount of memory and does not segfault.
Metadata
nix-env (Nix) 2.22.0
I have also experienced this with Nix 2.25.0.
Additional context
Checklist
- checked latest Nix manual (source)
- checked open bug issues and pull requests for possible duplicates
Add 👍 to issues you find important.