fuse passthrough: fix oom when running huge images #1923
Conversation
Could you rebase this on the latest main branch to enable CI?
Done, and #1905 has also been rebased and is ready to be merged.
fs/reader/reader.go
return fmt.Errorf("failed to seek to end of file: %w", err)
}

if _, err := file.Write(ip); err != nil {
Blobs in the cache shouldn't be modified directly. The blob should be added to the cache after the entire contents becoming available.
I have considered two potential solutions:
- Cache all chunks locally using cacheData, then merge them into a single file via cache.Get -> ReadAt -> file.Write. However, this approach incurs significant overhead and leaves redundant files behind.
- Limit the specifications of the images being used. Before calling combinedBuffer.Write, we can check for potential OOM risk; if one is detected, report an error and exit immediately.
I was wondering if there are any other solutions you would recommend?
cache.BlobCache.Add() returns a writer, and you can write chunks to that writer. Once the entire data has been written, call the cache.Writer.Commit() method and the blob is added to the cache. So can we use cache.Writer instead of *os.File?
done
fs/reader/reader.go
}
defer w.Close()

seeker, ok := w.(io.Seeker)
Do we really need to seek? If it's guaranteed that we only append data to the cached blob, can we do this without seeking? Or are there other reasons to do seeking?
Oh, indeed, that was a small mistake on my part; I have fixed it. Done.
Signed-off-by: abushwang <[email protected]>
Thanks
During testing of passthrough with large images, we observed that the containerd-stargz-grpc process was terminated. The root cause is that the content to be merged is buffered entirely in bufPool, which risks exhausting memory.
This commit modifies the implementation to directly write the chunks obtained from prefetchEntireFile to disk, thereby mitigating the risk of running out of memory.