Fuzz coverage error paths #344
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
See 5 files with indirect coverage changes.
```rust
        data.len(),
    )
    // Small enough to hit interesting cases, but large enough to hit the fast path
    let chunk_size = 64;
```
Would varying the chunk size make sense? Non-power-of-two chunk sizes in particular seem like they could be useful for finding edge cases.
It might, but I don't see how we could do that in a deterministic way and continue to use the compression corpus.
You could take the chunk size from the fuzz input, right? For example, interpreting the fuzz input as a series of records, each a 1-byte chunk size followed by n bytes of chunk data.
Would that not mess with how effective the corpus is?
You could reformat the existing corpus into this format, right? In any case, I guess this is better left as a future improvement.
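The encoding suggested above could be parsed roughly as follows. This is a minimal sketch, not code from the zlib-rs fuzzers: `split_chunks` is a hypothetical helper, and the clamping rule (bump a zero size byte to 1, cap at the remaining data) is an assumption made so that any input splits deterministically and the loop always advances.

```rust
/// Hypothetical helper: interpret the raw fuzz input as a series of
/// records, each a 1-byte chunk size followed by that many bytes of
/// chunk data. The size is clamped to [1, remaining] so malformed
/// inputs still make progress and the split stays deterministic.
fn split_chunks(input: &[u8]) -> Vec<Vec<u8>> {
    let mut chunks = Vec::new();
    let mut rest = input;
    while let Some((&size, data)) = rest.split_first() {
        if data.is_empty() {
            break;
        }
        let take = (size as usize).clamp(1, data.len());
        let (chunk, tail) = data.split_at(take);
        chunks.push(chunk.to_vec());
        rest = tail;
    }
    chunks
}

fn main() {
    // A 3-byte chunk [1, 2, 3], then a 2-byte chunk [9, 9].
    let chunks = split_chunks(&[3, 1, 2, 3, 2, 9, 9]);
    assert_eq!(chunks, vec![vec![1u8, 2, 3], vec![9, 9]]);
    // A size byte of 0 is bumped to 1 so no input stalls the parser.
    assert_eq!(split_chunks(&[0, 5]), vec![vec![5u8]]);
}
```

Under this scheme, reformatting the existing corpus would amount to prefixing each file's chunks with the corresponding size bytes.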
This makes three changes to the uncompress fuzzer and renames `uncompress2` to `uncompress` (the old `uncompress` fuzzer was removed). With those changes we get much better coverage of `inflate.rs`:
https://app.codecov.io/github/trifectatechfoundation/zlib-rs/commit/291a345601f0e817b1fb2a385e3df400e576662b/blob/zlib-rs/src/inflate.rs?flags%5B0%5D=fuzz-decompress&dropdown=coverage
cc @inahga