# [precompile] sync master and rollback partial of mainflow change #963
**Merged**: hero78119 merged 66 commits into scroll-tech:tianyi/refactor-prover from hero78119:feat/merge_to_master on Jun 6, 2025.
## Conversation
Benchmarks show that quite a lot of time is spent in glibc `free` (drop) when objects reach the end of their scope. Following openvm, use [jemalloc](https://github.com/openvm-org/openvm/blob/c771a213f5e7f0732e0ddbafb273e15d99c5049d/crates/vm/Cargo.toml#L56) as the global allocator, and set the jemalloc parameters following https://github.com/openvm-org/openvm/blob/c771a213f5e7f0732e0ddbafb273e15d99c5049d/.github/workflows/benchmark-call.yml#L218

> I do not use jemalloc's `background_thread: true`, as I thought a background thread might take scheduling time away from a CPU-intensive program.

### Change scope
- enable jemalloc by default when compiling ceno_cli
- support `cargo make cli` to install ceno_cli
- introduce a "jemalloc" feature

### Benchmark
Benchmarked on an AMD EPYC 32-core machine with the command `JEMALLOC_SYS_WITH_MALLOC_CONF="retain:true,metadata_thp:always,thp:always,dirty_decay_ms:-1,muzzy_decay_ms:-1,abort_conf:true" cargo bench --bench fibonacci --features jemalloc --package ceno_zkvm -- --baseline opt-baseline`

| Benchmark | Average Time | Improvement | Throughput (instructions/sec) |
|-----------------|--------------|-------------|---------------------------|
| fibonacci 2^20 | 2.0020 s | -14.74% | 523.76k |
| fibonacci 2^21 | 3.5903 s | -18.89% | 584.34k |
| fibonacci 2^22 | 6.6531 s | -24.69% | 630.28k |

---------
Co-authored-by: Zhang Zhuo <[email protected]>
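A minimal sketch of wiring a global allocator behind a cargo feature, as described above. The crate choice `tikv-jemallocator` and the exact feature gating are assumptions for illustration, not necessarily what the PR does:

```rust
// Hypothetical sketch: opt into jemalloc via a "jemalloc" cargo feature.
// Assumed Cargo.toml entries:
//   [features]
//   jemalloc = ["dep:tikv-jemallocator"]
//   [dependencies]
//   tikv-jemallocator = { version = "0.6", optional = true }

#[cfg(feature = "jemalloc")]
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

fn main() {
    // Allocation/deallocation-heavy work (e.g. dropping large witnesses)
    // now goes through jemalloc instead of glibc malloc/free.
    let big: Vec<Vec<u64>> = (0..1_000u64).map(|i| vec![i; 1_024]).collect();
    println!("allocated {} rows", big.len());
} // `big` is dropped here; with jemalloc the bulk frees are cheaper.
```

The `JEMALLOC_SYS_WITH_MALLOC_CONF` variable used in the benchmark command is a build-time setting for the jemalloc sys crate, which bakes the listed tuning options in as the default malloc configuration.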
## Motivation
We want to unify the prover's workflow for opcode circuits and table
circuits, as they follow the same kind of workflow, i.e.
1. infer the tower witness;
2. run the tower prover;
3. run the main sumcheck, which is optional for table circuits.

Before this PR, an **opcode** circuit included multiple
read/write/logup records in a **single** tower, while a **table** circuit
packed read/write/logup records into one dedicated tower for each
read/write/logup expression. We found that the way the table circuit
builds its tower tree is better than the opcode circuit's approach.
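A minimal sketch of the shared three-step flow, assuming both circuit kinds can be driven by one routine; the type and function names are illustrative, not the actual ceno API:

```rust
// Illustrative only: the unified flow for opcode and table circuits.
struct TowerWitness;
struct TowerProof;
struct MainSumcheckProof;

fn prove_circuit(needs_main_sumcheck: bool) -> (TowerProof, Option<MainSumcheckProof>) {
    // 1. infer tower witness
    let witness = infer_tower_witness();
    // 2. run tower prover
    let tower_proof = run_tower_prover(&witness);
    // 3. run main sumcheck (optional, e.g. table circuits may skip it)
    let main_proof = needs_main_sumcheck.then(|| run_main_sumcheck(&witness));
    (tower_proof, main_proof)
}

fn infer_tower_witness() -> TowerWitness { TowerWitness }
fn run_tower_prover(_w: &TowerWitness) -> TowerProof { TowerProof }
fn run_main_sumcheck(_w: &TowerWitness) -> MainSumcheckProof { MainSumcheckProof }
```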
## Performance
| benchmark | proof size (MB) | proving time change |
|------------|-----------------|-------------|
| fibonacci 2^20 | 1.14 -> 1.2 (5%) | -0.8% |
| fibonacci 2^21 | 1.22 -> 1.28 (5%) | -5% |
| fibonacci 2^22 | 1.3 -> 1.37 (5%) | -10%|
**New issue**: The proof size increase is because we have more `ProdSpec`
and `LogupSpec` entries, which implies more points and evaluations in `struct
TowerProof`. Note that after we abandoned the old "interleaving" method,
the number of rounds per product spec and logup spec is now the same,
so we can remove this new overhead in a follow-up PR.
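A back-of-the-envelope illustration of why more specs grow the proof; the layout and parameters below are assumptions for illustration, not the real `TowerProof`:

```rust
// Illustrative only: count field elements contributed by the tower proof,
// assuming each product/logup spec adds `rounds` sumcheck messages of
// `degree + 1` evaluations each. More specs => proportionally more elements.
fn approx_tower_proof_elems(num_specs: usize, rounds: usize, degree: usize) -> usize {
    num_specs * rounds * (degree + 1)
}

fn main() {
    let before = approx_tower_proof_elems(2, 20, 3); // one combined tower
    let after = approx_tower_proof_elems(6, 20, 3);  // dedicated tower per expression
    println!("before: {before} elems, after: {after} elems");
}
```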
## Impact
Blocker for scroll-tech#923.
---------
Co-authored-by: sm.wu <[email protected]>
To serve various purposes, e.g. benchmarking.
…oll-tech#954)

### Change Scope
- [x] example run failed in e2e: https://github.com/scroll-tech/ceno/blob/ef93198c83e3b4fcd7f9949ebbc07bc9c93e4de9/examples/examples/hashing.rs#L16 In e2e we only supported hints as individual u32 items written one by one, but some examples require the hint as a whole vector, so those guest programs always failed because the hint could not be served properly (a sketch of one possible bridging scheme follows below).
- [x] move most verbose messages from `info` to `trace/debug` so the default e2e output is cleaner
- [x] more comments and a polished readme

---------
Co-authored-by: Akase Haruka <[email protected]>
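A purely illustrative sketch (hypothetical helpers, not the ceno hint API) of bridging a channel that carries one u32 at a time when the guest needs a whole vector: length-prefix the data on the host side and reassemble it on the guest side.

```rust
// Host side: encode a byte vector as a length-prefixed stream of u32 words.
fn encode_hint(bytes: &[u8]) -> Vec<u32> {
    let mut words = vec![bytes.len() as u32];
    words.extend(bytes.chunks(4).map(|c| {
        let mut buf = [0u8; 4];
        buf[..c.len()].copy_from_slice(c);
        u32::from_le_bytes(buf)
    }));
    words
}

// Guest side: rebuild the original vector from the same word stream.
fn decode_hint(words: &[u32]) -> Vec<u8> {
    let len = words[0] as usize;
    let mut bytes: Vec<u8> = words[1..].iter().flat_map(|w| w.to_le_bytes()).collect();
    bytes.truncate(len);
    bytes
}

fn main() {
    let msg = b"hello hints".to_vec();
    assert_eq!(decode_hint(&encode_hint(&msg)), msg);
}
```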
…-tech#956) Extracted from scroll-tech#952. We observed a bottleneck in the previous interpolation, which contributed most of the time due to the `vector.extend` operation and a bunch of allocations. This PR rewrites univariate extrapolation:
1. since the points to be interpolated are a fixed set, we can pre-compute everything that requires a field inverse;
2. the update is done in-place to avoid allocations.

### Benchmark
In the Ceno opcode main sumcheck we batch the different degree > 1 expressions into one batch, so this function is used there. It shows a slight improvement (~3%) on the Fibonacci 2^24 e2e.

| Benchmark | Median Time (s) | Median Change (%) |
|----------------------------------|------------------|--------------------|
| fibonacci_max_steps_1048576 | 2.3978 | +0.9805% (no significant change) |
| fibonacci_max_steps_2097152 | 4.2579 | +1.7587% (change within noise) |
| fibonacci_max_steps_4194304 | 7.7561 | -3.5338% |
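A minimal sketch of the idea (not the actual ceno code): because the sample points are the fixed set {0, 1, ..., n-1}, the Lagrange denominators can be inverted once up front and reused for every extrapolation. `f64` stands in for the field type, and the in-place buffer reuse from the PR is omitted for brevity.

```rust
// Extrapolate a univariate polynomial given its evaluations at 0..n,
// with the per-point inverse denominators precomputed once.
struct FixedPointExtrapolator {
    // 1 / prod_{j != i} (x_i - x_j), precomputed for x_i = i.
    inv_denoms: Vec<f64>,
}

impl FixedPointExtrapolator {
    fn new(num_points: usize) -> Self {
        let inv_denoms = (0..num_points)
            .map(|i| {
                let denom: f64 = (0..num_points)
                    .filter(|&j| j != i)
                    .map(|j| i as f64 - j as f64)
                    .product();
                1.0 / denom
            })
            .collect();
        Self { inv_denoms }
    }

    // Evaluate the unique degree-(n-1) polynomial through (i, evals[i]) at z.
    // Assumes z lies outside the sample set (i.e. genuine extrapolation).
    fn extrapolate(&self, evals: &[f64], z: f64) -> f64 {
        let n = evals.len();
        // prod_j (z - x_j); each Lagrange term divides this by (z - x_i).
        let full: f64 = (0..n).map(|j| z - j as f64).product();
        (0..n)
            .map(|i| evals[i] * self.inv_denoms[i] * full / (z - i as f64))
            .sum()
    }
}

fn main() {
    // p(x) = x^2 + 1 sampled at x = 0, 1, 2; extrapolating to x = 5 gives 26.
    let ex = FixedPointExtrapolator::new(3);
    let evals = [1.0, 2.0, 5.0];
    println!("p(5) = {}", ex.extrapolate(&evals, 5.0));
}
```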
Build on top of scroll-tech#956 to address review comments: clean up the points from the sumcheck proof, as the verifier should derive them itself.
Refactor univariate interpolation into barycentric and unrolled versions. Cross-reference: scroll-tech/ceno-recursion-verifier#6.
…into feat/merge_to_master
spherel (Member) approved these changes on Jun 6, 2025 and left a comment:
LGTM!
### Change
This PR syncs with ceno master and rolls back part of the changes to ensure the ceno mainflow benchmark is not affected.

### Benchmark against master