feat: support codecv8 #54
Walkthrough

This update introduces support for codec version 8 (CodecV8) and the "feynman" hardfork. It adds a new `DACodecV8` codec type.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App
    participant CodecSelector
    participant DACodecV7
    participant DACodecV8
    App->>CodecSelector: Request codec for block number & timestamp
    CodecSelector->>CodecSelector: Check IsFeynman(timestamp)
    alt Feynman active (CodecV8)
        CodecSelector->>DACodecV8: Instantiate with forcedVersion=CodecV8
        CodecSelector-->>App: Return DACodecV8
    else EuclidV2 active (CodecV7)
        CodecSelector->>DACodecV7: Instantiate with forcedVersion=CodecV7
        CodecSelector-->>App: Return DACodecV7
    else
        CodecSelector->>DACodecV7: Instantiate default
        CodecSelector-->>App: Return DACodecV7
    end
    App->>Codec: CompressScrollBatchBytes(batchBytes)
    alt CodecV7
        Codec->>zstd: CompressScrollBatchBytesLegacy(batchBytes)
        zstd-->>Codec: CompressedData or error
    else CodecV8
        Codec->>zstd: CompressScrollBatchBytesStandard(batchBytes)
        zstd-->>Codec: CompressedData or error
    end
    Codec-->>App: CompressedData or error
```
Actionable comments posted: 3
🔭 Outside diff range comments (1)
encoding/codecv7.go (1)
307-318: ⚠️ Potential issue — Blob size estimation ignores compression branch

When `enableCompression == true` we append `compressedPayloadBytes`, but still return `blobEnvelopeV7OffsetPayload + uint64(len(payloadBytes))`. `payloadBytes` is the uncompressed length, so the estimate can overshoot and lead to gas over-estimation.

```diff
-	return blobEnvelopeV7OffsetPayload + uint64(len(payloadBytes)), calculatePaddedBlobSize(uint64(len(blobBytes))), nil
+	return blobEnvelopeV7OffsetPayload + uint64(len(blobBytes)-blobEnvelopeV7OffsetPayload), // actual payload length
+		calculatePaddedBlobSize(uint64(len(blobBytes))),
+		nil
```
🧹 Nitpick comments (3)
encoding/codecv8.go (1)
3-14: Add compile-time interface assertion for new codec

`DACodecV8` relies entirely on embedded `DACodecV7` methods. A compile-time guard makes accidental interface drift visible during `go vet`.

```diff
 package encoding

 type DACodecV8 struct {
 	DACodecV7
 }

+// Compile-time safety: ensure we still satisfy the Codec interface.
+var _ Codec = (*DACodecV8)(nil)
+
 func NewDACodecV8() *DACodecV8 {
 	v := CodecV8
 	return &DACodecV8{
 		DACodecV7: DACodecV7{
 			forcedVersion: &v,
 		},
 	}
 }
```

encoding/codecv8_test.go (2)
61-64: Run sub-tests in parallel

Add `t.Parallel()` inside each `t.Run` body to let the cases execute concurrently and speed the suite up.
247-266: Minor: filter the slice up-front instead of using a manual `j` counter

Instead of manually skipping L1 messages with an auxiliary counter, filter the slice up-front:

```go
var l2Txs []*types.TransactionData
for _, tx := range block.Transactions {
	if tx.Type != types.L1MessageTxType {
		l2Txs = append(l2Txs, tx)
	}
}
require.Equal(t, len(l2Txs), len(txDataDecoded))
for i := range l2Txs {
	assertEqualTransactionData(t, l2Txs[i], txDataDecoded[i])
}
```

It is clearer and avoids off-by-one risks.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- go.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- encoding/codecv7.go (5 hunks)
- encoding/codecv8.go (1 hunks)
- encoding/codecv8_test.go (1 hunks)
- encoding/da.go (6 hunks)
- encoding/interfaces.go (2 hunks)
- go.mod (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
encoding/interfaces.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/interfaces.go:95-108
Timestamp: 2024-10-17T04:13:14.579Z
Learning: In the `CodecFromConfig` function in the Go `encoding/interfaces.go` file, if none of the chain configuration conditions match, it's acceptable to default to returning `&DACodecV0{}` because, in the current logic, we can only deduce the codec version as the function implements, and the logic is complete.
encoding/codecv8.go (1)

Same learning as above (the `CodecFromConfig` default in `encoding/interfaces.go`).

encoding/codecv7.go (3)

Learnt from: colinlyguo (PR: scroll-tech/da-codec#25, File: encoding/codecv1_types.go:105-116, Timestamp: 2024-10-18T03:40:09.800Z): The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.

Learnt from: colinlyguo (PR: scroll-tech/da-codec#25, File: encoding/codecv1_types.go:152-154, Timestamp: 2024-10-18T03:41:31.377Z): In the `daBatchV1` struct, the `BlobBytes()` method is intentionally returning `nil`.

Plus the same `CodecFromConfig` learning as above.

encoding/da.go (1)

Same `Encode` learning as above.
🧬 Code Graph Analysis (4)
encoding/interfaces.go (1)
- encoding/codecv8.go (1): NewDACodecV8 (7-14)

encoding/codecv8.go (2)
- encoding/codecv7.go (1): DACodecV7 (20-22)
- encoding/interfaces.go (1): CodecV8 (93-93)

encoding/codecv8_test.go (2)
- encoding/interfaces.go (3): CodecFromVersion (97-120), CodecV8 (93-93), DABatch (33-43)
- encoding/da.go (3): Batch (119-129), Block (102-107), TxsToTxsData (378-416)

encoding/da.go (1)
- encoding/interfaces.go (5): CodecV7 (92-92), CodecV8 (93-93), CodecV4 (89-89), CodecV5 (90-90), CodecV6 (91-91)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (5)
encoding/codecv7.go (1)
20-30: `forcedVersion` design looks correct — nice use of an override hook; this keeps V7 logic reusable for V8. No functional issues spotted here.

encoding/interfaces.go (2)
93-97: Remember to update any String()/marshal helpers for CodecVersion

`CodecV8` is added to the enum — ensure helper methods such as `func (v CodecVersion) String() string` or any JSON marshalers include the new case; otherwise logs/JSON will show the numeric `8`.
122-128: Codec selection path prioritisation looks good — placing `IsFeynman` before `IsEuclidV2` guarantees the newer hardfork wins. No further concerns.

encoding/da.go (2)
769-773: Duplicated hardfork labels

`GetHardforkName` now returns `"euclidV2"` for the window between EuclidV2 and Feynman, which is fine, but note that `"euclid"` and `"euclidV2"` are now both possible outputs. Verify downstream consumers expect this extra label.
818-840: Compression gate updated — good

Including `CodecV8` in the compatibility-check branch keeps behaviour aligned with V7+; the change looks correct.
Will this PR also contain the zstd version update?
Sounds good. Will change the target branch of this PR: #51.
* update zstd dep
* turn zstd dependency to official
* support 2 zstd libs
* address comments
* tweak comments
* typo fix
* add CompressScrollBatchBytes interface

Co-authored-by: colin <[email protected]> Co-authored-by: colinlyguo <[email protected]>
Actionable comments posted: 4
♻️ Duplicate comments (2)
encoding/codecv8_test.go (2)
73-89: Tests should use public interface methods instead of casting to private types

The test is still casting to the private `*daBatchV7` type, which creates tight coupling to internal implementation details. This makes the tests brittle and could break if the embedding structure changes.

Also applies to: 92-100
237-238: Incorrect use of strings.TrimRight on hex strings

The use of `strings.TrimRight` with `"0"` can remove individual '0' characters, potentially creating odd-length hex strings and losing significant half-byte data.
🧹 Nitpick comments (8)
libzstd/encoder-legacy/src/lib.rs (1)
69-101: Review unsafe code usage and error handling in FFI function.The unsafe extern "C" function handles memory correctly:
- Proper slice construction from raw pointers
- Buffer size validation before writing
- Comprehensive error handling and propagation
- Correct output size reporting
However, consider these improvements:
pub unsafe extern "C" fn compress_scroll_batch_bytes_legacy( src: *const c_uchar, src_size: u64, output_buf: *mut c_uchar, output_buf_size: *mut u64, ) -> *const c_char { + // Add null pointer checks + if src.is_null() || output_buf.is_null() || output_buf_size.is_null() { + return b"compress_scroll_batch_bytes_legacy: null pointer passed\0".as_ptr() as *const c_char; + } + let buf_size = *output_buf_size; + if buf_size == 0 { + return b"compress_scroll_batch_bytes_legacy: zero output buffer size\0".as_ptr() as *const c_char; + }encoding/zstd/zstd.go (1)
21-24: Consider consistent empty input handling

Both functions return an error for empty input, but the behavior might be inconsistent with the underlying C functions. Consider whether empty input should return empty output instead, especially since some compression scenarios might legitimately have empty batches.

```diff
-	if len(batchBytes) == 0 {
-		return nil, fmt.Errorf("input batch is empty")
-	}
+	if len(batchBytes) == 0 {
+		return []byte{}, nil
+	}
```

Also applies to: 42-45
libzstd/encoder-standard/Makefile (1)
1-1: Consider adding a test target for completeness

While not critical, adding a `test` target would align with standard Makefile conventions and the static analysis recommendation.

```diff
-.PHONY: all clean build install
+.PHONY: all clean build install test
+
+test:
+	cargo test
```

libzstd/encoder-legacy/Makefile (1)
1-1: Consider adding a test target for consistency

Same as the standard encoder: adding a `test` target would improve consistency with Makefile conventions.

```diff
-.PHONY: all clean build install
+.PHONY: all clean build install test
+
+test:
+	cargo test
```

libzstd/encoder-standard/src/lib.rs (1)
80-82: Consider handling the expect case more gracefully

While the expect message indicates this should be infallible, consider whether a proper error return would be more robust for the FFI boundary.

```diff
-	encoder.set_pledged_src_size(Some(src.len() as u64)).expect(
-		"compress_scroll_batch_bytes_standard: failed to set pledged src size, should be infallible",
-	);
+	if let Err(e) = encoder.set_pledged_src_size(Some(src.len() as u64)) {
+		return out_as_err(&format!("failed to set pledged src size: {}", e), out);
+	}
```

encoding/codecv8_test.go (1)
295-295: Fix typo in variable name

The variable name contains a typo: "Incompressable" should be "Incompressible".

```diff
-maxAvailableBytesIncompressable := maxEffectiveBlobBytes - 5 - blobPayloadV7MinEncodedLength
+maxAvailableBytesIncompressible := maxEffectiveBlobBytes - 5 - blobPayloadV7MinEncodedLength
```

Also update all usages of this variable in lines 345, 351, and 357.

encoding/zstd/add_symbol_prefix.sh (2)
encoding/zstd/add_symbol_prefix.sh (2)
34-38: Prefix-existence check may yield false positives
grep -q "${PREFIX}"triggers on any symbol containing the substring, not necessarily one starting with the prefix, e.g.foo_scroll_legacy_bar.-if "$LLVM_NM" "$LIB_FILE" 2>/dev/null | grep -q "${PREFIX}"; then +if "$LLVM_NM" "$LIB_FILE" 2>/dev/null | awk '$3 ~ /^'"$PREFIX"'/ {found=1; exit} END{exit !found}'; then
71-76: Potential race & loss of file metadata during `mv`

Renaming `${LIB_FILE%.*}_new.a` over the original loses timestamps and, if interrupted, may leave the library missing. Use `mv -f` to force overwrite and/or `cp --preserve=timestamps` followed by `mv` to reduce risk.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- libzstd/encoder-legacy/Cargo.lock is excluded by !**/*.lock
- libzstd/encoder-standard/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (25)
- encoding/codecv0.go (1 hunks)
- encoding/codecv2.go (5 hunks)
- encoding/codecv4.go (3 hunks)
- encoding/codecv7.go (7 hunks)
- encoding/codecv8.go (1 hunks)
- encoding/codecv8_test.go (1 hunks)
- encoding/da_test.go (1 hunks)
- encoding/interfaces.go (4 hunks)
- encoding/interfaces_test.go (1 hunks)
- encoding/zstd/add_scroll_prefix_in_zstd_related_symbols.sh (0 hunks)
- encoding/zstd/add_symbol_prefix.sh (1 hunks)
- encoding/zstd/libscroll_zstd_darwin_arm64.go (1 hunks)
- encoding/zstd/libscroll_zstd_linux_amd64.go (1 hunks)
- encoding/zstd/libscroll_zstd_linux_arm64.go (1 hunks)
- encoding/zstd/zstd.go (3 hunks)
- libzstd/encoder-legacy/Cargo.toml (1 hunks)
- libzstd/encoder-legacy/Makefile (1 hunks)
- libzstd/encoder-legacy/src/lib.rs (1 hunks)
- libzstd/encoder-standard/.gitignore (1 hunks)
- libzstd/encoder-standard/Cargo.toml (1 hunks)
- libzstd/encoder-standard/Makefile (1 hunks)
- libzstd/encoder-standard/rust-toolchain (1 hunks)
- libzstd/encoder-standard/src/lib.rs (1 hunks)
- libzstd/encoder/src/lib.rs (0 hunks)
- libzstd/src/lib.rs (0 hunks)
💤 Files with no reviewable changes (3)
- encoding/zstd/add_scroll_prefix_in_zstd_related_symbols.sh
- libzstd/src/lib.rs
- libzstd/encoder/src/lib.rs
✅ Files skipped from review due to trivial changes (7)
- libzstd/encoder-standard/rust-toolchain
- libzstd/encoder-standard/.gitignore
- encoding/zstd/libscroll_zstd_darwin_arm64.go
- encoding/zstd/libscroll_zstd_linux_amd64.go
- encoding/zstd/libscroll_zstd_linux_arm64.go
- libzstd/encoder-legacy/Cargo.toml
- libzstd/encoder-standard/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (2)
- encoding/codecv7.go
- encoding/interfaces.go
🧰 Additional context used
🧠 Learnings (4)
encoding/codecv4.go, encoding/codecv2.go, encoding/codecv8_test.go, encoding/codecv8.go (1 each)

Learnt from: colinlyguo (PR: scroll-tech/da-codec#25, File: encoding/codecv1_types.go:105-116, Timestamp: 2024-10-18T03:40:09.800Z): The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
🪛 checkmake (0.2.2)
libzstd/encoder-legacy/Makefile
[warning] 1-1: Missing required phony target "test"
(minphony)
libzstd/encoder-standard/Makefile
[warning] 1-1: Missing required phony target "test"
(minphony)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (23)
encoding/codecv0.go (1)
434-436: LGTM! Correct no-op compression implementation for CodecV0. The implementation correctly provides a no-op compression method that returns the input bytes unchanged, which is appropriate for CodecV0 since it doesn't perform compression. This maintains interface compatibility with other codec versions.
libzstd/encoder-legacy/src/lib.rs (3)
17-20: Verify the block size target and maximum blocks constants

The constants `N_BLOCK_SIZE_TARGET = 124 * 1024` and `N_MAX_BLOCKS = 10` should be verified to ensure they align with the expected compression requirements for scroll batches. Please confirm these values are appropriate for the intended use case and match any existing configuration in the Go codebase.
23-50: Well-configured zstd encoder for legacy compression. The encoder configuration appropriately:
- Disables literal compression for deterministic output
- Sets window log limit for compatibility
- Configures target block size
- Disables checksum, magic bytes, and dictionary ID for minimal overhead
- Includes content size for decode-time validation
This configuration appears optimized for the scroll-tech use case.
53-64: Robust error message handling with buffer overflow protection. The error handling correctly:
- Checks buffer capacity before copying
- Provides fallback error message when buffer is too small
- Properly null-terminates C-style strings
- Uses safe slice operations
Good defensive programming practices.
encoding/interfaces_test.go (1)
27-27: Test case correctly added for CodecV8

The test case follows the established pattern and correctly validates that `CodecFromVersion` returns a `DACodecV8` instance for `CodecV8` input.

encoding/da_test.go (1)
132-132: Correctly updated to use legacy compression method

The test appropriately uses `CompressScrollBatchBytesLegacy`, which aligns with the compression method separation introduced in this PR.
206-206: Correctly refactored to use codec instance method for compression

The change from a direct package call to the codec instance method `d.CompressScrollBatchBytes(batchBytes)` enables version-specific compression behavior and improves encapsulation.

268-268: Consistent compression method refactoring

Properly updated to use the codec instance method, maintaining consistency with the compression refactoring pattern throughout the codebase.

296-296: Appropriate use of codec instance compression method

The refactoring correctly delegates compression to the codec instance, enabling proper version-specific compression handling.
encoding/codecv2.go (2)
157-157: Well-executed refactoring to instance methods

The migration from direct `zstd.CompressScrollBatchBytes` calls to the instance method `d.CompressScrollBatchBytes` is consistent and enables codec-specific compression implementations while maintaining the same functionality. Also applies to: 239-239, 252-252, 266-266.

293-296: Clean wrapper implementation for legacy compression

The new `CompressScrollBatchBytes` method properly encapsulates the legacy compression logic, maintaining backward compatibility for CodecV2 while enabling the architectural flexibility for different compression implementations across codec versions.
17-36: Excellent separation of legacy and standard compression. The split into two distinct functions with clear documentation enables proper codec versioning while maintaining consistent API patterns. The error handling and C interop are implemented correctly.
Also applies to: 38-57
libzstd/encoder-standard/Makefile (1)
1-57: Well-structured Makefile with proper platform support. The platform detection, build process, and installation logic are correctly implemented. The macOS deployment target setting and ranlib usage are appropriate for ensuring compatibility.
libzstd/encoder-legacy/Makefile (1)
1-57: Consistent Makefile implementation for legacy encoder. The structure and logic mirror the standard encoder Makefile appropriately, ensuring consistent build processes across both encoder libraries.
libzstd/encoder-standard/src/lib.rs (3)
23-50: Well-configured encoder initialization — the encoder configuration appropriately disables literal compression, sets the window log limit, and configures frame parameters for the CodecV8 use case. The parameter choices align well with the compression requirements.

52-64: Robust error message handling for FFI — the `out_as_err` function properly handles error message truncation and null-termination for C-style strings, ensuring safe error reporting across FFI boundaries.

66-101: Solid FFI implementation with proper safety measures — the compression function correctly handles memory safety, buffer bounds checking, and error propagation. The unsafe code usage is appropriate and well-contained.
encoding/codecv8.go (6)
17-40: Well-documented design rationale — excellent documentation explaining why specific methods need to be overridden due to Go's method receiver behavior. The constructor properly initializes the codec with the forced version.

42-65: Correct implementation of compression compatibility check — the method properly implements the new compression scheme with clear documentation of the dual-mode behavior. Good error handling and debug logging.

67-142: Robust blob construction with proper compression handling — the implementation correctly constructs blobs with optional compression, proper size validation, and challenge digest calculation. The padding strategy for the challenge digest is well-documented.

144-168: Clear separation of concerns for compatibility checking — the method correctly implements compatibility checking for batch processing, with clear documentation of its purpose and behavior.

170-208: Accurate size estimation with compression awareness — the estimation methods correctly account for compression effects while maintaining consistent interfaces for chunk and batch operations.

210-213: Clean delegation to new compression implementation — the method correctly implements the new compression scheme by delegating to `CompressScrollBatchBytesStandard`.
georgehao left a comment:
LGTM
Actionable comments posted: 0
🧹 Nitpick comments (3)
README.md (3)
22-43: Consolidate the 3-step rebuild into a single `make` target

The current instructions require developers to manually `cd` into two different crates and then run a separate symbol-prefixing script. This is error-prone (easy to forget step 3) and makes CI scripting harder. Consider adding a top-level Make target (e.g. `make libzstd`) that:

- invokes `make install` in both encoder folders;
- runs `encoding/zstd/add_symbol_prefix.sh` automatically (optionally a no-op on unsupported OSes).

That would reduce these three bullets to a single command in the README and keep human and CI steps in sync.
47-52: Specify a minimum Rust toolchain to avoid mismatched builds

Explicitly state the tested Rust version (`rustup default 1.75.0` or a `rust-toolchain` file) so that future Rust releases with breaking changes don't silently produce incompatible `.a` files.
54-64: Docker snippet: add missing build commands & non-root tip

The Docker examples install build-essentials but never run the encoder builds. Readers may assume the library is produced automatically. Append:

```sh
cd libzstd/encoder-{legacy,standard}
make install
```

Also add `--user` flags or a note about `chown` so generated files aren't root-owned when the container exits.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- README.md (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (1)
README.md (1)

45-45: `add_symbol_prefix.sh` is macOS-only ⇒ a Linux rebuild still requires a macOS host

The note effectively means Linux-only contributors cannot produce reproducible binaries. Either document a Linux alternative (e.g. an `objcopy --prefix-symbols` flow) or flag that cross-platform contributors need macOS access. Otherwise downstream packagers may be blocked.
Force-pushed from 0cc621b to 0212dc2.
Purpose or design rationale of this PR

This PR implements codecv8.

PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

Breaking change label

Does this PR have the `breaking-change` label?