refactor(l2geth dependency): downgrade to l2geth latest develop #31
Conversation
* refactor: move some common functions to encoding
* fix golint
Walkthrough: The pull request introduces significant changes across multiple files.
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (24)
encoding/interfaces.go (1)
97-109: Simplify conditional logic in `CodecFromConfig`

You can remove the unnecessary `else` statements after `return` statements to make the code more idiomatic and improve readability. Apply this diff to simplify the function:

```diff
 func CodecFromConfig(chainCfg *params.ChainConfig, startBlockNumber *big.Int, startBlockTimestamp uint64) Codec {
 	if chainCfg.IsDarwinV2(startBlockTimestamp) {
 		return &DACodecV4{}
-	} else if chainCfg.IsDarwin(startBlockTimestamp) {
+	}
+	if chainCfg.IsDarwin(startBlockTimestamp) {
 		return &DACodecV3{}
-	} else if chainCfg.IsCurie(startBlockNumber) {
+	}
+	if chainCfg.IsCurie(startBlockNumber) {
 		return &DACodecV2{}
-	} else if chainCfg.IsBernoulli(startBlockNumber) {
+	}
+	if chainCfg.IsBernoulli(startBlockNumber) {
 		return &DACodecV1{}
-	} else {
+	}
 	return &DACodecV0{}
 }
```

encoding/codecv3_types.go (4)
15-25: Clarify the purpose and usage of the `blobBytes` field

The `daBatchV3` struct includes a field `blobBytes []byte`, which seems redundant given that `blob *kzg4844.Blob` is already present. Consider clarifying the need for both fields or removing `blobBytes` if it's unnecessary.
27-57: Handle error propagation in constructor functions

In the `newDABatchV3` constructor, the error from `blobDataProofForPICircuit` is returned directly. To provide more context, consider wrapping the error using `fmt.Errorf` to include information about the constructor's operation. Apply this diff to improve error context:

```diff
 if err != nil {
-	return nil, err
+	return nil, fmt.Errorf("failed to compute blob data proof: %w", err)
 }
```
105-128: Handle nil pointers more gracefully

In `blobDataProofForPICircuit`, the function returns an error if `blob` or `z` is nil. Consider providing more detailed context in the error messages to aid debugging. Improve the error messages as follows:

```diff
 if b.blob == nil {
-	return [2]common.Hash{}, errors.New("called blobDataProofForPICircuit with empty blob")
+	return [2]common.Hash{}, errors.New("blobDataProofForPICircuit: b.blob is nil")
 }
 if b.z == nil {
-	return [2]common.Hash{}, errors.New("called blobDataProofForPICircuit with empty z")
+	return [2]common.Hash{}, errors.New("blobDataProofForPICircuit: b.z is nil")
 }
```
162-191: Avoid redefining JSON serialization structures

In the `MarshalJSON` method, a new struct `daBatchV3JSON` is defined solely for JSON serialization. This approach can lead to maintenance issues if the struct fields change. Consider using struct tags on `daBatchV3` fields instead. Refactor the `daBatchV3` struct to include JSON tags:

```diff
 type daBatchV3 struct {
 	daBatchV0

-	blobVersionedHash  common.Hash
-	lastBlockTimestamp uint64
-	blobDataProof      [2]common.Hash
+	blobVersionedHash  common.Hash    `json:"blob_versioned_hash"`
+	lastBlockTimestamp uint64         `json:"last_block_timestamp"`
+	blobDataProof      [2]common.Hash `json:"blob_data_proof"`
 	blob               *kzg4844.Blob  `json:"-"`
 	z                  *kzg4844.Point `json:"-"`
 	blobBytes          []byte         `json:"-"`
 }
```

Then, you can remove the custom `MarshalJSON` method.

encoding/codecv2.go (2)
97-207: Simplify `constructBlobPayload` and improve error checks

The `constructBlobPayload` function is lengthy and complex, making it harder to maintain and test. Consider breaking it down into smaller helper functions for readability, as in the sketch below. Also, ensure all errors are properly checked and context is provided.
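To make the suggestion concrete, here is a minimal, self-contained sketch of the decomposition pattern. It does not use the actual da-codec types or helpers: chunks are modeled as plain byte slices and the helper names (`buildMetadata`, `concatChunks`, `challengeDigest`) are invented for illustration.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// buildMetadata writes the number of chunks followed by each chunk's length.
func buildMetadata(chunks [][]byte) []byte {
	meta := make([]byte, 2+4*len(chunks))
	binary.BigEndian.PutUint16(meta[0:2], uint16(len(chunks)))
	for i, c := range chunks {
		binary.BigEndian.PutUint32(meta[2+4*i:], uint32(len(c)))
	}
	return meta
}

// concatChunks appends all chunk payloads after the metadata section.
func concatChunks(meta []byte, chunks [][]byte) []byte {
	payload := append([]byte{}, meta...)
	for _, c := range chunks {
		payload = append(payload, c...)
	}
	return payload
}

// challengeDigest hashes the assembled payload; it stands in for the real
// challenge-digest computation.
func challengeDigest(payload []byte) [32]byte {
	return sha256.Sum256(payload)
}

// constructPayload composes the helpers; each step is small enough to test in isolation.
func constructPayload(chunks [][]byte) ([]byte, [32]byte, error) {
	if len(chunks) == 0 {
		return nil, [32]byte{}, fmt.Errorf("no chunks provided")
	}
	meta := buildMetadata(chunks)
	payload := concatChunks(meta, chunks)
	return payload, challengeDigest(payload), nil
}

func main() {
	payload, digest, err := constructPayload([][]byte{{0x01, 0x02}, {0x03}})
	if err != nil {
		panic(err)
	}
	fmt.Printf("payload: %x\ndigest: %x\n", payload, digest)
}
```

Splitting metadata assembly, payload concatenation, and digest computation like this lets each stage be unit-tested and reviewed independently.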
162-169: Avoid magic numbers in size checks

The code uses `minCompressedDataCheckSize` as a threshold for applying the compressed data check. If this constant is a magic number, consider defining it with an explanatory comment or making it a configurable parameter. Add a comment or define the constant more clearly.
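If the threshold stays as a constant, a documented declaration is usually enough. The value below is purely illustrative, not the codec's actual setting:

```go
// A documented constant instead of a bare number; 131072 (128 KiB) is an
// illustrative value only.
const (
	// minCompressedDataCheckSize is the smallest payload size, in bytes, at which
	// the compressed-data compatibility check is applied; smaller payloads skip it.
	minCompressedDataCheckSize = 131072
)
```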
encoding/codecv4.go (1)
140-251: Refactor `constructBlobPayload` for clarity

Similar to `DACodecV2`, the `constructBlobPayload` function is complex. Refactoring it into smaller functions could improve readability and maintainability. Consider extracting sections like metadata construction, challenge digest calculation, and blob creation into separate functions.
encoding/codecv0.go (3)
98-157: Refactor `DecodeDAChunksRawTx` for Improved Readability

The `DecodeDAChunksRawTx` method is lengthy and complex, which can impact readability and maintainability. Consider breaking it down into smaller helper functions to enhance clarity and facilitate future updates.
182-184: Ensure Consistent Error Wrapping

In line 182, you correctly wrap the original error using `%w` in `fmt.Errorf`. Ensure this practice is consistently applied throughout the codebase to maintain uniform error handling and facilitate error unwrapping.
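For reference, a small self-contained example of why `%w` matters: it keeps the wrapped error visible to `errors.Is` and `errors.As`, which plain `%v` formatting would not.

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

func readHeader() error {
	// Wrapping with %w preserves the original error for errors.Is / errors.As.
	return fmt.Errorf("failed to read batch header: %w", io.ErrUnexpectedEOF)
}

func main() {
	err := readHeader()
	fmt.Println(err)                                 // failed to read batch header: unexpected EOF
	fmt.Println(errors.Is(err, io.ErrUnexpectedEOF)) // true, thanks to %w
}
```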
406-424: Avoid Code Duplication in `computeBatchDataHash`

The `computeBatchDataHash` function appears similar in both `DACodecV0` and `DACodecV1`. Refactor this shared logic into a common helper function to adhere to the DRY (Don't Repeat Yourself) principle and improve maintainability.

encoding/codecv1.go (1)
405-458: Eliminate Duplication in `computeBatchDataHash` Function

The `computeBatchDataHash` method in `DACodecV1` duplicates code from `DACodecV0`. Refactoring this shared functionality into a common utility function will reduce code duplication and simplify maintenance.

encoding/da.go (2)
Line range hint 158-184: Explicitly Handle `L1MessageTxType` in `convertTxDataToRLPEncoding`

The `convertTxDataToRLPEncoding` function currently does not handle `types.L1MessageTxType`. If transactions of this type are expected, consider implementing the logic to encode them or explicitly document why they are excluded to avoid confusion.
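One possible shape for making the exclusion explicit is sketched below. The types and constants are simplified stand-ins for the real `types.TransactionData` and `types.L1MessageTxType` from scroll-tech/go-ethereum, so this illustrates the pattern rather than the actual function:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the real transaction types used by the codec.
const (
	l1MessageTxType  = 0x7e
	dynamicFeeTxType = 0x02
)

type txData struct {
	Type   uint8
	TxHash string
}

var errL1MessageNotEncodable = errors.New("L1 message transactions are not RLP-encoded into the blob payload")

// convertTxDataToRLPEncoding sketches explicit handling of the L1 message case.
func convertTxDataToRLPEncoding(tx *txData) ([]byte, error) {
	switch tx.Type {
	case l1MessageTxType:
		// Reject explicitly instead of falling through to a generic error.
		return nil, fmt.Errorf("tx %s: %w", tx.TxHash, errL1MessageNotEncodable)
	case dynamicFeeTxType:
		return []byte{dynamicFeeTxType /* RLP payload would follow */}, nil
	default:
		return nil, fmt.Errorf("unsupported tx type %d", tx.Type)
	}
}

func main() {
	_, err := convertTxDataToRLPEncoding(&txData{Type: l1MessageTxType, TxHash: "0xabc"})
	fmt.Println(err)
}
```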
Line range hint 337-384: Enhance Validation in `checkCompressedDataCompatibility`

The `checkCompressedDataCompatibility` function performs minimal validation on compressed data. Strengthen this function to handle all potential edge cases and ensure robust data validation, preventing security vulnerabilities or decompression errors.

encoding/codecv0_test.go (2)
348-375: Use Subtests for Better Test Structure

In the loop starting at line 348, consider using subtests with `t.Run(tc.name, func(t *testing.T) { ... })`. This approach improves test readability and provides clearer output when running tests with multiple cases.
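A minimal, self-contained illustration of the `t.Run` pattern; the table entries and the identity check are placeholders for the real encode-and-compare logic:

```go
package encoding_test

import "testing"

func TestChunkEncodeSubtests(t *testing.T) {
	testCases := []struct {
		name     string
		input    string
		expected string
	}{
		{"empty chunk", "", ""},
		{"single block", "blockTrace_02", "blockTrace_02"},
	}
	for _, tc := range testCases {
		tc := tc // capture range variable for parallel-safe subtests
		t.Run(tc.name, func(t *testing.T) {
			// Replace this identity check with the real encode-and-compare logic.
			if got := tc.input; got != tc.expected {
				t.Errorf("got %q, want %q", got, tc.expected)
			}
		})
	}
}
```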
643-682: Expand Test Coverage for Error Scenarios

In `TestCodecV0DecodeDAChunksRawTx`, include test cases that provide invalid input data to `DecodeDAChunksRawTx`. Testing error pathways ensures the function handles unexpected inputs gracefully and increases overall test coverage.

encoding/codecv3_test.go (5)
29-80: Refactor repeated code into helper functions

In the `TestCodecV3BlockEncode` function, there is repetitive code for reading blocks from JSON files, creating DA blocks, and encoding them. Consider refactoring this repetitive code into a helper function to improve readability and maintainability.

Example Refactor:

Create a helper function:

```go
func encodeAndAssertBlock(t *testing.T, codec Codec, blockFile string, expectedEncoding string) {
	block := readBlockFromJSON(t, blockFile)
	daBlock, err := codec.NewDABlock(block, 0)
	assert.NoError(t, err)
	encoded := hex.EncodeToString(daBlock.Encode())
	assert.Equal(t, expectedEncoding, encoded)
}
```

Update your test cases to use the helper:

```go
func TestCodecV3BlockEncode(t *testing.T) {
	codecv3, err := CodecFromVersion(CodecV3)
	require.NoError(t, err)

	encodeAndAssertBlock(t, codecv3, "testdata/blockTrace_02.json", "00000000000000020000000063807b2a0000000000000000000000000000000000000000000000000000000000001de9000355418d1e818400020000")
	encodeAndAssertBlock(t, codecv3, "testdata/blockTrace_03.json", "00000000000000030000000063807b2d0000000000000000000000000000000000000000000000000000000000001a2c0003546c3cbb39e500010000")
	// Add more test cases as needed...
}
```
84-155: Eliminate redundancy in chunk encoding tests

The `TestCodecV3ChunkEncode` function contains repetitive code for creating chunks, encoding them, and asserting the results. Refactor this code into a helper function to remove redundancy and enhance clarity.

Example Refactor:

Create a helper function:

```go
func encodeAndAssertChunk(t *testing.T, codec Codec, chunk *Chunk, expectedEncoding string) {
	daChunk, err := codec.NewDAChunk(chunk, 0)
	assert.NoError(t, err)
	encodedBytes, err := daChunk.Encode()
	assert.NoError(t, err)
	encoded := hex.EncodeToString(encodedBytes)
	assert.Equal(t, expectedEncoding, encoded)
}
```

Use the helper function in your test:

```go
func TestCodecV3ChunkEncode(t *testing.T) {
	codecv3, err := CodecFromVersion(CodecV3)
	require.NoError(t, err)

	// Example usage
	block := readBlockFromJSON(t, "testdata/blockTrace_02.json")
	originalChunk := &Chunk{Blocks: []*Block{block}}
	encodeAndAssertChunk(t, codecv3, originalChunk, "0100000000000000020000000063807b2a...")
	// Add more test cases as needed...
}
```
159-241: Ensure specific error assertions in invalid hash test

In the `TestCodecV3ChunkHash` function (lines 190-193), when testing with an invalid hash, the test asserts that an error occurs using `assert.Error(t, err)`. To improve test precision, consider asserting the specific error message to ensure that the error is due to the invalid hash format.

Suggestion:

```go
_, err = chunk.Hash()
if assert.Error(t, err) {
	assert.Contains(t, err.Error(), "invalid hash format")
}
```
386-453: Adopt table-driven tests to simplify `TestCodecV3NewDABatchFromBytes`

The `TestCodecV3NewDABatchFromBytes` function manually handles multiple test cases in a verbose manner. Refactor this test to use a table-driven approach, which will make it easier to add new test cases and improve readability.

Example Refactor:

```go
func TestCodecV3NewDABatchFromBytes(t *testing.T) {
	codecv3, err := CodecFromVersion(CodecV3)
	require.NoError(t, err)

	testCases := []struct {
		name     string
		jsonFile string
	}{
		{"Empty Batch", ""},
		{"Block 02", "testdata/blockTrace_02.json"},
		// Add more test cases...
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			// Test logic...
		})
	}
}
```
1260-1311: Improve maintainability with table-driven tests in `TestCodecV3BatchStandardTestCases`

The function `TestCodecV3BatchStandardTestCases` contains a large inline slice of test cases, which can be hard to read and maintain. Consider restructuring it into a table-driven test format to improve clarity and ease of updates.

encoding/zstd/zstd.go (1)
14-15: Enhance error message for empty input in `CompressScrollBatchBytes`

The added error check for empty input improves input validation. However, the error message "input batch is empty" could be more descriptive or use a predefined error type to provide better context for the caller.

Suggestion:

```go
var ErrEmptyBatch = errors.New("CompressScrollBatchBytes: input batch is empty")

func CompressScrollBatchBytes(batchBytes []byte) ([]byte, error) {
	if len(batchBytes) == 0 {
		return nil, ErrEmptyBatch
	}
	// Rest of the function...
}
```

This way, callers can check for `ErrEmptyBatch` specifically if needed.

Also applies to: 20-23
encoding/interfaces_test.go (1)
39-117: Consider adding edge cases for chain configuration

While the test cases cover the main upgrade scenarios, consider adding edge cases such as:
- Configurations with future block numbers
- Configurations with zero block numbers but non-zero timestamps
```diff
 func TestCodecFromConfig(t *testing.T) {
 	tests := []struct {
 		name      string
 		config    *params.ChainConfig
 		blockNum  *big.Int
 		timestamp uint64
 		want      Codec
 	}{
+		{
+			name: "Future block number",
+			config: &params.ChainConfig{
+				LondonBlock: big.NewInt(100),
+			},
+			blockNum:  big.NewInt(50),
+			timestamp: 0,
+			want:      &DACodecV0{},
+		},
+		{
+			name: "Zero block with timestamp",
+			config: &params.ChainConfig{
+				DarwinTime: new(uint64),
+			},
+			blockNum:  big.NewInt(0),
+			timestamp: 1000,
+			want:      &DACodecV3{},
+		},
 		// existing test cases...
 	}
```

encoding/da_test.go (6)
118-136: Consider parameterizing the blob compression test

While the test is well-structured, consider:
- Using test table pattern for multiple test cases
- Adding edge cases (empty blob, max size blob)
- Testing with different compression levels
```diff
 func TestBlobCompressDecompress(t *testing.T) {
+	tests := []struct {
+		name    string
+		input   string
+		wantErr bool
+	}{
+		{"Valid blob", "0060e7159d...", false},
+		{"Empty blob", "", false},
+		{"Max size blob", strings.Repeat("ff", 131072), false},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			blobBytes, err := hex.DecodeString(tt.input)
+			if tt.wantErr {
+				assert.Error(t, err)
+				return
+			}
+			assert.NoError(t, err)
+			// ... rest of the test logic
+		})
+	}
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (27)
- encoding/bitmap.go (2 hunks)
- encoding/bitmap_test.go (1 hunks)
- encoding/codecv0.go (1 hunks)
- encoding/codecv0/codecv0.go (0 hunks)
- encoding/codecv0/codecv0_test.go (0 hunks)
- encoding/codecv0_test.go (1 hunks)
- encoding/codecv0_types.go (1 hunks)
- encoding/codecv1.go (1 hunks)
- encoding/codecv1/codecv1.go (0 hunks)
- encoding/codecv1_types.go (1 hunks)
- encoding/codecv2.go (1 hunks)
- encoding/codecv2/codecv2.go (0 hunks)
- encoding/codecv3.go (1 hunks)
- encoding/codecv3/codecv3.go (0 hunks)
- encoding/codecv3/codecv3_test.go (0 hunks)
- encoding/codecv3_test.go (1 hunks)
- encoding/codecv3_types.go (1 hunks)
- encoding/codecv4.go (1 hunks)
- encoding/codecv4/codecv4.go (0 hunks)
- encoding/codecv4/codecv4_test.go (0 hunks)
- encoding/codecv4_test.go (1 hunks)
- encoding/da.go (11 hunks)
- encoding/da_test.go (6 hunks)
- encoding/interfaces.go (1 hunks)
- encoding/interfaces_test.go (1 hunks)
- encoding/zstd/zstd.go (1 hunks)
- go.mod (1 hunks)
💤 Files with no reviewable changes (8)
- encoding/codecv3/codecv3_test.go
- encoding/codecv0/codecv0_test.go
- encoding/codecv4/codecv4_test.go
- encoding/codecv1/codecv1.go
- encoding/codecv2/codecv2.go
- encoding/codecv4/codecv4.go
- encoding/codecv0/codecv0.go
- encoding/codecv3/codecv3.go
🧰 Additional context used
📓 Learnings (9)
encoding/codecv3.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/codecv3_types.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/codecv1_types.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/da.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1.go:96-100
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The function `bytesFromBlobCanonical` returns `[maxEffectiveBlobBytes]byte` and does not return an error.
encoding/codecv2.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/codecv4.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/codecv0.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/codecv0_types.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/codecv1.go (1)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
🔇 Additional comments (15)
encoding/bitmap.go (1)
Line range hint 10-64: LGTM!
The refactoring of the bitmap handling functions enhances encapsulation by making them private. The use of skippedL1MessageBitmapByteSize improves adaptability for varying bitmap sizes.
encoding/codecv1_types.go (2)
130-132:
Implement the BlobBytes method or remove it if unused
The BlobBytes method currently returns nil. If this method is intended to provide the blob data as bytes, please implement it. If it is not needed, consider removing it to avoid confusion.
Apply this diff to implement the method:
```diff
 func (b *daBatchV1) BlobBytes() []byte {
-	return nil
+	if b.blob != nil {
+		return b.blob[:]
+	}
+	return nil
 }
```

⛔ Skipped due to learnings
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:152-154
Timestamp: 2024-11-12T12:17:31.140Z
Learning: In the `daBatchV1` struct, the `BlobBytes()` method is intentionally returning `nil`.
105-116: Verify the correctness of byte offsets in Encode method
Ensure that all byte offsets used in the Encode method align with the expected structure of the encoded batch. This is crucial for accurate serialization and deserialization.
Run the following script to check if the offsets match the sizes of the fields:
✅ Verification successful
Let me gather the actual offset constants from the code to compare with our calculated offsets.
Let's try a different approach to find the offset constants, as they might be defined in a different way or in a different file.
Now I can verify the offsets in the Encode method of daBatchV1. Let me compare the actual offsets from the code with our calculated offsets:
For CodecV1:
- `daBatchOffsetVersion` = 0 (matches)
- `daBatchOffsetBatchIndex` = 1 (matches)
- `daBatchV1OffsetL1MessagePopped` = 9 (matches)
- `daBatchV1OffsetTotalL1MessagePopped` = 17 (matches)
- `daBatchOffsetDataHash` = 25 (matches)
- `daBatchV1OffsetBlobVersionedHash` = 57 (matches)
- `daBatchV1OffsetParentBatchHash` = 89 (matches)
- `daBatchV1OffsetSkippedL1MessageBitmap` = 121 (matches)
The byte offsets in the Encode method perfectly align with the defined constants in the codebase. The offsets correctly account for the size of each field and ensure proper serialization without any gaps or overlaps. The total minimum encoded length (daBatchV1EncodedMinLength = 121) is also correct, representing the size of all fixed fields before the variable-length skippedL1MessageBitmap.
All byte offsets in the Encode method are correct and properly aligned
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the byte offsets correspond to the sizes of the batch fields.
# Expected sizes (in bytes) of each field
declare -A field_sizes=(
["version"]=1
["batchIndex"]=8
["l1MessagePopped"]=8
["totalL1MessagePopped"]=8
["dataHash"]=32
["blobVersionedHash"]=32
["parentBatchHash"]=32
["skippedL1MessageBitmap"]="<variable_length>"
)
# Calculate cumulative offsets
offset=0
for field in version batchIndex l1MessagePopped totalL1MessagePopped dataHash blobVersionedHash parentBatchHash; do
size=${field_sizes[$field]}
echo "$field: offset $offset, size $size"
offset=$((offset + size))
done
Length of output: 1730
Script:
#!/bin/bash
# Search for offset constant declarations
rg "^const.*Offset" encoding/codecv1_types.go
Length of output: 47
Script:
#!/bin/bash
# Search for offset variables/constants in the encoding directory
rg "Offset|EncodedMinLength" encoding/
Length of output: 15140
encoding/codecv3.go (1)
78-107: Ensure proper version checks in NewDABatchFromBytes
The version check compares the codec version from the data with CodecV3. Confirm that the data length and offsets are correctly aligned with the expected version to prevent misinterpretation of the batch data.
Run the following script to check for consistency in version usage:
✅ Verification successful
Let me gather more information about the version checks and offsets.
Based on the gathered information, I can now provide a final response:
Version check and data length validation are correctly implemented
The version check in NewDABatchFromBytes is properly aligned with the data structure:
- The version byte is correctly read from `daBatchOffsetVersion` (0)
- The data length check uses the correct constant `daBatchV3EncodedLength` (193 bytes)
- All field offsets are properly defined and consistently used across the codebase
- The implementation is consistent with other codec versions (V0-V4) which follow the same pattern
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that `CodecVersion` is consistently used and compared.
# Search for version comparisons in the codebase
rg 'CodecVersion\(data\['
# Verify that all comparisons match the intended versions
rg 'if CodecVersion\(data\[.*\]')'
Length of output: 1295
Script:
#!/bin/bash
# Search for offset constants and their usage
rg "daBatchOffset|daBatchV3" encoding/
# Search for length constants
rg "EncodedLength" encoding/
Length of output: 21395
encoding/codecv3_types.go (1)
83-97: Potential index misalignment in byte slices during encoding
In the Encode method, the offsets used in copy and binary.BigEndian.PutUint64 need careful attention to avoid misalignment. Ensure that the constants like daBatchOffsetVersion, daBatchV3OffsetL1MessagePopped, etc., are correctly defined and account for the sizes of the fields.
Run the following script to verify the correctness of the offsets:
Please ensure these offsets match the actual sizes of the serialized fields.
✅ Verification successful
Let me verify the actual constant definitions in the codebase to ensure they match our expected offsets.
Let me try a different approach to find these constants.
Let me search for the offset constants specifically.
No misalignment issues found in byte slice encoding
The offset constants are correctly defined and match the field sizes in the Encode method:
- Version (1 byte): `daBatchOffsetVersion` = 0
- BatchIndex (8 bytes): `daBatchOffsetBatchIndex` = 1 to `daBatchV3OffsetL1MessagePopped` = 9
- L1MessagePopped (8 bytes): `daBatchV3OffsetL1MessagePopped` = 9 to `daBatchV3OffsetTotalL1MessagePopped` = 17
- TotalL1MessagePopped (8 bytes): `daBatchV3OffsetTotalL1MessagePopped` = 17 to `daBatchOffsetDataHash` = 25
- DataHash (32 bytes): `daBatchOffsetDataHash` = 25 to `daBatchV3OffsetBlobVersionedHash` = 57
- BlobVersionedHash (32 bytes): `daBatchV3OffsetBlobVersionedHash` = 57 to `daBatchV3OffsetParentBatchHash` = 89
- ParentBatchHash (32 bytes): `daBatchV3OffsetParentBatchHash` = 89 to `daBatchV3OffsetLastBlockTimestamp` = 121
- LastBlockTimestamp (8 bytes): `daBatchV3OffsetLastBlockTimestamp` = 121 to `daBatchV3OffsetBlobDataProof` = 129
- BlobDataProof (64 bytes = 2 * 32): `daBatchV3OffsetBlobDataProof` = 129 to `daBatchV3EncodedLength` = 193
The offsets are properly aligned with no gaps, and the total encoded length of 193 bytes is correct.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the alignment and size of fields in the daBatchV3 encoding.
# Expected sizes
expected_offsets=(
"daBatchOffsetVersion=0"
"daBatchOffsetBatchIndex=1"
"daBatchV3OffsetL1MessagePopped=9"
"daBatchV3OffsetTotalL1MessagePopped=17"
"daBatchOffsetDataHash=25"
"daBatchV3OffsetBlobVersionedHash=57"
"daBatchV3OffsetParentBatchHash=89"
"daBatchV3OffsetLastBlockTimestamp=121"
"daBatchV3OffsetBlobDataProof=129"
"daBatchV3EncodedLength=193"
)
# Output the expected offsets for verification
for offset in "${expected_offsets[@]}"; do
echo "$offset"
done
Length of output: 1462
Script:
#!/bin/bash
# Search for the constant definitions in the codebase
rg --type go "^const.*(?:daBatch.*Offset|daBatch.*Length)" -A 10
Length of output: 66
Script:
#!/bin/bash
# Search for the constant definitions with simpler patterns
rg --type go "const.*daBatch" -B 2 -A 10
# Also search in the same file as the Encode method
fd "codecv3_types.go" --exec cat {}
Length of output: 11306
Script:
#!/bin/bash
# Search for the constant definitions in the encoding package
rg --type go "^const.*(?:kzgPointByteSize|daBatch.*Offset|daBatch.*Length)" encoding/
# Also check for any const blocks that might contain these definitions
rg --type go "const \(" -A 20 encoding/
Length of output: 5089
encoding/codecv0_types.go (1)
61-75:
Check for potential integer overflow in Decode method
In the Decode method of daBlockV0, ensure that the byte slices used in SetBytes and Uint64 operations do not cause integer overflows when dealing with untrusted input data.
Run the following script to verify the length and content of byte slices:
Please ensure that input validation is performed to prevent malicious input from causing overflows.
✅ Verification successful
Let me gather more information about the byte slice lengths and offsets.
Let me try a different approach to find the constants and type definitions.
Let me search for the blockContextByteSize constant which is used in the validation check.
No integer overflow risk in Decode method
The implementation is safe from integer overflows because:
- The method validates the total input length matches `blockContextByteSize` (60 bytes) up front
- The offset constants define non-overlapping ranges that fit exactly within the 60-byte block:
  - `numberOffset` (0) to `timestampOffset` (8): 8 bytes for uint64
  - `timestampOffset` (8) to `baseFeeOffset` (16): 8 bytes for uint64
  - `baseFeeOffset` (16) to `gasLimitOffset` (48): 32 bytes for big.Int
  - `gasLimitOffset` (48) to `numTransactionsOffset` (56): 8 bytes for uint64
  - `numTransactionsOffset` (56) to `numL1MessagesOffset` (58): 2 bytes for uint16
  - `numL1MessagesOffset` (58) to end (60): 2 bytes for uint16
The Go standard library's binary.BigEndian methods are safe by design and will panic if given insufficient bytes, but this can't happen here due to the length validation and proper offset calculations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that byte slices used in decoding have expected lengths.
# Expected byte slice lengths
echo "Expected baseFee byte length: 32 bytes"
Length of output: 86
Script:
#!/bin/bash
# Search for constant definitions related to byte offsets and sizes
rg -A 10 "const.*Offset|const.*Size" encoding/codecv0_types.go
Length of output: 64
Script:
#!/bin/bash
# Get the entire content of the file to analyze the constants and struct definition
cat encoding/codecv0_types.go
Length of output: 9123
Script:
#!/bin/bash
# Search for blockContextByteSize constant definition
rg "blockContextByteSize.*=" encoding/
# Also search for any other related size constants
rg "const.*blockContext" encoding/
Length of output: 247
encoding/codecv4.go (1)
313-326: Handle potential JSON marshaling errors
In JSONFromBytes, if the batch implements json.Marshaler, the custom MarshalJSON method should handle all necessary fields. Ensure that the marshaling process includes all relevant data or consider returning a more informative error if certain fields are missing.
Test the JSON output to ensure all fields are correctly serialized.
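A possible round-trip check is sketched below. `validEncodedBatchV4` and the listed JSON key names are assumptions made for illustration and should be replaced with the actual fixture and the keys that the batch's `MarshalJSON` really emits:

```go
package encoding

import (
	"encoding/json"
	"testing"

	"github.com/stretchr/testify/require"
)

// validEncodedBatchV4 is a placeholder for a fixture containing a correctly
// encoded v4 batch; it is assumed here for illustration.
var validEncodedBatchV4 []byte

func TestCodecV4JSONFromBytesFields(t *testing.T) {
	codec := &DACodecV4{}

	jsonBytes, err := codec.JSONFromBytes(validEncodedBatchV4)
	require.NoError(t, err)

	var fields map[string]any
	require.NoError(t, json.Unmarshal(jsonBytes, &fields))

	// The key names below are assumptions; replace them with the fields that
	// the batch's MarshalJSON actually emits.
	for _, key := range []string{"version", "batch_index", "blob_versioned_hash"} {
		if _, ok := fields[key]; !ok {
			t.Errorf("expected JSON field %q to be present", key)
		}
	}
}
```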
encoding/codecv0.go (2)
218-220:
Check for Nil Pointer Before Using c
After calling c, err := kzg4844.BlobToCommitment(blob), there is no check to confirm c is not nil before using it to compute blobVersionedHash. Ensure that c is validated to prevent possible nil pointer dereferences.
Apply this diff to add a nil check:
```diff
 c, err := kzg4844.BlobToCommitment(blob)
 if err != nil {
 	return nil, common.Hash{}, nil, fmt.Errorf("failed to create blob commitment: %w", err)
+}
+if c == nil {
+	return nil, common.Hash{}, nil, fmt.Errorf("blob commitment is nil")
 }
 blobVersionedHash := kzg4844.CalcBlobHashV1(sha256.New(), &c)
```

Likely invalid or redundant comment.
406-424: Verify Accurate Accumulation of totalL1MessagePoppedBeforeChunk
In the computeBatchDataHash function, totalL1MessagePoppedBeforeChunk is updated within the loop. Verify that this accumulation correctly represents the total L1 messages to ensure accurate hash computation.
✅ Verification successful
Let me gather more information about how NumL1Messages is implemented and used.
Let me try a different approach to find the implementation details.
Based on the comprehensive output, I can now provide the final response:
Accumulation of totalL1MessagePoppedBeforeChunk is Correctly Implemented
The implementation is verified to be correct for the following reasons:
- The `NumL1Messages` method properly tracks L1 messages at both Block and Chunk levels
- The accumulation is sequential and maintains the running total correctly
- Test cases in `da_test.go` validate the L1 message counting functionality
- The implementation handles both included and skipped L1 messages as documented
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify correct accumulation of totalL1MessagePoppedBeforeChunk in computeBatchDataHash functions.
# Expected: The variable accurately accumulates L1 messages across chunks.
# Command to check updates to totalL1MessagePoppedBeforeChunk:
rg 'totalL1MessagePoppedBeforeChunk\s*[\+\-]=\s*' -A 3 encoding/
Length of output: 573
Script:
#!/bin/bash
# Search for NumL1Messages implementation
ast-grep --pattern 'func $_($_ Chunk) NumL1Messages($_) $_'
# Also check for Chunk interface definition
ast-grep --pattern 'type Chunk interface {
$$$
}'
# And check for any tests related to this functionality
rg "NumL1Messages.*test" -B 2 -A 2
Length of output: 152
Script:
#!/bin/bash
# Search for NumL1Messages method implementations using ripgrep
rg "func.*NumL1Messages" -A 5
# Look for Chunk type/interface definition
rg "type.*Chunk" -A 10
# Find usages of NumL1Messages
rg "NumL1Messages\(" -B 2 -A 2
Length of output: 10005
encoding/codecv1.go (1)
94-99: Ensure Correct Usage of bytesFromBlobCanonical Return Value
The bytesFromBlobCanonical function returns an array of fixed size [maxEffectiveBlobBytes]byte. In DecodeTxsFromBlob, when slicing this array with batchBytes[:], ensure that the full byte slice is used appropriately and that any size assumptions are valid to prevent slice bounds errors.
✅ Verification successful
Based on the search results and the learnings provided, I can generate the final response now.
No Issues with bytesFromBlobCanonical Usage
The usage of bytesFromBlobCanonical in DecodeTxsFromBlob is correct. From the learnings, we know that bytesFromBlobCanonical always returns [maxEffectiveBlobBytes]byte and doesn't return an error. The slice operation batchBytes[:] is a safe way to convert the fixed-size array into a slice, and this pattern is consistently used across all codec versions (v1, v2, and v4) in the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check usage of `bytesFromBlobCanonical` to ensure correct handling of returned array.
# Expected: Functions correctly handle the fixed-size array without causing slice issues.
# Command to find usage:
rg 'bytesFromBlobCanonical' -A 3 encoding/
Length of output: 1317
encoding/bitmap_test.go (1)
11-44: Well-structured unit tests for decodeBitmap
The TestDecodeBitmap function effectively tests the decodeBitmap functionality, covering both valid and invalid scenarios. The use of the helper function isL1MessageSkipped enhances readability. Good job on comprehensive test coverage and proper error handling assertions.
go.mod (2)
6-6: Verify the security of the gomonkey dependency.
The addition of github.com/agiledragon/gomonkey/v2 for mocking in tests should be verified for security concerns.
✅ Verification successful
Let me gather more information about how this dependency is being used in the codebase.
The gomonkey dependency is safe to use in this context
The verification shows that:
- No security vulnerabilities were found for the gomonkey package in GitHub's security advisory database
- The dependency is used exclusively in test files (`*_test.go`) for mocking purposes
- The usage is limited to the `encoding` package tests and follows standard mocking patterns
- Version v2.12.0 is a stable release
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for security advisories for gomonkey
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: GO, package: "github.com/agiledragon/gomonkey") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
Length of output: 363
Script:
#!/bin/bash
# Check for gomonkey usage in the codebase
rg -l "gomonkey" --type go
# Check go.mod for the dependency details
cat go.mod
# Look for test files using gomonkey
rg "gomonkey" --type go -A 3
Length of output: 4982
7-7: Verify compatibility with the updated go-ethereum version.
The update to github.com/scroll-tech/go-ethereum from v1.10.14-0.20240607130425-e2becce6a1a4 to v1.10.14-0.20241206090635-a5ecce2dc333 needs compatibility verification.
encoding/interfaces_test.go (1)
11-37: LGTM! Well-structured test cases for codec version mapping.
The test cases provide good coverage of both valid and invalid codec versions, with proper error handling verification.
encoding/da_test.go (1)
61-61: LGTM! Consistent method name updates.
The updates from L2GasUsed to TotalGasUsed are applied consistently across all test cases.
Also applies to: 69-69, 79-79
```go
// JSONFromBytes converts the bytes to a daBatchV3 and then marshals it to JSON.
func (d *DACodecV3) JSONFromBytes(data []byte) ([]byte, error) {
	batch, err := d.NewDABatchFromBytes(data)
	if err != nil {
		return nil, fmt.Errorf("failed to decode DABatch from bytes: %w", err)
	}

	jsonBytes, err := json.Marshal(batch)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal DABatch to JSON, version %d, hash %s: %w", batch.Version(), batch.Hash(), err)
	}

	return jsonBytes, nil
}
```
Handle potential errors when marshaling to JSON in JSONFromBytes
Ensure that the DABatch implementation supports JSON marshaling. If custom marshaling is required, implement the MarshalJSON method for DABatch.
Provide an example implementation:
```diff
 func (b *daBatchV3) MarshalJSON() ([]byte, error) {
+	// Implement custom marshaling logic if necessary
+	type Alias daBatchV3
+	return json.Marshal(&struct {
+		*Alias
+	}{
+		Alias: (*Alias)(b),
+	})
 }
```
```go
// newDABatchV0 is a constructor for daBatchV0.
func newDABatchV0(version CodecVersion, batchIndex, l1MessagePopped, totalL1MessagePopped uint64, dataHash, parentBatchHash common.Hash, skippedL1MessageBitmap []byte) *daBatchV0 {
	return &daBatchV0{
		version:                version,
		batchIndex:             batchIndex,
		l1MessagePopped:        l1MessagePopped,
		totalL1MessagePopped:   totalL1MessagePopped,
		dataHash:               dataHash,
		parentBatchHash:        parentBatchHash,
		skippedL1MessageBitmap: skippedL1MessageBitmap,
	}
}

// Encode serializes the DABatchV0 into bytes.
func (b *daBatchV0) Encode() []byte {
	batchBytes := make([]byte, daBatchV0EncodedMinLength+len(b.skippedL1MessageBitmap))
	batchBytes[daBatchOffsetVersion] = byte(b.version)
	binary.BigEndian.PutUint64(batchBytes[daBatchOffsetBatchIndex:daBatchV0OffsetL1MessagePopped], b.batchIndex)
	binary.BigEndian.PutUint64(batchBytes[daBatchV0OffsetL1MessagePopped:daBatchV0OffsetTotalL1MessagePopped], b.l1MessagePopped)
	binary.BigEndian.PutUint64(batchBytes[daBatchV0OffsetTotalL1MessagePopped:daBatchOffsetDataHash], b.totalL1MessagePopped)
	copy(batchBytes[daBatchOffsetDataHash:daBatchV0OffsetParentBatchHash], b.dataHash[:])
	copy(batchBytes[daBatchV0OffsetParentBatchHash:daBatchV0OffsetSkippedL1MessageBitmap], b.parentBatchHash[:])
	copy(batchBytes[daBatchV0OffsetSkippedL1MessageBitmap:], b.skippedL1MessageBitmap[:])
	return batchBytes
}
```
Avoid potential slicing errors in Encode method
In the Encode method of daBatchV0, the code uses slicing on batchBytes to copy skippedL1MessageBitmap. Ensure that batchBytes is appropriately sized before copying to prevent runtime panics.
Apply this diff to ensure safety:
```diff
 batchBytes := make([]byte, daBatchV0EncodedMinLength+len(b.skippedL1MessageBitmap))
 // ...
-copy(batchBytes[daBatchV0OffsetSkippedL1MessageBitmap:], b.skippedL1MessageBitmap[:])
+copy(batchBytes[daBatchV0OffsetSkippedL1MessageBitmap:], b.skippedL1MessageBitmap)
```

Also, consider adding checks to ensure skippedL1MessageBitmap is not nil.
```go
	// blob payload
	blob, blobVersionedHash, z, _, err := d.constructBlobPayload(batch.Chunks, d.MaxNumChunksPerBatch())
	if err != nil {
		return nil, fmt.Errorf("failed to construct blob payload, index: %d, err: %w", batch.Index, err)
	}

	if totalL1MessagePoppedAfter < batch.TotalL1MessagePoppedBefore {
		return nil, fmt.Errorf("batch index: %d, totalL1MessagePoppedAfter (%d) is less than batch.TotalL1MessagePoppedBefore (%d)", batch.Index, totalL1MessagePoppedAfter, batch.TotalL1MessagePoppedBefore)
	}
	l1MessagePopped := totalL1MessagePoppedAfter - batch.TotalL1MessagePoppedBefore

	daBatch := newDABatchV1(
		CodecV2,                   // version
		batch.Index,               // batchIndex
		l1MessagePopped,           // l1MessagePopped
		totalL1MessagePoppedAfter, // totalL1MessagePopped
		dataHash,                  // dataHash
		blobVersionedHash,         // blobVersionedHash
		batch.ParentBatchHash,     // parentBatchHash
		skippedL1MessageBitmap,    // skippedL1MessageBitmap
		blob,                      // blob
		z,                         // z
	)

	return daBatch, nil
}
```
Handle potential nil pointer dereference
In NewDABatch, the error handling for computeBatchDataHash and constructSkippedBitmap should ensure that none of the returned values are nil before proceeding.
Check for nil values after function calls:
```diff
 dataHash, err := d.computeBatchDataHash(batch.Chunks, batch.TotalL1MessagePoppedBefore)
 if err != nil {
 	return nil, fmt.Errorf("failed to compute batch data hash, index: %d, err: %w", batch.Index, err)
+} else if dataHash == nil {
+	return nil, fmt.Errorf("dataHash is nil after computeBatchDataHash")
 }
```

Repeat similar checks for other variables as needed.
```go
	enableCompression, err := d.CheckBatchCompressedDataCompatibility(batch)
	if err != nil {
		return nil, err
	}
```
Check return values when determining compression compatibility
In NewDABatch, the CheckBatchCompressedDataCompatibility method returns a boolean and an error. Currently, if an error occurs, it's returned, but if the boolean is false (i.e., compression is not compatible), the code proceeds without handling this case. Ensure that both return values are appropriately handled.
Update the code to handle the enableCompression flag properly.
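One way to make the `false` branch explicit is sketched below as a fragment extending the snippet quoted above. Whether an incompatible batch should be rejected or should fall back to an uncompressed payload is a design decision for the codec authors; the error text here is hypothetical.

```go
enableCompression, err := d.CheckBatchCompressedDataCompatibility(batch)
if err != nil {
	return nil, fmt.Errorf("failed to check batch compressed data compatibility, index: %d: %w", batch.Index, err)
}
if !enableCompression {
	// Hypothetical handling: surface the incompatibility instead of continuing silently.
	return nil, fmt.Errorf("batch index %d: compressed data is incompatible with this codec version", batch.Index)
}
```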
```go
	if rawBytes[0] == 0x1 {
		batchBytes, err := decompressScrollBlobToBatch(append(zstdMagicNumber, rawBytes[1:]...))
		if err != nil {
			return err
		}
		return decodeTxsFromBytes(batchBytes, chunks, d.MaxNumChunksPerBatch())
	} else {
		return decodeTxsFromBytes(rawBytes[1:], chunks, d.MaxNumChunksPerBatch())
	}
}
```
Ensure proper handling of compressed and uncompressed data
In DecodeTxsFromBlob, the function checks the first byte to determine if the data is compressed. Ensure that the first byte is validated properly to prevent incorrect parsing or potential security issues.
Add validation for the first byte:
```diff
 if rawBytes[0] == 0x1 {
+	// Ensure there is data after the compression flag
+	if len(rawBytes) <= 1 {
+		return errors.New("compressed data flag set but no data follows")
+	}
 	batchBytes, err := decompressScrollBlobToBatch(append(zstdMagicNumber, rawBytes[1:]...))
 	// ...
```

Similarly, handle the case for uncompressed data.
```go
		dataHash,               // dataHash
		blobVersionedHash,      // blobVersionedHash
		batch.ParentBatchHash,  // parentBatchHash
		skippedL1MessageBitmap, // skippedL1MessageBitmap
		blob,                   // blob
		z,                      // z
	)
```
Handle Errors Appropriately When Constructing Blob Payload
In the constructBlobPayload method, ensure that all possible errors, especially those related to blob size calculations and index manipulations, are properly handled to prevent runtime panics or out-of-bounds errors.
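A self-contained sketch of the guard pattern being suggested: validate computed offsets and sizes before indexing, so malformed input surfaces as an error instead of a panic. The helper below is illustrative and not part of the codec.

```go
package main

import (
	"errors"
	"fmt"
)

// copySection writes src into dst at the given offset only if the bounds are valid.
func copySection(dst, src []byte, offset, size int) error {
	if offset < 0 || size < 0 || offset+size > len(dst) {
		return fmt.Errorf("destination section [%d:%d) out of range (len %d)", offset, offset+size, len(dst))
	}
	if size > len(src) {
		return errors.New("source shorter than requested section size")
	}
	copy(dst[offset:offset+size], src[:size])
	return nil
}

func main() {
	blob := make([]byte, 8)
	if err := copySection(blob, []byte{1, 2, 3, 4}, 6, 4); err != nil {
		fmt.Println("rejected:", err) // out-of-range write is reported, not panicked on
	}
}
```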
```go
// getNextTx parses blob bytes to find length of payload of next Tx and decode it
func getNextTx(bytes []byte, index int) (*types.Transaction, int, error) {
	var nextIndex int
	length := len(bytes)
	if length < index+1 {
		return nil, 0, errSmallLength
	}
	var txBytes []byte
	if bytes[index] <= 0x7f {
		// the first byte is transaction type, rlp encoding begins from next byte
		txBytes = append(txBytes, bytes[index])
		index++
	}
	if length < index+1 {
		return nil, 0, errSmallLength
	}
	if bytes[index] >= 0xc0 && bytes[index] <= 0xf7 {
		// length of payload is simply bytes[index] - 0xc0
		payloadLen := int(bytes[index] - 0xc0)
		if length < index+1+payloadLen {
			return nil, 0, errSmallLength
		}
		txBytes = append(txBytes, bytes[index:index+1+payloadLen]...)
		nextIndex = index + 1 + payloadLen
	} else if bytes[index] > 0xf7 {
		// the length of payload is encoded in next bytes[index] - 0xf7 bytes
		// length of bytes representation of length of payload
		lenPayloadLen := int(bytes[index] - 0xf7)
		if length < index+1+lenPayloadLen {
			return nil, 0, errSmallLength
		}
		lenBytes := bytes[index+1 : index+1+lenPayloadLen]
		for len(lenBytes) < 8 {
			lenBytes = append([]byte{0x0}, lenBytes...)
		}
		payloadLen := binary.BigEndian.Uint64(lenBytes)

		if length < index+1+lenPayloadLen+int(payloadLen) {
			return nil, 0, errSmallLength
		}
		txBytes = append(txBytes, bytes[index:index+1+lenPayloadLen+int(payloadLen)]...)
		nextIndex = index + 1 + lenPayloadLen + int(payloadLen)
	} else {
		return nil, 0, fmt.Errorf("incorrect format of rlp encoding")
	}
	tx := &types.Transaction{}
	err := tx.UnmarshalBinary(txBytes)
	if err != nil {
		return nil, 0, fmt.Errorf("failed to unmarshal tx, err: %w", err)
	}
	return tx, nextIndex, nil
}
```
Prevent Potential Integer Overflows in getNextTx
When parsing the payload length from untrusted data in getNextTx, there is a risk of integer overflows, which could lead to panics or unexpected behavior. Add checks to validate the payload length and ensure it does not exceed the remaining bytes.
Apply this diff to add overflow checks:
```diff
 lenPayloadLen := int(bytes[index] - 0xf7)
 if length < index+1+lenPayloadLen {
 	return nil, 0, errSmallLength
 }
 lenBytes := bytes[index+1 : index+1+lenPayloadLen]
 for len(lenBytes) < 8 {
 	lenBytes = append([]byte{0x0}, lenBytes...)
 }
 payloadLen := binary.BigEndian.Uint64(lenBytes)
+if payloadLen > uint64(len(bytes)-(index+1+lenPayloadLen)) {
+	return nil, 0, fmt.Errorf("payload length exceeds available bytes")
+}
```
```go
	encodedBytes, err := daChunkV0.Encode()
	assert.NoError(t, err)
	encoded := hex.EncodeToString(encodedBytes)
	assert.Equal(t, "01000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", encoded)
```
Check for Nil Slice Before Encoding
Before calling hex.EncodeToString(encodedBytes), ensure that encodedBytes is not nil or empty. hex.EncodeToString on a nil slice does not panic, but it silently produces an empty string, which would make the subsequent equality assertion misleading rather than failing fast.
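In a testify-based test this can be as simple as require.NotEmpty(t, encodedBytes) before the comparison; the standalone sketch below illustrates the same guard:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	var encodedBytes []byte // e.g. the result of an Encode() call that produced nothing

	// hex.EncodeToString(nil) returns "" rather than panicking, so the guard's real
	// purpose is to fail fast instead of comparing against an empty string.
	if len(encodedBytes) == 0 {
		fmt.Println("refusing to compare an empty encoding")
		return
	}
	fmt.Println(hex.EncodeToString(encodedBytes))
}
```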
| func TestCodecV4BlockEncode(t *testing.T) { | ||
| codecv4, err := CodecFromVersion(CodecV4) | ||
| require.NoError(t, err) | ||
|
|
||
| block := &daBlockV0{} | ||
| encoded := hex.EncodeToString(block.Encode()) | ||
| assert.Equal(t, "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", encoded) | ||
|
|
||
| block2 := readBlockFromJSON(t, "testdata/blockTrace_02.json") | ||
| daBlock, err := codecv4.NewDABlock(block2, 0) | ||
| assert.NoError(t, err) | ||
| encoded = hex.EncodeToString(daBlock.Encode()) | ||
| assert.Equal(t, "00000000000000020000000063807b2a0000000000000000000000000000000000000000000000000000000000001de9000355418d1e818400020000", encoded) | ||
|
|
||
| block3 := readBlockFromJSON(t, "testdata/blockTrace_03.json") | ||
| daBlock, err = codecv4.NewDABlock(block3, 0) | ||
| assert.NoError(t, err) | ||
| encoded = hex.EncodeToString(daBlock.Encode()) | ||
| assert.Equal(t, "00000000000000030000000063807b2d0000000000000000000000000000000000000000000000000000000000001a2c0003546c3cbb39e500010000", encoded) | ||
|
|
||
| block4 := readBlockFromJSON(t, "testdata/blockTrace_04.json") | ||
| daBlock, err = codecv4.NewDABlock(block4, 0) | ||
| assert.NoError(t, err) | ||
| encoded = hex.EncodeToString(daBlock.Encode()) | ||
| assert.Equal(t, "000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a1200000c000b", encoded) | ||
|
|
||
| block5 := readBlockFromJSON(t, "testdata/blockTrace_05.json") | ||
| daBlock, err = codecv4.NewDABlock(block5, 0) | ||
| assert.NoError(t, err) | ||
| encoded = hex.EncodeToString(daBlock.Encode()) | ||
| assert.Equal(t, "000000000000001100000000646b6ed0000000000000000000000000000000000000000000000000000000000000000000000000007a1200002a002a", encoded) | ||
|
|
||
| block6 := readBlockFromJSON(t, "testdata/blockTrace_06.json") | ||
| daBlock, err = codecv4.NewDABlock(block6, 0) | ||
| assert.NoError(t, err) | ||
| encoded = hex.EncodeToString(daBlock.Encode()) | ||
| assert.Equal(t, "000000000000001100000000646b6ed0000000000000000000000000000000000000000000000000000000000000000000000000007a1200000a000a", encoded) | ||
|
|
||
| block7 := readBlockFromJSON(t, "testdata/blockTrace_07.json") | ||
| daBlock, err = codecv4.NewDABlock(block7, 0) | ||
| assert.NoError(t, err) | ||
| encoded = hex.EncodeToString(daBlock.Encode()) | ||
| assert.Equal(t, "000000000000001100000000646b6ed0000000000000000000000000000000000000000000000000000000000000000000000000007a120001010101", encoded) | ||
|
|
||
| codecv0, err := CodecFromVersion(CodecV0) | ||
| require.NoError(t, err) | ||
|
|
||
| // sanity check: v0 and v4 block encodings are identical | ||
| for _, trace := range []*Block{block2, block3, block4, block5, block6, block7} { | ||
| blockv0, err := codecv0.NewDABlock(trace, 0) | ||
| assert.NoError(t, err) | ||
| encodedv0 := hex.EncodeToString(blockv0.Encode()) | ||
|
|
||
| blockv4, err := codecv4.NewDABlock(trace, 0) | ||
| assert.NoError(t, err) | ||
| encodedv4 := hex.EncodeToString(blockv4.Encode()) | ||
|
|
||
| assert.Equal(t, encodedv0, encodedv4) | ||
| } | ||
| } | ||
|
|
||
func TestCodecV4ChunkEncode(t *testing.T) {
    codecv4, err := CodecFromVersion(CodecV4)
    require.NoError(t, err)

    // chunk with a single empty block
    daBlock := &daBlockV0{}
    daChunkV1 := &daChunkV1{blocks: []DABlock{daBlock}, transactions: [][]*types.TransactionData{nil}}

    encodedBytes, err := daChunkV1.Encode()
    assert.NoError(t, err)
    encoded := hex.EncodeToString(encodedBytes)
    assert.Equal(t, "01000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", encoded)

    // transactions are not part of the encoding
    daChunkV1.transactions[0] = append(daChunkV1.transactions[0], &types.TransactionData{Type: types.L1MessageTxType}, &types.TransactionData{Type: types.DynamicFeeTxType})
    encodedBytes, err = daChunkV1.Encode()
    assert.NoError(t, err)
    encoded = hex.EncodeToString(encodedBytes)
    assert.Equal(t, "01000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", encoded)

    block := readBlockFromJSON(t, "testdata/blockTrace_02.json")
    originalChunk := &Chunk{Blocks: []*Block{block}}
    daChunk, err := codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    encodedBytes, err = daChunk.Encode()
    assert.NoError(t, err)
    encoded = hex.EncodeToString(encodedBytes)
    assert.Equal(t, "0100000000000000020000000063807b2a0000000000000000000000000000000000000000000000000000000000001de9000355418d1e818400020000", encoded)

    block = readBlockFromJSON(t, "testdata/blockTrace_03.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    encodedBytes, err = daChunk.Encode()
    assert.NoError(t, err)
    encoded = hex.EncodeToString(encodedBytes)
    assert.Equal(t, "0100000000000000030000000063807b2d0000000000000000000000000000000000000000000000000000000000001a2c0003546c3cbb39e500010000", encoded)

    block = readBlockFromJSON(t, "testdata/blockTrace_04.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    encodedBytes, err = daChunk.Encode()
    assert.NoError(t, err)
    encoded = hex.EncodeToString(encodedBytes)
    assert.Equal(t, "01000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a1200000c000b", encoded)

    block = readBlockFromJSON(t, "testdata/blockTrace_05.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    encodedBytes, err = daChunk.Encode()
    assert.NoError(t, err)
    encoded = hex.EncodeToString(encodedBytes)
    assert.Equal(t, "01000000000000001100000000646b6ed0000000000000000000000000000000000000000000000000000000000000000000000000007a1200002a002a", encoded)

    block = readBlockFromJSON(t, "testdata/blockTrace_06.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    encodedBytes, err = daChunk.Encode()
    assert.NoError(t, err)
    encoded = hex.EncodeToString(encodedBytes)
    assert.Equal(t, "01000000000000001100000000646b6ed0000000000000000000000000000000000000000000000000000000000000000000000000007a1200000a000a", encoded)

    block = readBlockFromJSON(t, "testdata/blockTrace_07.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    encodedBytes, err = daChunk.Encode()
    assert.NoError(t, err)
    encoded = hex.EncodeToString(encodedBytes)
    assert.Equal(t, "01000000000000001100000000646b6ed0000000000000000000000000000000000000000000000000000000000000000000000000007a120001010101", encoded)
}

func TestCodecV4ChunkHash(t *testing.T) {
    codecv4, err := CodecFromVersion(CodecV4)
    require.NoError(t, err)

    // chunk with a single empty block
    daBlock := &daBlockV0{}
    chunk := &daChunkV1{blocks: []DABlock{daBlock}, transactions: [][]*types.TransactionData{nil}}
    hash, err := chunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0x7cdb9d7f02ea58dfeb797ed6b4f7ea68846e4f2b0e30ed1535fc98b60c4ec809", hash.Hex())

    // L1 transactions are part of the hash
    chunk.transactions[0] = append(chunk.transactions[0], &types.TransactionData{Type: types.L1MessageTxType, TxHash: "0x0000000000000000000000000000000000000000000000000000000000000000"})
    hash, err = chunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0xdcb42a70c54293e75a19dd1303d167822182d78b361dd7504758c35e516871b2", hash.Hex())

    // L2 transactions are not part of the hash
    chunk.transactions[0] = append(chunk.transactions[0], &types.TransactionData{Type: types.DynamicFeeTxType, TxHash: "0x0000000000000000000000000000000000000000000000000000000000000000"})
    hash, err = chunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0xdcb42a70c54293e75a19dd1303d167822182d78b361dd7504758c35e516871b2", hash.Hex())

    // numL1Messages are not part of the hash
    daBlock = chunk.blocks[0].(*daBlockV0)
    daBlock.numL1Messages = 1
    chunk.blocks[0] = daBlock

    hash, err = chunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0xdcb42a70c54293e75a19dd1303d167822182d78b361dd7504758c35e516871b2", hash.Hex())

    // invalid hash
    chunk.transactions[0] = append(chunk.transactions[0], &types.TransactionData{Type: types.L1MessageTxType, TxHash: "0xg"})
    _, err = chunk.Hash()
    assert.Error(t, err)

    block := readBlockFromJSON(t, "testdata/blockTrace_02.json")
    originalChunk := &Chunk{Blocks: []*Block{block}}
    daChunk, err := codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    hash, err = daChunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0x820f25d806ddea0ccdbfa463ee480da5b6ea3906e8a658417fb5417d0f837f5c", hash.Hex())

    block = readBlockFromJSON(t, "testdata/blockTrace_03.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    hash, err = daChunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0x4620b3900e8454133448b677cbb2054c5dd61d467d7ebf752bfb12cffff90f40", hash.Hex())

    block = readBlockFromJSON(t, "testdata/blockTrace_04.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    hash, err = daChunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0x059c6451e83012b405c7e1a38818369012a4a1c87d7d699366eac946d0410d73", hash.Hex())

    block = readBlockFromJSON(t, "testdata/blockTrace_05.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    hash, err = daChunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0x854fc3136f47ce482ec85ee3325adfa16a1a1d60126e1c119eaaf0c3a9e90f8e", hash.Hex())

    block = readBlockFromJSON(t, "testdata/blockTrace_06.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    hash, err = daChunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0x2aa220ca7bd1368e59e8053eb3831e30854aa2ec8bd3af65cee350c1c0718ba6", hash.Hex())

    block = readBlockFromJSON(t, "testdata/blockTrace_07.json")
    originalChunk = &Chunk{Blocks: []*Block{block}}
    daChunk, err = codecv4.NewDAChunk(originalChunk, 0)
    assert.NoError(t, err)
    hash, err = daChunk.Hash()
    assert.NoError(t, err)
    assert.Equal(t, "0xb65521bea7daff75838de07951c3c055966750fb5a270fead5e0e727c32455c3", hash.Hex())
}

func TestCodecV4BatchEncode(t *testing.T) {
    codecv4, err := CodecFromVersion(CodecV4)
    require.NoError(t, err)

    // empty daBatch
    daBatchV3 := &daBatchV3{
        daBatchV0: daBatchV0{
            version: CodecV4,
        },
    }
    encoded := hex.EncodeToString(daBatchV3.Encode())
    assert.Equal(t, "04000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", encoded)

    block2 := readBlockFromJSON(t, "testdata/blockTrace_02.json")
    chunk2 := &Chunk{Blocks: []*Block{block2}}
    batch := &Batch{Chunks: []*Chunk{chunk2}}
    daBatch, err := codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "040000000000000000000000000000000000000000000000009f81f6879f121da5b7a37535cdb21b3d53099266de57b1fdf603ce32100ed54101e5c897e0f98f6addd6c99bb51ff927cde93851b0d407aae3d7d5de75a31f2900000000000000000000000000000000000000000000000000000000000000000000000063807b2a26451ed31542ed15543973f8bc8c3b6382ba0cba5650a7faf14625377029203c1b6db22aa24613cb68dee10ca50bbbc88fc15b8a6abf9dcf3ad382a2642e480d", encoded)

    block3 := readBlockFromJSON(t, "testdata/blockTrace_03.json")
    chunk3 := &Chunk{Blocks: []*Block{block3}}
    batch = &Batch{Chunks: []*Chunk{chunk3}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "04000000000000000000000000000000000000000000000000d46d19f6d48083dc7905a68e6a20ea6a8fbcd445d56b549b324a8485b5b574a601ad8c8eee24cc98ab1ca9c0a4c92bf20f488f06dedbc22f1312bd389df7105000000000000000000000000000000000000000000000000000000000000000000000000063807b2d30702c0ea39553a0601a9c6fc5b27c076ddfc1044001fb0a8ad1fd9016304a61233de2770e0fb9a5578e5f633846ef9fa4c2ab8b80b8f9a30f09be07cda8d725", encoded)

    block4 := readBlockFromJSON(t, "testdata/blockTrace_04.json")
    chunk4 := &Chunk{Blocks: []*Block{block4}}
    batch = &Batch{Chunks: []*Chunk{chunk4}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "040000000000000000000000000000000b000000000000000bcaece1705bf2ce5e94154469d910ffe8d102419c5eb3152c0c6d237cf35c885f01c6a9a7d06425dbfad42697e4ce5bc8562d7c5ffe1f62d57fcb51240e33af93000000000000000000000000000000000000000000000000000000000000000000000000646b6e1338122423f3cebb92645f9ac93c8ee50edb75ea93a951f278007e721a7b9f995824895b00195499dfe77d201cf3627050d866abb2685f87e10466c4fcaf3a8588", encoded)

    block5 := readBlockFromJSON(t, "testdata/blockTrace_05.json")
    chunk5 := &Chunk{Blocks: []*Block{block5}}
    batch = &Batch{Chunks: []*Chunk{chunk5}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "040000000000000000000000000000002a000000000000002a93255aa24dd468c5645f1e6901b8131a7a78a0eeb2a17cbb09ba64688a8de6b4016ac24dabb9e1bbb3ec3c65b50a829564c2f56160ba92fbdb03ed7e4a0c439a000000000000000000000000000000000000000000000000000000000000000000000000646b6ed004e124536a56f650b0994e58647e59087bf99ecadbd7bc730ad6290f229fb0715885a06aad250ef3594c65a7a6a0e282175b1ad4d8b4063dac48e282bb5a9213", encoded)

    block6 := readBlockFromJSON(t, "testdata/blockTrace_06.json")
    chunk6 := &Chunk{Blocks: []*Block{block6}}
    batch = &Batch{Chunks: []*Chunk{chunk6}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "040000000000000000000000000000000a000000000000000ac7bcc8da943dd83404e84d9ce7e894ab97ce4829df4eb51ebbbe13c90b5a3f4d016ac24dabb9e1bbb3ec3c65b50a829564c2f56160ba92fbdb03ed7e4a0c439a000000000000000000000000000000000000000000000000000000000000000000000000646b6ed004e124536a56f650b0994e58647e59087bf99ecadbd7bc730ad6290f229fb0715885a06aad250ef3594c65a7a6a0e282175b1ad4d8b4063dac48e282bb5a9213", encoded)

    block7 := readBlockFromJSON(t, "testdata/blockTrace_07.json")
    chunk7 := &Chunk{Blocks: []*Block{block7}}
    batch = &Batch{Chunks: []*Chunk{chunk7}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "04000000000000000000000000000001010000000000000101899a411a3309c6491701b7b955c7b1115ac015414bbb71b59a0ca561668d5208016ac24dabb9e1bbb3ec3c65b50a829564c2f56160ba92fbdb03ed7e4a0c439a000000000000000000000000000000000000000000000000000000000000000000000000646b6ed004e124536a56f650b0994e58647e59087bf99ecadbd7bc730ad6290f229fb0715885a06aad250ef3594c65a7a6a0e282175b1ad4d8b4063dac48e282bb5a9213", encoded)

    batch = &Batch{Chunks: []*Chunk{chunk2, chunk3, chunk4, chunk5}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "040000000000000000000000000000002a000000000000002ae7740182b0948139505b6b296d0c6c6f7717708323e6e687917acad823b559d80113ba3d5c53a035f4b4ec6f8a2ba9ab521bccab9f90e3a713ab5fffc0adec57000000000000000000000000000000000000000000000000000000000000000000000000646b6ed012e49b70b64652e5cab5dfdd1f58958d863de1d7fcb959e09f147a98b0b895171560f81b17ec3a2fe1c8ed2d308ca5bf002d7e3c18db9682a8d0f5379bf213aa", encoded)

    chunk8 := &Chunk{Blocks: []*Block{block2, block3, block4}}
    chunk9 := &Chunk{Blocks: []*Block{block5}}
    batch = &Batch{Chunks: []*Chunk{chunk8, chunk9}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    encoded = hex.EncodeToString(daBatch.Encode())
    assert.Equal(t, "040000000000000000000000000000002a000000000000002a9b0f37c563d27d9717ab16d47075df996c54fe110130df6b11bfd7230e1347670121388d141bd439af8447db5d00bacbfe1587fea6581f795e98588d95ba7f26000000000000000000000000000000000000000000000000000000000000000000000000646b6ed046aedf214a661b6b37b9c325fef4484ff3613a6fb52719609bf02a66bc7ba23b6e9b7bcbe3be0ba95654f16f715bf7e39ef87a84199340423f6487cf56058085", encoded)
}

func TestCodecV4BatchHash(t *testing.T) {
    codecv4, err := CodecFromVersion(CodecV4)
    require.NoError(t, err)

    // empty daBatch
    daBatchV3 := &daBatchV3{
        daBatchV0: daBatchV0{
            version: CodecV4,
        },
    }
    assert.Equal(t, common.HexToHash("0xdaf0827d02b32d41458aea0d5796dd0072d0a016f9834a2cb1a964d2c6ee135c"), daBatchV3.Hash())

    block2 := readBlockFromJSON(t, "testdata/blockTrace_02.json")
    chunk2 := &Chunk{Blocks: []*Block{block2}}
    batch := &Batch{Chunks: []*Chunk{chunk2}}
    daBatch, err := codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0x53d6da35c9b6f0413b6ebb80f4a8c19b0e3279481ddf602398a54d3b4e5d4f2c"), daBatch.Hash())

    block3 := readBlockFromJSON(t, "testdata/blockTrace_03.json")
    chunk3 := &Chunk{Blocks: []*Block{block3}}
    batch = &Batch{Chunks: []*Chunk{chunk3}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0x08feefdb19215bb0f51f85a3b02a0954ac7da67681e274db49b9102f4c6e0857"), daBatch.Hash())

    block4 := readBlockFromJSON(t, "testdata/blockTrace_04.json")
    chunk4 := &Chunk{Blocks: []*Block{block4}}
    batch = &Batch{Chunks: []*Chunk{chunk4}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0xc56c5e51993342232193d1d93124bae30a5b1444eebf49b2dd5f2c5962d4d54d"), daBatch.Hash())

    block5 := readBlockFromJSON(t, "testdata/blockTrace_05.json")
    chunk5 := &Chunk{Blocks: []*Block{block5}}
    batch = &Batch{Chunks: []*Chunk{chunk5}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0x2c32177c8b4c6289d977361c7fd0f1a6ea15add64da2eb8caf0420ac9b35231e"), daBatch.Hash())

    block6 := readBlockFromJSON(t, "testdata/blockTrace_06.json")
    chunk6 := &Chunk{Blocks: []*Block{block6}}
    batch = &Batch{Chunks: []*Chunk{chunk6}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0x909bebbebdbf5ba9c85c6894e839c0b044d2878c457c4942887e3d64469ad342"), daBatch.Hash())

    block7 := readBlockFromJSON(t, "testdata/blockTrace_07.json")
    chunk7 := &Chunk{Blocks: []*Block{block7}}
    batch = &Batch{Chunks: []*Chunk{chunk7}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0x53765a37bbd72655df586b530d79cb4ad0fb814d72ddc95e01e0ede579f45117"), daBatch.Hash())

    batch = &Batch{Chunks: []*Chunk{chunk2, chunk3, chunk4, chunk5}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0x74ccf9cc265f423cc6e6e53ed294000637a832cdc93c76485855289bebb6764a"), daBatch.Hash())

    chunk8 := &Chunk{Blocks: []*Block{block2, block3, block4}}
    chunk9 := &Chunk{Blocks: []*Block{block5}}
    batch = &Batch{Chunks: []*Chunk{chunk8, chunk9}}
    daBatch, err = codecv4.NewDABatch(batch)
    assert.NoError(t, err)
    assert.Equal(t, common.HexToHash("0x8d5ee00a80d7dbdc083d0cdedd35c2cb722e5944f9d88f7450c9186f3ef3da44"), daBatch.Hash())
}
🛠️ Refactor suggestion
Refactor to avoid duplication between codecv3_test.go and codecv4_test.go
The test files for CodecV3 and CodecV4 contain nearly identical code, differing mainly in the codec version used. To reduce duplication and improve maintainability, consider parameterizing the tests to run for both codec versions or abstracting common functionality into shared helper functions.
Suggestion:
- Create a generic test function that accepts the codec version as a parameter.
- Use subtests to run the same tests for different codec versions.
Example:
func TestCodecBlockEncode(t *testing.T) {
    codecs := []struct {
        name  string
        codec Codec
    }{
        {"CodecV3", codecv3},
        {"CodecV4", codecv4},
    }
    for _, c := range codecs {
        t.Run(c.name, func(t *testing.T) {
            // Common test logic using c.codec
        })
    }
}
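A somewhat fuller sketch of the same pattern, reusing the CodecFromVersion, readBlockFromJSON, and NewDABlock calls already exercised in this file; the test name, the CodecVersion element type in the table, and the final assertion are illustrative placeholders rather than the actual refactor:

func TestCodecBlockEncodeShared(t *testing.T) {
    testCases := []struct {
        name    string
        version CodecVersion
    }{
        {"CodecV3", CodecV3},
        {"CodecV4", CodecV4},
    }
    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            // Construct the codec for this version inside the subtest,
            // mirroring the per-version setup in codecv3_test.go and codecv4_test.go.
            codec, err := CodecFromVersion(tc.version)
            require.NoError(t, err)

            block := readBlockFromJSON(t, "testdata/blockTrace_02.json")
            daBlock, err := codec.NewDABlock(block, 0)
            assert.NoError(t, err)

            // Shared structural checks; per-version expected hex strings would
            // still be supplied wherever the v3 and v4 encodings differ.
            assert.NotEmpty(t, daBlock.Encode())
        })
    }
}

Assertions whose expected values differ between v3 and v4 (for example, the version byte at the start of the encoded batch) would stay parameterized per version; only the genuinely identical checks belong in the shared body.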
Wrong base, opened #32 instead.
Purpose or design rationale of this PR
This PR downgrades to l2geth latest develop and fixes all incompatibilities.
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Breaking change label
Does this PR have the breaking-change label?

Summary by CodeRabbit

New Features
- Interfaces DABlock, DAChunk, and DABatch to standardize data handling (illustrated below).

Bug Fixes

Tests

Chores
- Updated the go.mod file to ensure compatibility and introduce new functionalities.
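For orientation, a rough sketch of how these interfaces fit together, using only calls exercised in the tests above (the wrapper function name is hypothetical and error handling is abbreviated):

func buildDABatchSketch(t *testing.T, block *Block) {
    codec, err := CodecFromVersion(CodecV4)
    require.NoError(t, err)

    // A Chunk groups blocks; the codec turns it into a DAChunk.
    chunk := &Chunk{Blocks: []*Block{block}}
    daChunk, err := codec.NewDAChunk(chunk, 0)
    require.NoError(t, err)
    _, err = daChunk.Hash()
    require.NoError(t, err)

    // A Batch groups chunks; the codec turns it into a DABatch
    // that can be encoded and hashed.
    batch := &Batch{Chunks: []*Chunk{chunk}}
    daBatch, err := codec.NewDABatch(batch)
    require.NoError(t, err)
    _ = daBatch.Encode()
    _ = daBatch.Hash()
}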