Merged
4 changes: 2 additions & 2 deletions docs/contributing.md
@@ -6,7 +6,7 @@ The EigenDA repo is organized as a monorepo, with each project adhering to the "

The same pattern is used for intra-project and inter-project dependencies. For instance, the folder `indexer/indexer` contains implementations of the interfaces in `core` which depend on the `indexer` project.

-In general, the `core` project contains implementation all of the important business logic responsible for the security guarantees of the EigenDA protocol, while the other projects add the networking layers needed to run the distributed system.
+In general, the `core` project contains implementation of all the important business logic responsible for the security guarantees of the EigenDA protocol, while the other projects add the networking layers needed to run the distributed system.


# Directory structure
@@ -17,7 +17,7 @@ In general, the `core` project contains implementation all of the important busi
┌── <a href="./core">core</a>: Core logic of the EigenDA protocol
├── <a href="./disperser">disperser</a>: Disperser service
├── <a href="./docs">docs</a>: Documentation and specification
-├── <a href="./indexer">indexer</a>: A simple indexer for efficently tracking chain state and maintaining accumulators
+├── <a href="./indexer">indexer</a>: A simple indexer for efficiently tracking chain state and maintaining accumulators
├── <a href="./node">node</a>: DA node service
├── <a href="./pkg">pkg</a>
| ├── <a href="./pkg/encoding">encoding</a>: Core encoding/decoding functionality and multiproof generation
8 changes: 4 additions & 4 deletions docs/design/assignment.md
@@ -33,7 +33,7 @@ and any $U_a \subseteq U_q$ such
$$ \sum_{i \in U_a} S_i \le \alpha \sum_{i \in O}S_i$$


-we need to be able to reconstuct from $U_q \setminus U_a$. But we can see that the total stake held by this group will satisfy
+we need to be able to reconstruct from $U_q \setminus U_a$. But we can see that the total stake held by this group will satisfy

$$
\sum_{i \in U_q \setminus U_a} S_i = \sum_{i \in U_q}S_i - \sum_{i \in U_a}S_i \ge (\beta-\alpha)\sum_{i \in O}S_i = \gamma \sum_{i \in O}S_i.
$$
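The inequality above can be sanity-checked numerically. A minimal sketch — the stake values and thresholds below are hypothetical, not protocol constants:

```go
package main

import "fmt"

// canReconstruct checks whether the stake remaining after removing an
// adversarial set from a quorum still meets the reconstruction threshold
// gamma = beta - alpha. All stakes are absolute; thresholds are ratios
// of the total stake held by the operator set O.
func canReconstruct(total, quorum, adversarial, alpha, beta float64) bool {
	// Given quorum >= beta*total and adversarial <= alpha*total, it
	// follows that quorum - adversarial >= (beta - alpha)*total.
	gamma := beta - alpha
	return quorum-adversarial >= gamma*total
}

func main() {
	// Illustrative numbers: total stake 100, quorum holds 67, adversary 33.
	fmt.Println(canReconstruct(100, 67, 33, 0.33, 0.67)) // true
}
```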
@@ -55,7 +55,7 @@ $$\max_{\{S_j:j\in O\}} \gamma\frac{B_i - \tilde{B}_i}{B} \le 1/n.$$

### 3. Minimizes encoding complexity

-The system should minimize coding and verification computational complexity for both the disperser and operators. The computational complexity roughly scales with the number of chunks (or more specifcally, inversely with the chunk size) [clarification required]. Thus, the system should minimize the number of chunks, subject to requirements 1 and 2.
+The system should minimize coding and verification computational complexity for both the disperser and operators. The computational complexity roughly scales with the number of chunks (or more specifically, inversely with the chunk size) [clarification required]. Thus, the system should minimize the number of chunks, subject to requirements 1 and 2.

## Proposed solution

@@ -122,6 +122,6 @@ Moreover, the optimization routine described for finding $m$ will serve only to

## FAQs

-Q1. Can increasing the number of parity symbols increase the total degree of the polynomial, resulting in greator coding complexity.
+Q1. Can increasing the number of parity symbols increase the total degree of the polynomial, resulting in greater coding complexity?

-A1. This seems like a possibility. In general, interactions with constraints of the proving system are not covered here. However, if this is a concern it should be possible to adjust block size constraints accordingly to avoid pushing over some limit.
+A1. This seems like a possibility. In general, interactions with constraints of the proving system are not covered here. However, if this is a concern it should be possible to adjust block size constraints accordingly to avoid pushing over some limit.
4 changes: 2 additions & 2 deletions docs/design/encoding.md
@@ -9,7 +9,7 @@ We will also highlight the additional constraints on the Encoding interface whic

## Deriving the polynomial coefficients and commitment

-As described in the [Encoding Module Specification](../spec/protocol-modules/storage/encoding.md), given a blob of data, we convert the blob to a polynomial $p(X) = \sum_{i=0}^{m-1} c_iX^i$ by simply slicing the data into a string of symbols, and interpretting this list of symbols as the tuple $(c_i)_{i=0}^{m-1}$.
+As described in the [Encoding Module Specification](../spec/protocol-modules/storage/encoding.md), given a blob of data, we convert the blob to a polynomial $p(X) = \sum_{i=0}^{m-1} c_iX^i$ by simply slicing the data into a string of symbols, and interpreting this list of symbols as the tuple $(c_i)_{i=0}^{m-1}$.

In the case of the KZG-FFT encoder, the polynomial lives on the field associated with the BN-254 elliptic curve, which has order [TODO: fill in order].

@@ -39,7 +39,7 @@ As the encoding interface calls for the construction of `NumChunks` Chunks of le

The construction of the multireveal proofs can also be performed using a DFT (as in [“Fast Amortized Kate Proofs”](https://eprint.iacr.org/2023/033.pdf)). Leaving the full details of this process to the referenced document, we describe here only 1) the index-assignment scheme used by the amortized multiproof generation approach and 2) the constraints that this creates for the overall encoder interface.

-Given the group $S$ corresponding to the indices of the polynomial evalutions and a cyclic group $C$ which is a subgroup of $S$, the cosets of $C$ in $S$ are given by
+Given the group $S$ corresponding to the indices of the polynomial evaluations and a cyclic group $C$ which is a subgroup of $S$, the cosets of $C$ in $S$ are given by

$$
s+C = \{s+c : c \in C\} \text{ for } s \in S.
$$
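For concreteness, the coset decomposition can be enumerated for a small additive group. A sketch using $\mathbb{Z}_n$ as a stand-in for the index group (the real scheme works with roots of unity; this just shows the partition):

```go
package main

import "fmt"

// cosetsZn lists the distinct cosets s+C of the order-k cyclic subgroup
// C of the additive group Z_n (k must divide n). C is generated by n/k,
// so C = {0, n/k, 2n/k, ...}, and there are n/k distinct cosets.
func cosetsZn(n, k int) [][]int {
	step := n / k
	var out [][]int
	for s := 0; s < step; s++ { // one representative per coset
		coset := make([]int, 0, k)
		for c := 0; c < n; c += step {
			coset = append(coset, (s+c)%n)
		}
		out = append(out, coset)
	}
	return out
}

func main() {
	// Z_8 with |C| = 4: C = {0,2,4,6}, giving cosets {0,2,4,6} and {1,3,5,7}.
	fmt.Println(cosetsZn(8, 4)) // [[0 2 4 6] [1 3 5 7]]
}
```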
4 changes: 2 additions & 2 deletions docs/spec/components/indexer.md
@@ -13,9 +13,9 @@ An accumulator can optionally define a custom method for initializing the accumu

The indexer is one of the only stateful components of the operator. To avoid reindexing on restarts, the state of the indexer is stored in a database. We will use a schemaless db to avoid migrations.

-The indexer must also support reorg resistence. We can achieve simple reorg resilience in the following way:
+The indexer must also support reorg resistance. We can achieve simple reorg resilience in the following way:
- For every accumulator, we make sure to store history long enough that we always have access to a finalized state.
-- In the event reorg is detected, we can revert to most recent finalized state, and then reindex to head.
+- In the event a reorg is detected, we can revert to the most recent finalized state, and then reindex to head.
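The revert step can be sketched as follows. The `State` type and its fields are hypothetical stand-ins for whatever per-accumulator history the indexer actually stores:

```go
package main

import "fmt"

// State is a hypothetical snapshot of an accumulator at a block height.
type State struct {
	BlockNumber uint64
	Finalized   bool
}

// revertToFinalized drops all non-finalized history above the most
// recent finalized state; reindexing then resumes from that point.
// Returns nil if no finalized state was retained.
func revertToFinalized(history []State) []State {
	for i := len(history) - 1; i >= 0; i-- {
		if history[i].Finalized {
			return history[:i+1]
		}
	}
	return nil // caller must re-initialize from scratch
}

func main() {
	h := []State{{100, true}, {101, false}, {102, false}}
	fmt.Println(len(revertToFinalized(h))) // 1 (only block 100 survives)
}
```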

The indexer needs to accommodate upgrades to the smart contract interfaces. Contract upgrades can have the following effects on interfaces:
- Addition, removal, modification of events
4 changes: 2 additions & 2 deletions docs/spec/components/node.md
@@ -28,7 +28,7 @@ When the `StoreChunks` method is called, the node performs the following checks:
1. Check that all payments are correct (See [Payment Constraints](./node-payments.md)).
2. Check that its own chunks are correct (See [Blob Encoding Constraints](./node-encoding.md))

-Provided that both checks are successful, the node will sign the concatentation of the paymentRoot and blobRoot using the BLS key registered with the `BLSRegistry` and then return the signature.
+Provided that both checks are successful, the node will sign the concatenation of the paymentRoot and blobRoot using the BLS key registered with the `BLSRegistry` and then return the signature.



@@ -43,4 +43,4 @@ The DA Node utilizes an adapter on top of the [Indexer](./indexer.md) interface
type IndexerAdapter interface{
GetStateView(ctx context.Context, blockNumber uint32) (*StateView, error)
}
-```
+```
4 changes: 2 additions & 2 deletions docs/spec/definitions.md
@@ -9,7 +9,7 @@

## System Components

-**DA Node**. The DA Node is an off-chain component which is run by an EigenLayer operator. EigenDA operators are responsible for accepting data from the disperser, certifying its availability, and following a protocol for distributing the data to registered retreivers. It is assumed that honest DA nodes are delegated a threshold proportion of the stake from EigenLayer restakers, where this threshold may be defined per DA end-user.
+**DA Node**. The DA Node is an off-chain component which is run by an EigenLayer operator. EigenDA operators are responsible for accepting data from the disperser, certifying its availability, and following a protocol for distributing the data to registered retrievers. It is assumed that honest DA nodes are delegated a threshold proportion of the stake from EigenLayer restakers, where this threshold may be defined per DA end-user.

**Disperser**. The Disperser is an off-chain component which is responsible for packaging data blobs in a specific way, distributing their data among the DA nodes, aggregating certifications from the nodes, and then pushing the aggregated certificate to the chain. The disperser is an untrusted system component.

@@ -19,4 +19,4 @@

## Staking Concepts

-**Quorum**
+**Quorum**
4 changes: 2 additions & 2 deletions docs/spec/flows/dispersal.md
@@ -2,7 +2,7 @@
# Dispersal

Data is made available on EigenDA through the following flow:
-1. The [Disperser](./disperer.md) encodes the data in accordance with the [storage module](./protocol-modules/storage/overview.md) requirements, constructs the approprriate header, and sends the chunks to the DA nodes.
+1. The [Disperser](./disperer.md) encodes the data in accordance with the [storage module](./protocol-modules/storage/overview.md) requirements, constructs the appropriate header, and sends the chunks to the DA nodes.
2. Upon receiving signatures from the DA nodes, the disperser aggregates these signatures.
3. Next, the disperser sends the aggregated signatures and header to the `confirmBatch` method of the `ServiceManager`
-4. Once retrievers see the confirmed Batch on chain, they can request to download the associated chunks from a set of DA nodes, in accordance with the [retrieval module](./protocol-modules/retrieval/retrieval.md) of the protocol.
+4. Once retrievers see the confirmed Batch on chain, they can request to download the associated chunks from a set of DA nodes, in accordance with the [retrieval module](./protocol-modules/retrieval/retrieval.md) of the protocol.
4 changes: 2 additions & 2 deletions docs/spec/integrations/disperser.md
@@ -39,10 +39,10 @@ The disperser returns to each requester the KZG commitment to the `overallPoly`

### Encoding

-The disperser encodes the `overallPoly` for each quorum among all of the `BlobStoreRequests`. The disperser generates its encoding parameters for each quroum relative to the highest `AdversaryThresholdBPs` and highest `QuorumThresholdBPs` for each quorum among all of the `BlobStoreRequests`.
+The disperser encodes the `overallPoly` for each quorum among all of the `BlobStoreRequests`. The disperser generates its encoding parameters for each quorum relative to the highest `AdversaryThresholdBPs` and highest `QuorumThresholdBPs` for each quorum among all of the `BlobStoreRequests`.

[TODO: @bxue-l2]

### Aggregation

-## Confirmation
+## Confirmation
20 changes: 10 additions & 10 deletions docs/spec/integrations/rollups.md
@@ -5,11 +5,11 @@
Rollups need to define the quorums they want to sign off on the availability of their data and their trust assumptions on each of those quorums. The rollups need to define:

1. `AdversaryThresholdBPs`. This is the maximum ratio (in basis points) that can be adversarial in the quorum.
-2. `QuorumThresholdBPs`. This is the minimum ratio (in basis points) that can need to sign on the availbility of the rollups data for the rollup's contracts to consider the data available.
+2. `QuorumThresholdBPs`. This is the minimum ratio (in basis points) that needs to sign on the availability of the rollup's data for the rollup's contracts to consider the data available.

## Requests to Store Data

-When the rollup has data that they want to make availbale, they construct a request to the disperser of the form [`BlobStoreRequest`](./types.md#blobstorerequest) and receive a response of the form [`BlobStoreResponse`](./types.md#blobstoreresponse).
+When the rollup has data that they want to make available, they construct a request to the disperser of the form [`BlobStoreRequest`](./types.md#blobstorerequest) and receive a response of the form [`BlobStoreResponse`](./types.md#blobstoreresponse).

This flow is detailed [here](./disperser.md#requests-to-store-data).

@@ -21,11 +21,11 @@ When making state claims, validators of the optimistic rollup should resubmit th

### Revealing Data Onchain during Fraud Proofs (Optimistic rollups)

-To keep the interface between EIP-4844 and EigenDA the same, optimistic rollups need to reveal data onchain against the commitment to their own data instead of the concatenated data of of all of the `BlobStoreRequests` in the batch. To prove the rollups own data commitment against the batched (concatenated) commitment the was posted onchain in the dataStore header, rollups generate the following proof.
+To keep the interface between EIP-4844 and EigenDA the same, optimistic rollups need to reveal data onchain against the commitment to their own data instead of the concatenated data of all of the `BlobStoreRequests` in the batch. To prove the rollup's own data commitment against the batched (concatenated) commitment that was posted onchain in the dataStore header, rollups generate the following proof.

A challenger retrieves the data corresponding to the KZG commitment pointed to by validators of their rollup and parses their rollup's data from the claimed start and end degree. They then prove to a smart contract the commitment to the rollup's data, along with their fraud proof, via the [subcommitment proofs](#subcommitment-proof). Note that the rollup's smart contract will implement the verifier described in the proof.

-[TODO: Explain how the powers of tau are put on chain (use logarthimic adding)]
+[TODO: Explain how the powers of tau are put on chain (use logarithmic adding)]

## ZK Rollups

@@ -52,12 +52,12 @@ The challenger can generate a proof of the commitment to $b(x)$, $B \in \mathbb{
- Calculate $\pi = \pi_F + \beta \pi_B + \beta^2 \pi_G + \beta^3 \pi_C$


-The prover then submits to the verifier $F, B, G, L_F, L_B, L_G, \pi, f(\gamma), b(\gamma), g(\gamma), c(\gamma)$ along with $C$ from the dataStore header of the blob in question. The verifer then verifies:
+The prover then submits to the verifier $F, B, G, L_F, L_B, L_G, \pi, f(\gamma), b(\gamma), g(\gamma), c(\gamma)$ along with $C$ from the dataStore header of the blob in question. The verifier then verifies:

-- $e(F, [x^{\text{max degree} - n}]_2) = e(L_F, [1]_2)$. This verfies the low degreeness of $F$.
-- $e(B, [x^{\text{max degree} - (m - n)}]_2) = e(L_B, [1]_2)$. This verfies the low degreeness of $B$.
-- $e(G, [x^{\text{max degree} - (\text{degree} - m)}]_2) = e(L_G, [1]_2)$. This verfies the low degreeness of $G$.
+- $e(F, [x^{\text{max degree} - n}]_2) = e(L_F, [1]_2)$. This verifies the low degreeness of $F$.
+- $e(B, [x^{\text{max degree} - (m - n)}]_2) = e(L_B, [1]_2)$. This verifies the low degreeness of $B$.
+- $e(G, [x^{\text{max degree} - (\text{degree} - m)}]_2) = e(L_G, [1]_2)$. This verifies the low degreeness of $G$.
- Calculate $\gamma = keccak256(C, F, B, G)$.
- Calculate $\beta = keccak256(\gamma, \pi_F, \pi_B, \pi_G, \pi_C)$
-- $e(F - [f(\gamma)]_1 + \beta(B - [b(\gamma)]_1) + \beta^2(G - [g(\gamma)]_1) + \beta^3(C - [c(\gamma)]_1), [1]_2) = e(\pi, [x-\gamma]_2)$. This verifies a random openning of all of the claimed polynomials at the same x-coordinate.
-- $c(\gamma) = f(\gamma) + \gamma^nb(\gamma) + \gamma^mg(\gamma)$. This verifies that the polynomials have the claimed shifted relationship with $c(x)$.
+- $e(F - [f(\gamma)]_1 + \beta(B - [b(\gamma)]_1) + \beta^2(G - [g(\gamma)]_1) + \beta^3(C - [c(\gamma)]_1), [1]_2) = e(\pi, [x-\gamma]_2)$. This verifies a random opening of all of the claimed polynomials at the same x-coordinate.
+- $c(\gamma) = f(\gamma) + \gamma^nb(\gamma) + \gamma^mg(\gamma)$. This verifies that the polynomials have the claimed shifted relationship with $c(x)$.
4 changes: 2 additions & 2 deletions docs/spec/introduction.md
@@ -10,7 +10,7 @@ Two important aspects of a DA system are

## EigenLayer Quorums

-Most baseline EigenDA security guarantees are derived under a Byzantine model which stipulates that a maximum percentage of validators will behave adversarially at any given moment in time. As an EigenLayer AVS, EigenDA makes use of the validator set represented by validators who have restaked Ether or other staking assets via the EigenLayer plaftorm. Consequently, all constraints on adversarial behavior per the Byzantine modeling approach take the form of a maximum amount of stake which can be held by adversarial agents.
+Most baseline EigenDA security guarantees are derived under a Byzantine model which stipulates that a maximum percentage of validators will behave adversarially at any given moment in time. As an EigenLayer AVS, EigenDA makes use of the validator set represented by validators who have restaked Ether or other staking assets via the EigenLayer platform. Consequently, all constraints on adversarial behavior per the Byzantine modeling approach take the form of a maximum amount of stake which can be held by adversarial agents.

An important aspect of restaking on EigenLayer is the notion of a quorum. EigenLayer supports restaking of various types of assets, from natively staked Ether and Liquid Staking Tokens (LSTs) to the wrapped assets of other protocols such as Ethereum rollups. Since these different categories of assets can have arbitrary and variable exchange rates, EigenLayer supports the quorum as a means for the users of protocols such as EigenDA to specify the nominal level of security that each staking token is taken to provide. In practice, a quorum is a vector specifying the relative weight of each staking strategy supported by EigenLayer.

@@ -20,7 +20,7 @@ An important aspect of restaking on EigenLayer is the notion of a quorum. EigenL

### EigenDA Security

-When an end user posts a blob of data to EigenDA, they can specify a list of [security parameters](./data-model.md#quorum-information), each of which consists of a `QuorumID` identifying a particular quorum registered with EigenDA and an `AdversaryThreshold` which specifies the Byztanine adversarial tolerance that the user expects the blobs availability to respect.
+When an end user posts a blob of data to EigenDA, they can specify a list of [security parameters](./data-model.md#quorum-information), each of which consists of a `QuorumID` identifying a particular quorum registered with EigenDA and an `AdversaryThreshold` which specifies the Byzantine adversarial tolerance that the user expects the blob's availability to respect.

For such a blob accepted by the system (See [Dispersal Flow](./flows/dispersal.md)), EigenDA delivers the following security guarantee: Unless more than an `AdversaryThreshold` of stakers acts adversarially in every quorum associated with the blob, the blob will be available to any honest retriever. How this guarantee is supported is discussed further in [The Modules of Data Availability](./protocol-modules/overview.md)

6 changes: 3 additions & 3 deletions docs/spec/protocol-modules/attestation/attestation.md
@@ -10,8 +10,8 @@ The `confirmBatch` interface upholds the following system properties:
2. Reorg behavior: On chain confirmations behave properly under chain reorgs or forks.
3. Confirmer permissioning: Only permissioned dispersers can confirm blobs.

-**Operator registration gaurds**
-The `register` and `deregister` interfaces uphold the folloiwng system properties:
+**Operator registration guards**
+The `register` and `deregister` interfaces uphold the following system properties:
1. DA nodes cannot register if their delegated stake is insufficient.
2. DA nodes cannot deregister if they are still responsible for storing data.

@@ -51,6 +51,6 @@ Whenever the `confirmBatch` method of the [ServiceMananger.sol](../contracts-ser
TODO: Specify how confirmer is permissioned


-## Operator registration gaurds
+## Operator registration guards

TODO: Describe these guards