interactive tx construction, applied to v2 openchannel #1
A few additional random notes
- This proposal doesn't have domain separation in the PoDLE proofs (like prefixing the challenge hash with some string). I understand that this is to be compatible with Joinmarket.
- Hashing P2 to get the final commitment is probably unnecessary, as we can assume the decisional Diffie-Hellman (DDH) problem to be hard. This prevents some tricks like adding additional info to the commitment, such as a timestamp, which would allow blacklisting outputs only for a certain time. For what it's worth, I don't think that's necessary. This could be done by signing the timestamp with P2 and generator J.
- It's possible to use 32 byte points instead of 33 bytes.
- It would be better if the opening of the commitment contained the J-index, so that the verifier doesn't have to try each of them (one PoDLE verification is about 2 Schnorr verifications).
- It seems like this scheme could be 10% faster for single commitments and batch-verifiable, at the cost of 32 bytes more communication. The opening would be `R = kG`, `R2 = kJ` and `s` (instead of `e` and `s`), and verification would be: `e <- hash(R, R2, P, P2, node_id)`, `sG ?= R + e*P`, `sJ ?= R2 + e*P2`.
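The variant described in that last bullet can be sketched in a toy prime-order group, with plain modular exponentiation standing in for secp256k1 point operations (the additive `sG ?= R + e*P` becomes the multiplicative `G^s ?= R * P^e`). All of the parameters, encodings, and names below are illustrative stand-ins, not the spec's actual construction:

```python
import hashlib

# Toy group: quadratic residues mod p = 2039, prime order q = 1019.
# G and J stand in for the secp256k1 generator and the NUMS point J.
p, q = 2039, 1019
G, J = 2, 8  # both generate the order-1019 subgroup

def H(*vals):
    data = b"".join(v.to_bytes(2, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, k, node_id):
    """Opening is (R, R2, s) rather than (e, s): the batch-verifiable variant."""
    P, P2 = pow(G, x, p), pow(J, x, p)
    R, R2 = pow(G, k, p), pow(J, k, p)
    e = H(R, R2, P, P2, node_id)
    s = (k + x * e) % q
    return P, P2, R, R2, s

def verify(P, P2, R, R2, s, node_id):
    e = H(R, R2, P, P2, node_id)
    # sG ?= R + e*P and sJ ?= R2 + e*P2, in multiplicative notation:
    return (pow(G, s, p) == R * pow(P, e, p) % p
            and pow(J, s, p) == R2 * pow(P2, e, p) % p)
```

Because the verifier recomputes `e` itself from the transmitted `(R, R2)`, many such openings can be checked together with one multi-exponentiation over a random linear combination, which is where the claimed batch speedup would come from.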
- `P2` is the compressed point representation of the point `J` by `x`, or `xJ` | ||
- `node_id` is the 33-byte id of the node receiving this proof | ||
|
||
`sig` is the signature, `k + x * e` (over the secp256k1 finite field) |
> (over the secp256k1 finite field)

That's ambiguous, because it's not supposed to refer to the finite field that secp256k1 is defined over. Better: "(modulo the secp256k1 curve order)".
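A minimal sketch of the suggested wording: `s` is reduced modulo the secp256k1 group order `n`, not the field prime. The function name is made up for illustration:

```python
# secp256k1 group (curve) order n: scalars k, x, e, s live modulo this,
# not modulo the field prime that the curve's coordinates are defined over.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def podle_s(k, x, e):
    """sig = k + x * e, reduced modulo the curve order."""
    return (k + x * e) % N
```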
The signature can be verified by the recipient as follows:
- `kG = sG - eP`, where `s` is the received `sig` value
- `kJ = sJ - eP2`
It would be helpful to mention here what to do if the prover chooses k = 0, such that kG and kJ are the point at infinity. There's a (rather obscure) compressed serialization for the point at infinity (a single zero byte), but the Joinmarket implementation, for example, fails verification in that case (which is slightly better, because then the serialization is always 33 bytes).
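A hedged sketch of the serialization-level fix the comment suggests: insist on exactly 33 bytes with a valid compressed-point prefix, so the one-byte infinity encoding (and hence a `k = 0` prover) fails up front. The helper name is hypothetical:

```python
def parse_compressed_point(b: bytes) -> bytes:
    """Accept only 33-byte compressed points (0x02/0x03 prefix).

    The 1-byte encoding of the point at infinity (a lone 0x00) is
    rejected, so an opening built from k = 0 never verifies.
    """
    if len(b) != 33 or b[0] not in (0x02, 0x03):
        raise ValueError("expected a 33-byte compressed point")
    return b
```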
If the provided `e` or `s` are >= the curve order, these scalar multiplications are not defined. More common than using the values modulo the curve order is to reject high values (cf. lightning-rfc's handling of x coordinates, bip-schnorr).
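The rejection rule suggested here could be sketched as a receive-side check (constant and function name are illustrative):

```python
# secp256k1 group order; received scalars e and s must lie in [0, n)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def check_scalar(v: int) -> int:
    """Reject out-of-range scalars instead of silently reducing mod n,
    mirroring the handling of x coordinates in lightning-rfc and the
    range checks in bip-schnorr."""
    if not 0 <= v < N:
        raise ValueError("scalar out of range for secp256k1")
    return v
```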
- if the utxo `prevtx_scriptpubkey` is NOT locked to by the provided `pubkey`. (This may require computing the hash160 of the pubkey if the script type is P2WPKH):
  - MUST error
- if the signature is incorrect, or `sha256(kG||kJ||P||P2||node_id) != e`:
As far as I can see, this is missing that the verifier is supposed to try with both J's, and that a verifier has to check that `podle_h2 == hash(P2)` (EDIT: that's actually mentioned above).

I think this would be clearer (and more idiomatic) if the "Proof of Discrete Log Equivalence" section specified three algorithms:
- GenParams (defines how the J's are derived)
- Commit (inputs: x, J, node_id; outputs: podle_h2, P2, e)
- OpenCommitment (inputs: P, P2, podle_h2, e, s, node_id, J; output: 1 or 0)

Then the rationale could mention some properties of the PoDLE commitment scheme: that it is a special commitment scheme that commits to a pubkey, requires the corresponding secret key to create, and is deterministic.
EDIT: This may actually be better described as a unique signature than a commitment
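Under the three-algorithm framing suggested above, the scheme might look roughly like this, again over a toy multiplicative group rather than secp256k1, and with a deterministic toy nonce. Every name and signature here is an illustrative guess at the suggested structure, not the draft's wording:

```python
import hashlib

p, q, G = 2039, 1019, 2   # toy group: QRs mod 2039, prime order 1019

def gen_params(n):
    """Derive n generators J_i (the real spec would use NUMS points)."""
    return [pow(G, i + 3, p) for i in range(n)]

def _hash_e(R, R2, P, P2, node_id):
    data = b"".join(v.to_bytes(2, "big") for v in (R, R2, P, P2, node_id))
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def commit(x, Jgen, node_id):
    """Deterministic: same (x, Jgen, node_id) always yields the same output."""
    P, P2 = pow(G, x, p), pow(Jgen, x, p)
    k = (x * 7 + 1) % q or 1          # toy deterministic nonce, NOT secure
    R, R2 = pow(G, k, p), pow(Jgen, k, p)
    e = _hash_e(R, R2, P, P2, node_id)
    s = (k + x * e) % q
    podle_h2 = hashlib.sha256(P2.to_bytes(2, "big")).hexdigest()
    return podle_h2, P, P2, e, s

def open_commitment(P, P2, podle_h2, e, s, node_id, Jgen):
    if hashlib.sha256(P2.to_bytes(2, "big")).hexdigest() != podle_h2:
        return 0                      # podle_h2 must equal hash(P2)
    R = pow(G, s, p) * pow(P, q - e, p) % p       # R  = sG - eP
    R2 = pow(Jgen, s, p) * pow(P2, q - e, p) % p  # R2 = sJ - eP2
    return 1 if _hash_e(R, R2, P, P2, node_id) == e else 0
```

The determinism the comment highlights falls out directly: `commit` takes no randomness beyond `x` itself, so the commitment doubles as a unique, replayable tag for the underlying pubkey.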
these comments are great, thanks! i'll get this updated this week.
* Rename all the 'varint's to 'bigsize'. Having both is confusing; we chose the name bigsize, so use it explicitly. Signed-off-by: Rusty Russell <[email protected]>
* BOLT 7: use `byte` instead of `u8`. `u8` isn't a type; see BOLT #1 "Fundamental Types". Signed-off-by: Rusty Russell <[email protected]>
* BOLT 1: promote bigsize to a Fundamental Type. Signed-off-by: Rusty Russell <[email protected]>
Looking good! Sorry if my review is a bit nitty; my background on this is likely weaker than other reviewers' will be, but I'm hoping to re-use a lot of this protocol for constructing DLC funding transactions!
02-peer-protocol.md
- The `nLocktime` for the transaction has already been communicated, or is established via convention.
- The transaction version is 2.
- Each input's sequence number will be set to 0xFDFFFFFF (little endian), which signals RBF.
What about inputs that spend OP_CSV outputs?
Allowing inputs that have time-based* spending restrictions means that you'd need to know 1) the block the input's outpoint was included in and 2) the current block-time/block-height in order to know if it's currently spendable. 2) is trivial enough to figure out, but for light clients 1) is a bit prohibitive. So by disallowing the sequence number we remove one vector for producing a (presently) un-mineable** transaction.

The real question here, then, is: are we ok with allowing the interactive tx protocol to produce un-mineable transactions? I lean towards "no", as this protocol is meant to be general enough to allow for building cooperative close and splicing transactions -- you wouldn't want a 100-block wait on a splicing transaction, particularly.

The obvious downside to this is that, with the introduction of anchor outputs, the following lightning protocol transaction outputs would need to be spent elsewhere before being eligible for use in an interactive tx exchange:
- unilaterally closed tx's to_local_anchor and to_remote_anchor outputs
- unilaterally closed tx's to_remote outputs
- non-revoked htlc-success txs
- non-revoked htlc-timeout txs

*block height or block-time
**actually i think the real problem is that the mempool rejects it
On further consideration, given that the lightning spec itself will now be generating outputs with sequence numbers and that this protocol is intended to be versatile enough to handle several different transaction construction scenarios, it's a good idea to include a field for the sequence number.
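For reference, a small hedged sketch of what a per-input sequence field involves: the value written in the quoted text as `0xFDFFFFFF (little endian)` corresponds to an nSequence of `0xFFFFFFFD`, and BIP 125 treats a transaction as replaceable when any input's sequence is below `0xFFFFFFFE`. The helper names are illustrative:

```python
import struct

SEQ_RBF_DEFAULT = 0xFFFFFFFD  # serializes little-endian as fd ff ff ff

def serialize_sequence(seq: int) -> bytes:
    """nSequence as it appears on the wire (little-endian u32)."""
    return struct.pack("<I", seq)

def signals_rbf(sequences) -> bool:
    """BIP 125: replaceable if any input's nSequence < 0xFFFFFFFE."""
    return any(s < 0xFFFFFFFE for s in sequences)
```

A per-input field would also let the spender of an OP_CSV-encumbered output set whatever relative-locktime sequence that script requires, which is what the thread above is driving at.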
02-peer-protocol.md
### Fee Responsibility

The initiator is responsible for paying the fees for the following fields, to be referred to
I'm curious what the reason for the initiator paying these fees instead of splitting them in half between the two parties is?
simplicity.
established via convention.
- The transaction version is 2.
- Each input's sequence number will be set to 0xFDFFFFFF (little endian), which signals RBF.
- The initiator has furnished a PoDLE commitment, if required.
I think that since this is the Set-Up and Vocabulary section, it might be nice to spell out what PoDLE is and link to its definition, since it is used and referenced pretty extensively before it is defined.
`serial_id` is a randomly chosen number which uniquely identifies this input.
Inputs in the final transaction will be sorted by `serial_id`.
Are there any privacy concerns around letting users pick these numbers, as opposed to, say, using a hash of information that is less in the users' control? I figured I'd ask, but I think the answer is no, since a malicious party can always just leak this information to the public if they wanted to.
The motivation for allowing selection of serial_ids is that it allows a side to spend SIGHASH_SINGLE-signed inputs (potentially -- there are definitely edge cases where it's not possible), because in some cases it lets you line an input up with an output. Using a derived hash would produce a random ordering, which wouldn't allow for this.
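The ordering rule under discussion can be sketched as follows. The parity split (one side even, the other odd) appears in the receiver requirements quoted later in the thread; which side gets which parity is an assumption here, purely for illustration:

```python
def order_by_serial_id(entries):
    """entries: list of (serial_id, item). The final tx sorts by serial_id,
    which is what lets a peer line an input up with a particular output
    (e.g. for a SIGHASH_SINGLE signature)."""
    return [item for _, item in sorted(entries, key=lambda t: t[0])]

def serial_id_ok(serial_id, peer_uses_even, seen):
    """Reject duplicates and wrong-parity ids from the peer.
    peer_uses_even: assumed parity assignment, for illustration only."""
    if serial_id in seen:
        return False
    return (serial_id % 2 == 0) == peer_uses_even
```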
02-peer-protocol.md
`serial_id` is a randomly chosen number which uniquely identifies this input.
Inputs in the final transaction will be sorted by `serial_id`.

`prevtx_tx` is the serialized transaction whose output this node is spending. Must be sent to verify that output is in fact a SegWit output.
Isn't another reason that this is needed to protect against a BIP 143 vulnerability? Because otherwise one could just share the redeem script/pubkey which would be smaller than the whole prevtx
Yes, this requirement can be removed for v1 witness script outputs, as the message digest for those signatures commits to the scriptPubkey and amounts.
- MUST add all received outputs to the collaborative transaction
- MUST fail the transaction collaboration if:
  - it receives a duplicate `serial_id`
  - it receives a `serial_id` from the peer with the incorrect parity
this parity requirement is missing from the sender's requirement list above
- MAY omit this message

The sending node:
- MUST add all sent outputs to the collaborative transaction
Isn't there only one? Same question applies to the other three sent/received input/output places where it says "all"
### The `tx_complete` Message

This message signals the conclusion of a peer's transaction contributions.
In practice (e.g. the initiator and contributor contributions diagram above), this seems more like a `tx_ack` than a message signaling "the conclusion of a peer's transaction contributions".
`complete` here means "i have completed my currently known set of updates for this transaction"; two sides being "completed" in turn marks the end of the collaborative construction phase.
02-peer-protocol.md
- any `serial_id` is duplicated
- the total satoshis of the inputs is less than the outputs
- any output value is less than the `dust_limit`
- the peer's paid feerate does not meet or exceed the agreed `feerate`
- the `initiator`'s fees do not cover the `common` fields
- any input is duplicated (same txid:index)
- any input is non-Segwit
I'm a little confused as to why there are requirements on `serial_id`, `dust_limit` and inputs, when none of these things should be possible to add to the tx. Though I suppose extra validation can't hurt.
> why there are requirements on serial_id

Duplicate `serial_id`s would result in an undefined sort order, potentially causing a mismatch in the resulting txs; ditto on having a different idea of what is `dust_limit`-compatible, as in theory you would trim any sub-dust_limit outputs. i believe that including sub-dust_limit outputs makes your transaction non-mempool-eligible.
# Appendix A: Expected Weights

## Expected Weight of the Funding Transaction (v2 Channel Establishment)
nit: New sections here haven't yet been added to table of contents
This commit extends the specification with a new commitment format that adds two anchor outputs to the commitment transaction. Anchor outputs are a safety feature that allows a channel party to unilaterally increase the fee of the commitment transaction using CPFP and ensure timely confirmation on the chain. There is no cooperation required from the remote party.
Anchor outputs
Many channels use a value below 6, which is really insecure (there are more than 2k such channels on mainnet). While less risky, there are more than 7k channels with a value below 12. This indicates that the spec should probably make the risks a bit more clear to help guide node operators.
This commit adds the interactive transaction construction protocol, as well as the first practical example of using it: v2 of channel establishment. Note that for v2 we also update the channel_id, which now uses the hash of the revocation_basepoints. We move away from using the funding transaction id, as the introduction of RBF* makes it such that a single channel may have many funding transaction ids over the course of its lifetime.

*Later, also splicing
We use a modified version of the PoDLE implemented and used by JoinMarket for lightning opens (and optionally any other collaborative transaction construction protocol). The original PoDLE implementation and implementation notes are available at https://github.com/JoinMarket-Org/joinmarket-clientserver/blob/4bf1f50d4e4226b00fd5a8bd39673faceac9da51/jmclient/jmclient/podle.py#L92-L118 The NUMS J point original implementation can be found at https://github.com/JoinMarket-Org/joinmarket-clientserver/blob/4bf1f50d4e4226b00fd5a8bd39673faceac9da51/jmclient/jmclient/podle.py#L203-L235
In order to effectively deter bad behavior, we gossip the P2 commitments associated with bad behavior via a blacklist_podle message.
Suggested-By: Nadav Kohen @nkohen
To prevent infinite loops, we set the max additions/subtractions to 2^12 (4,096). If each side sends the maximum allowed messages, this is a total of 32,768 messages (2^12 * 4 * 2). (The end result of this exchange would necessarily be an empty tx, as the number of adds would equal the number of removals.) In practice, this bounds the total inputs/outputs to 2^13 (8,192) each.
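The arithmetic in that bound, spelled out (constant names are mine):

```python
MAX_ADDS_PER_SIDE = 2 ** 12              # 4,096 additions (or removals) per side

# 4 message types (add/remove x input/output), times 2 peers, each
# sent the maximum number of times:
worst_case_messages = MAX_ADDS_PER_SIDE * 4 * 2   # 32,768

# If nothing is ever removed, both peers add the maximum, bounding the
# total inputs (or outputs):
max_inputs_or_outputs = MAX_ADDS_PER_SIDE * 2     # 2^13 = 8,192
```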
This is the 'final' draft for the interactive protocol.
@rustyrussell you'll note that I haven't updated the podle broadcast to be "as soon as you get a valid H2 proof". The amount of network spam that's going to generate makes me a bit queasy, even though it's the safest way to prevent mass phishing attacks. Instead it's still "in the case that this is a bad open, then tell everyone", which should ideally cut down on the amount of gossip a good deal, the trade-off being that you're a bit more likely to get scanned.
still has one todo left for the 'signed transaction' generation; i changed the amounts so i need to regenerate the signature.
hoping to get to implementing this tomorrow.
the included podle tests have been added to my podle branch, also.