Original date posted: 2021-09-16

Original message:

Hi there,
I'm writing to propose a set of mempool policy changes to enable package validation (in preparation for package relay) in Bitcoin Core. These would not be consensus or P2P protocol changes. However, since mempool policy significantly affects transaction propagation, I believe this is relevant for the mailing list.

My proposal enables packages consisting of multiple parents and 1 child. If you develop software that relies on specific transaction relay assumptions and/or are interested in using package relay in the future, I'm very interested to hear your feedback on the utility or restrictiveness of these package policies for your use cases.

A draft implementation of this proposal can be found in [Bitcoin Core PR#22290][1].

An illustrated version of this post can be found at gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a. I have also linked the images below.
## Background
Feel free to skip this section if you are already familiar with mempool policy and package relay terminology.
### Terminology Clarifications
* Package = an ordered list of related transactions, representable by a Directed Acyclic Graph.
* Package Feerate = the total modified fees divided by the total virtual size of all transactions in the package. (A small sketch of this calculation follows this list.)
  - Modified fees = a transaction's base fees + fee delta applied by the user with `prioritisetransaction`. As such, we expect this to vary across mempools.
  - Virtual Size = the maximum of virtual sizes calculated using [BIP141 virtual size][2] and sigop weight. [Implemented here in Bitcoin Core][3].
  - Note that feerate is not necessarily based on the base fees and serialized size.
* Fee-Bumping = user/wallet actions that take advantage of miner incentives to boost a transaction's candidacy for inclusion in a block, including Child Pays for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in mempool policy is to recognize when the new transaction is more economical to mine than the original one(s) but not open DoS vectors, so there are some limitations.
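To make the definitions above concrete, here is a minimal Python sketch of the package feerate calculation. The `Tx` type and its fields are hypothetical, for illustration only, not Bitcoin Core's actual data structures:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    # Hypothetical representation, for illustration only.
    modified_fee: int  # base fee + prioritisetransaction delta, in satoshis
    vsize: int         # max(BIP141 vsize, sigop-adjusted vsize), in vB

def package_feerate(package: List[Tx]) -> float:
    """Total modified fees divided by total virtual size, in sat/vB."""
    total_fee = sum(tx.modified_fee for tx in package)
    total_vsize = sum(tx.vsize for tx in package)
    return total_fee / total_vsize

# Example: a 0-fee parent bumped by a 2000-sat child, both 100 vB.
assert package_feerate([Tx(0, 100), Tx(2000, 100)]) == 10.0
```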
### Policy
The purpose of the mempool is to store the best (to be most incentive-compatible with miners, highest feerate) candidates for inclusion in a block. Miners use the mempool to build block templates. The mempool is also useful as a cache for boosting block relay and validation performance, aiding transaction relay, and generating feerate estimations.

Ideally, all consensus-valid transactions paying reasonable fees should make it to miners through normal transaction relay, without any special connectivity or relationships with miners. On the other hand, nodes do not have unlimited resources, and a P2P network designed to let any honest node broadcast their transactions also exposes the transaction validation engine to DoS attacks from malicious peers.

As such, for unconfirmed transactions we are considering for our mempool, we apply a set of validation rules in addition to consensus, primarily to protect us from resource exhaustion and aid our efforts to keep the highest fee transactions. We call this mempool _policy_: a set of (configurable, node-specific) rules that transactions must abide by in order to be accepted into our mempool. Transaction "Standardness" rules and mempool restrictions such as "too-long-mempool-chain" are both examples of policy.
### Package Relay and Package Mempool Accept
In transaction relay, we currently consider transactions one at a time for submission to the mempool. This creates a limitation in the node's ability to determine which transactions have the highest feerates, since we cannot take into account descendants (i.e. cannot use CPFP) until all the transactions are in the mempool. Similarly, we cannot use a transaction's descendants when considering it for RBF. When an individual transaction does not meet the mempool minimum feerate and the user isn't able to create a replacement transaction directly, it will not be accepted by mempools.

This limitation presents a security issue for applications and users relying on time-sensitive transactions. For example, Lightning and other protocols create UTXOs with multiple spending paths, where one counterparty's spending path opens up after a timelock, and users are protected from cheating scenarios as long as they redeem on-chain in time. A key security assumption is that all parties' transactions will propagate and confirm in a timely manner. This assumption can be broken if fee-bumping does not work as intended.

The end goal for Package Relay is to consider multiple transactions at the same time, e.g. a transaction with its high-fee child. This may help us better determine whether transactions should be accepted to our mempool, especially if they don't meet fee requirements individually or are better RBF candidates as a package. A combination of changes to mempool validation logic, policy, and transaction relay allows us to better propagate the transactions with the highest package feerates to miners, and makes fee-bumping tools more powerful for users.

The "relay" part of Package Relay suggests P2P messaging changes, but a large part of the changes are in the mempool's package validation logic. We call this *Package Mempool Accept*.
### Previous Work
* Given that mempool validation is DoS-sensitive and complex, it would be dangerous to haphazardly tack on package validation logic. Many efforts have been made to make mempool validation less opaque (see [#16400][4], [#21062][5], [#22675][6], [#22796][7]).
* [#20833][8] Added basic capabilities for package validation, test accepts only (no submission to mempool).
* [#21800][9] Implemented package ancestor/descendant limit checks for arbitrary packages. Still test accepts only.
* Previous package relay proposals (see [#16401][10], [#19621][11]).
### Existing Package Rules
These are in master as introduced in [#20833][8] and [#21800][9]. I'll consider them as "given" in the rest of this document, though they can be changed, since package validation is test-accept only right now.

1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and `MAX_PACKAGE_SIZE=101KvB` total size [8]

   *Rationale*: This is already enforced as mempool ancestor/descendant limits. Presumably, transactions in a package are all related, so exceeding this limit would mean that the package can either be split up or it wouldn't pass this mempool policy.

2. Packages must be topologically sorted: if any dependencies exist between transactions, parents must appear somewhere before children. [8]

3. A package cannot have conflicting transactions, i.e. none of them can spend the same inputs. This also means there cannot be duplicate transactions. [8]

4. When packages are evaluated against ancestor/descendant limits in a test accept, the union of all of their descendants and ancestors is considered. This is essentially a "worst case" heuristic where every transaction in the package is treated as each other's ancestor and descendant. [8]

   Packages for which ancestor/descendant limits are accurately captured by this heuristic: [19]
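As an informal illustration of rules 1-3 above, here is a simplified Python sketch of the structural checks. The transaction representation is hypothetical, not Bitcoin Core's, and rule 4's ancestor/descendant heuristic is omitted:

```python
from typing import List, Set, Tuple

MAX_PACKAGE_COUNT = 25
MAX_PACKAGE_SIZE_KVB = 101  # total package virtual size limit, in kvB

class Tx:
    """Hypothetical minimal transaction representation for this sketch."""
    def __init__(self, txid: str, inputs: List[Tuple[str, int]], vsize: int):
        self.txid = txid
        self.inputs = inputs  # (prevout txid, output index) pairs
        self.vsize = vsize

def check_package_structure(package: List[Tx]) -> bool:
    # Rule 1: count and total virtual size limits.
    if len(package) > MAX_PACKAGE_COUNT:
        return False
    if sum(tx.vsize for tx in package) > MAX_PACKAGE_SIZE_KVB * 1000:
        return False
    # Rule 2: topological order -- any in-package parent must appear earlier.
    package_txids = {tx.txid for tx in package}
    seen: Set[str] = set()
    for tx in package:
        for prev_txid, _ in tx.inputs:
            if prev_txid in package_txids and prev_txid not in seen:
                return False
        seen.add(tx.txid)
    # Rule 3: no conflicts -- no two transactions spend the same prevout.
    all_inputs = [inp for tx in package for inp in tx.inputs]
    return len(all_inputs) == len(set(all_inputs))
```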
There are also limitations such as the fact that CPFP carve out is not applied to package transactions. #20833 also disables RBF in package validation; this proposal overrides that to allow packages to use RBF.
## Proposed Changes
The next step in the Package Mempool Accept project is to implement submission to mempool, initially through RPC only. This allows us to test the submission logic before exposing it on P2P.

### Summary

- Packages may contain already-in-mempool transactions.
- Packages are 2 generations, Multi-Parent-1-Child.
- Fee-related checks use the package feerate. This means that wallets can create a package that utilizes CPFP.
- Parents are allowed to RBF mempool transactions with a set of rules similar to BIP125. This enables a combination of CPFP and RBF, where a transaction's descendant fees pay for replacing mempool conflicts.

There is a draft implementation in [#22290][1]. It is WIP, but feedback is always welcome.
### Details
#### Packages May Contain Already-in-Mempool Transactions
A package may contain transactions that are already in the mempool. We remove ("deduplicate") those transactions from the package for the purposes of package mempool acceptance. If a package is empty after deduplication, we do nothing.

*Rationale*: Mempools vary across the network. It's possible for a parent to be accepted to the mempool of a peer on its own due to differences in policy and fee market fluctuations. We should not reject or penalize the entire package for an individual transaction as that could be a censorship vector.
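A minimal sketch of the deduplication step (identifiers here are hypothetical; whether deduplication should key on txid or wtxid is discussed in the reply below):

```python
from typing import List, Set

def deduplicate(package_txids: List[str], mempool_txids: Set[str]) -> List[str]:
    """Drop package transactions already in the mempool; the remaining
    subset is what package evaluation operates on."""
    return [txid for txid in package_txids if txid not in mempool_txids]

# If everything is already in the mempool, there is nothing left to validate.
assert deduplicate(["A", "B", "C"], {"A", "B", "C"}) == []
assert deduplicate(["A", "B", "C"], {"A"}) == ["B", "C"]
```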
#### Packages Are Multi-Parent-1-Child
Only packages of a specific topology are permitted. Namely, a package is exactly 1 child with all of its unconfirmed parents. After deduplication, the package may be exactly the same, empty, 1 child, 1 child with just some of its unconfirmed parents, etc. Note that it's possible for the parents to be indirect descendants/ancestors of one another, or for parent and child to share a parent, so we cannot make any other topology assumptions.

*Rationale*: This allows for fee-bumping by CPFP. Allowing multiple parents makes it possible to fee-bump a batch of transactions. Restricting packages to a defined topology is also easier to reason about and simplifies the validation logic greatly. Multi-parent-1-child allows us to think of the package as one big transaction, where:

- Inputs = all the inputs of parents + inputs of the child that come from confirmed UTXOs
- Outputs = all the outputs of the child + all outputs of the parents that aren't spent by other transactions in the package

Examples of packages that follow this rule (variations of example A show some possibilities after deduplication): ![image][15]
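A sketch of the multi-parent-1-child topology check, under the simplifying (hypothetical) representation that each transaction is a txid mapped to the set of txids it spends from, and that the child is listed last:

```python
from typing import Dict, Set

def is_child_with_parents(package: Dict[str, Set[str]]) -> bool:
    """package maps txid -> txids it spends from, with the child listed last.
    Every other package transaction must be a direct parent of the child."""
    if len(package) < 2:
        return False
    txids = list(package)
    child, parents = txids[-1], set(txids[:-1])
    return parents.issubset(package[child])

# C spends A and B; B also spends A (parents may be related to each other).
assert is_child_with_parents({"A": {"conf_utxo"}, "B": {"A"}, "C": {"A", "B"}})
assert not is_child_with_parents({"A": {"conf_utxo"}, "B": {"A"}, "C": {"B"}})
```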
#### Fee-Related Checks Use Package Feerate
Package Feerate = the total modified fees divided by the total virtual size of all transactions in the package.

To meet the two feerate requirements of a mempool, i.e., the pre-configured minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum feerate, the total package feerate is used instead of the individual feerate. The individual transactions are allowed to be below feerate requirements if the package meets the feerate requirements. For example, the parent(s) in the package can have 0 fees but be paid for by the child.

*Rationale*: This can be thought of as "CPFP within a package," solving the issue of a parent not meeting minimum fees on its own. This allows L2 applications to adjust their fees at broadcast time instead of overshooting or risking getting stuck/pinned.
We use the package feerate of the package *after deduplication*.

*Rationale*: It would be incorrect to use the fees of transactions that are already in the mempool, as we do not want a transaction's fees to be double-counted for both its individual RBF and package RBF.

Examples F and G [14] show the same package (assume all transactions have a size of 100vB), but P1 is submitted individually before the package in example G. In example F, we can see that the 300vB package pays an additional 200sat in fees, which is not enough to pay for its own bandwidth (BIP125#4). In example G, we can see that P1 pays enough to replace M1, but using P1's fees again during package submission would make it look like a 300sat increase for a 200vB package. Even including its fees and size would not be sufficient in this example, since the 300sat looks like enough for the 300vB package. The calculation after deduplication is a 100sat increase for a package of size 200vB, which correctly fails BIP125#4.
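To show the shape of the double-counting problem (without reproducing the exact figures from the linked images), here is a sketch of the fee-delta calculation that counts only the transactions remaining after deduplication. All names and numbers below are made up for illustration:

```python
from typing import Dict, Set

def fee_increase(package_fees: Dict[str, int], replaced_fees: int,
                 already_in_mempool: Set[str]) -> int:
    """Fee delta for the BIP125#4-style bandwidth check, counting only the
    package transactions that remain after deduplication."""
    new_fees = sum(fee for txid, fee in package_fees.items()
                   if txid not in already_in_mempool)
    return new_fees - replaced_fees

# P1 was already accepted on its own (and its fees already paid for an earlier
# replacement), so only P2 and P3 count toward the new package's fee delta.
pkg = {"P1": 300, "P2": 100, "P3": 200}
assert fee_increase(pkg, replaced_fees=200, already_in_mempool={"P1"}) == 100
```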
#### Package RBF
If a package meets feerate requirements as a package, the parents in the package are allowed to replace-by-fee mempool transactions. The child cannot replace mempool transactions. Multiple transactions can replace the same transaction, but in order to be valid, none of the transactions can try to replace an ancestor of another transaction in the same package (which would thus make its inputs unavailable).

*Rationale*: Even if we are using package feerate, a package will not propagate as intended if RBF still requires each individual transaction to meet the feerate requirements.

We use a set of rules slightly modified from BIP125 as follows:
##### Signaling (Rule #1)
All mempool transactions to be replaced must signal replaceability.
*Rationale*: Signaling logic should be the same for package RBF and single transaction acceptance. This would be updated if single transaction validation moves to full RBF.
##### New Unconfirmed Inputs (Rule #2)
A package may include new unconfirmed inputs, but the ancestor feerate of the child must be at least as high as the ancestor feerates of every transaction being replaced. This is contrary to BIP125#2, which states "The replacement transaction may only include an unconfirmed input if that input was included in one of the original transactions. (An unconfirmed input spends an output from a currently-unconfirmed transaction.)"

*Rationale*: The purpose of BIP125#2 is to ensure that the replacement transaction has a higher ancestor score than the original transaction(s) (see [comment][13]). Example H [16] shows how adding a new unconfirmed input can lower the ancestor score of the replacement transaction. P1 is trying to replace M1, and spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and M2 pays 100sat. Assume all transactions have a size of 100vB. While, in isolation, P1 looks like a better mining candidate than M1, it must be mined with M2, so its ancestor feerate is actually 4.5sat/vB. This is lower than M1's ancestor feerate, which is 6sat/vB.
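The numbers in example H work out as follows (a quick sketch using only the figures quoted above):

```python
def ancestor_feerate(fees_sat, vsizes_vb):
    """Feerate of a transaction together with its unconfirmed ancestors."""
    return sum(fees_sat) / sum(vsizes_vb)

# Example H: every transaction is 100 vB.
m1 = ancestor_feerate([600], [100])            # 6.0 sat/vB, no unconfirmed ancestors
p1 = ancestor_feerate([800, 100], [100, 100])  # P1 plus its new ancestor M2: 4.5 sat/vB
assert p1 < m1  # so P1 should not be allowed to replace M1
```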
In package RBF, the rule analogous to BIP125#2 would be "none of the transactions in the package can spend new unconfirmed inputs." Example J [17] shows why, if any of the package transactions have ancestors, package feerate is no longer accurate. Even though M2 and M3 are not ancestors of P1 (which is the replacement transaction in an RBF), we're actually interested in the entire package. A miner should mine M1, which is 5sat/vB, instead of M2, M3, P1, P2, and P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to only allow the child to have new unconfirmed inputs, either, because it can still cause us to overestimate the package's ancestor score.
However, enforcing a rule analogous to BIP125#2 would not only make Package RBF less useful, but would also break Package RBF for packages with parents already in the mempool: if a package parent has already been submitted, it would look like the child is spending a "new" unconfirmed input. In example K [18], we're looking to replace M1 with the entire package including P1, P2, and P3. We must consider the case where one of the parents is already in the mempool (in this case, P2), which means we must allow P3 to have new unconfirmed inputs. However, M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not replace M1 with this package.
Thus, the package RBF rule regarding new unconfirmed inputs is less strict than BIP125#2. However, we still achieve the same goal of requiring the replacement transactions to have an ancestor score at least as high as the original ones. As a result, the entire package is required to be a higher feerate mining candidate than each of the replaced transactions.

Another note: the [comment][13] above the BIP125#2 code in the original RBF implementation suggests that the rule was intended to be temporary.
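In other words, the proposed check compares the child's ancestor feerate against the ancestor feerate of each to-be-replaced transaction. A minimal sketch, with hypothetical inputs (each replaced transaction is represented here by its ancestor fee and ancestor vsize):

```python
from typing import List, Tuple

def package_rbf_rule2(child_ancestor_fee: int, child_ancestor_vsize: int,
                      replaced: List[Tuple[int, int]]) -> bool:
    """Proposed Rule #2: the child's ancestor feerate must be at least as
    high as the ancestor feerate of every transaction being replaced."""
    child_rate = child_ancestor_fee / child_ancestor_vsize
    return all(child_rate >= fee / vsize for fee, vsize in replaced)

# Example H again: P1's ancestor feerate (900 sat / 200 vB = 4.5 sat/vB)
# is below M1's 6 sat/vB, so the replacement is rejected.
assert not package_rbf_rule2(900, 200, [(600, 100)])
```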
##### Absolute Fee (Rule #3)
The package must increase the absolute fee of the mempool, i.e. the total fees of the package must be higher than the absolute fees of the mempool transactions it replaces. Combined with the CPFP rule above, this differs from BIP125 Rule #3: an individual transaction in the package may have lower fees than the transaction(s) it is replacing. In fact, it may have 0 fees, and the child pays for RBF.
##### Feerate (Rule #4)
The package must pay for its own bandwidth; the package feerate must be higher than the replaced transactions by at least the incremental relay feerate (`incrementalRelayFee`). Combined with the CPFP rule above, this differs from BIP125 Rule #4: an individual transaction in the package can have a lower feerate than the transaction(s) it is replacing. In fact, it may have 0 fees, and the child pays for RBF.
##### Total Number of Replaced Transactions (Rule #5)
The package cannot replace more than 100 mempool transactions. This is identical to BIP125 Rule #5.
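Putting rules #3 and #4 together at the package level, a sketch of the combined fee checks (assuming fees in satoshis, sizes in vB, and a default incremental relay feerate of 1 sat/vB; individual package transactions are deliberately not checked on their own):

```python
def package_rbf_fee_checks(package_fees: int, package_vsize: int,
                           replaced_fees: int,
                           incremental_relay_feerate: float = 1.0) -> bool:
    """Rule #3: total package fees must exceed the fees of the replaced
    transactions. Rule #4: the fee increase must additionally pay for the
    package's own bandwidth at the incremental relay feerate."""
    if package_fees <= replaced_fees:
        return False  # Rule #3
    return package_fees - replaced_fees >= incremental_relay_feerate * package_vsize

# As in example F: a 300 vB package adding only 200 sat over what it replaces
# fails Rule #4 (it needs at least 300 sat of new fees).
assert not package_rbf_fee_checks(package_fees=500, package_vsize=300,
                                  replaced_fees=300)
```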
### Expected FAQs
1. Is it possible for only some of the package to make it into the mempool?

   Yes, it is. However, since we evict transactions from the mempool by descendant score and the package child is supposed to be sponsoring the fees of its parents, the most common scenario would be all-or-nothing. This is incentive-compatible. In fact, to be conservative, package validation should begin by trying to submit all of the transactions individually, and only use the package mempool acceptance logic if the parents fail due to low feerate.

2. Should we allow packages to contain already-confirmed transactions?

   No, for practical reasons. In mempool validation, we actually aren't able to tell with 100% confidence if we are looking at a transaction that has already confirmed, because we look up inputs using a UTXO set. If we have historical block data, it's possible to look for it, but this is inefficient, not always possible for pruning nodes, and unnecessary because we're not going to do anything with the transaction anyway. As such, we already have the expectation that transaction relay is somewhat "stateful", i.e. nobody should be relaying transactions that have already been confirmed. Similarly, we shouldn't be relaying packages that contain already-confirmed transactions.
[1]: github.com/bitcoin/bitcoin/pull/22290
[2]: github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
[3]: github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
[4]: github.com/bitcoin/bitcoin/pull/16400
[5]: github.com/bitcoin/bitcoin/pull/21062
[6]: github.com/bitcoin/bitcoin/pull/22675
[7]: github.com/bitcoin/bitcoin/pull/22796
[8]: github.com/bitcoin/bitcoin/pull/20833
[9]: github.com/bitcoin/bitcoin/pull/21800
[10]: github.com/bitcoin/bitcoin/pull/16401
[11]: github.com/bitcoin/bitcoin/pull/19621
[12]: github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
[13]: github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
[14]:
[15]:
[16]:
[17]:
[18]:
[19]:
[20]:
Original date posted: 2021-09-19

Original message:

Hi Gloria,
> A package may contain transactions that are already in the mempool. We remove
> ("deduplicate") those transactions from the package for the purposes of package
> mempool acceptance. If a package is empty after deduplication, we do nothing.
IIUC, you have a package A+B+C submitted for acceptance and A is already in your mempool. You trim out A from the package and then evaluate B+C.

I think this might be an issue if A is the higher-fee element of the ABC package. B+C package fees might be under the mempool min fee and will be rejected, potentially breaking the acceptance expectations of the package issuer?

Further, I think the dedup should be done on wtxid, as you might have multiple valid witnesses, though with varying vsizes and as such offering different feerates.

E.g. you're going to evaluate the package A+B and A' is already in your mempool with a bigger valid witness. You trim A based on txid, then you evaluate A'+B, which fails the fee checks. However, evaluating A+B would have been a success.

AFAICT, the dedup rationale would be to save on CPU time/disk IO, to avoid repeated signature verification and parent UTXO fetches? Can we achieve the same goal by bypassing tx-level checks for already-in-mempool transactions while preserving the package integrity for package-level checks?
> Note that it's possible for the parents to be indirect descendants/ancestors
> of one another, or for parent and child to share a parent, so we cannot make
> any other topology assumptions.
I'm not clearly understanding the accepted topologies. By "parent and child to share a parent", do you mean that the set of transactions A, B, C, where B is spending A and C is spending A and B, would be correct?

If yes, is there a width limit introduced or do we fall back on MAX_PACKAGE_COUNT=25?

IIRC, one rationale for coming up with this topology limitation was to lower the DoS risks when potentially deploying p2p packages.

Considering the current Core mempool acceptance rules, I think CPFP batching is unsafe for LN time-sensitive closures. A malicious tx-relay jamming successful against one channel commitment transaction would contaminate the remaining commitments sharing the same package.

E.g., you broadcast the package A+B+C+D+E where A, B, C, D are commitment transactions and E a shared CPFP. If a malicious A' transaction has a better feerate than A, the whole package acceptance will fail. Even if A' confirms in the following block, the propagation and confirmation of B+C+D have been delayed. This could result in a loss of funds.

That said, if you're broadcasting commitment transactions without time-sensitive HTLC outputs, I think the batching is effectively a fee saving as you don't have to duplicate the CPFP.

IMHO, I'm leaning towards deploying 1-parent/1-child during a first phase. I think it's the most conservative step while still improving second-layer safety.
> *Rationale*: It would be incorrect to use the fees of transactions that are
> already in the mempool, as we do not want a transaction's fees to be
> double-counted for both its individual RBF and package RBF.
I'm unsure about the logical order of the checks proposed.

Say A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats, and A' pays 100 sats. If we apply the individual RBF checks on A, A+B acceptance fails. For this reason I think the individual RBF should be bypassed and only the package RBF should apply?

Note this situation is plausible: with current LN design, your counterparty can have a commitment transaction with a better fee just by selecting a higher `dust_limit_satoshis` than yours.
> Examples F and G [14] show the same package, but P1 is submitted individually
> before the package in example G. In example F, we can see that the 300vB
> package pays an additional 200sat in fees, which is not enough to pay for its
> own bandwidth (BIP125#4). In example G, we can see that P1 pays enough to
> replace M1, but using P1's fees again during package submission would make it
> look like a 300sat increase for a 200vB package. Even including its fees and
> size would not be sufficient in this example, since the 300sat looks like
> enough for the 300vB package. The calculation after deduplication is a 100sat
> increase for a package of size 200vB, which correctly fails BIP125#4. Assume
> all transactions have a size of 100vB.
What problem are you trying to solve with the package-feerate-after-dedup rule?

My understanding is that an in-package transaction might already be in the mempool. Therefore, to compute a correct RBF penalty for the replacement, the vsize of this transaction could be discarded, lowering the cost of package RBF.

If we keep a "safe" dedup mechanism (see my point above), I think this discount is justified, as the validation cost of node operators is paid for?
> The child cannot replace mempool transactions.
Let's say you issue package A+B, then package C+B', where B' is a child of both A and C. This rule fails the acceptance of C+B'?

I think this is a footgunish API: if a package issuer sends the multiple-parent-one-child package A, B, C, D, where D is the child of A, B, C, and then tries to broadcast a higher-feerate C'+D' package, it would be rejected. So it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee package always replaces? And it might be unsafe in protocols where states are symmetric. E.g. a malicious counterparty broadcasts first S+A, then you honestly broadcast S+B, where B pays better fees.
> All mempool transactions to be replaced must signal replaceability.
I think this is unsafe for L2s if counterparties have malleability of the child transaction. They can block your package replacement by opting out of RBF signaling. IIRC, LN's "anchor output" presents such an ability.

I think it's better to either fix inherited signaling or move towards full-RBF.
> if a package parent has already been submitted, it would look like the child
> is spending a "new" unconfirmed input.
I think this is an issue brought by the trimming during the dedup phase. If we preserve the package integrity, and only re-use the tx-level check results of already-in-mempool transactions to save CPU time, we won't have this issue. Package children can add unconfirmed inputs as long as they're in-package; the BIP125 rule #2 is only evaluated against parents?
> However, we still achieve the same goal of requiring the replacement
> transactions to have an ancestor score at least as high as the original ones.
I'm not sure if this holds...
Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending both A and C, where D pays 15 sat/vb for 100 vbytes and C pays 1 sat/vb for 1000 vbytes.

Package A + B ancestor score is 10 sat/vb.

D has a higher feerate/absolute fee than B.

Package A + C + D ancestor score is ~3 sat/vb ((A's 1000 sats + C's 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb)).
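For what it's worth, a quick numerical check of the scenario above, using the values given in the example:

```python
a_fee, a_vsize = 1000, 100   # A: 10 sat/vB * 100 vB (in mempool)
b_fee, b_vsize = 1000, 100   # B: 10 sat/vB * 100 vB (in mempool)
c_fee, c_vsize = 1000, 1000  # C: 1 sat/vB * 1000 vB (in package)
d_fee, d_vsize = 1500, 100   # D: 15 sat/vB * 100 vB (in package, spends A and C)

ab_score = (a_fee + b_fee) / (a_vsize + b_vsize)                     # 10.0 sat/vB
acd_score = (a_fee + c_fee + d_fee) / (a_vsize + c_vsize + d_vsize)  # ~2.92 sat/vB
assert acd_score < ab_score
```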
Overall, this is a review through the lens of LN requirements. I think other L2 protocols/applications could be candidates for using package accept/relay, such as:
* github.com/lightninglabs/pool
* github.com/discreetlogcontracts/dlcspecs
* github.com/bitcoin-teleport/teleport-transactions
* github.com/sapio-lang/sapio
* github.com/commerceblock/mercury/blob/master/doc/statechains.md
* github.com/revault/practical-revault
Thanks for moving the ball forward on this subject.
Antoine