Fail HTLCs from late counterparty commitment updates in ChannelMonitor #4434
Conversation
2d36c03 to cce8ccb
Codecov Report

```
@@            Coverage Diff             @@
##             main    #4434      +/-   ##
==========================================
+ Coverage   85.97%   86.05%   +0.08%
==========================================
  Files         159      159
  Lines      104722   105759    +1037
==========================================
+ Hits        90030    91013     +983
- Misses      12191    12227      +36
- Partials     2501     2519      +18
```
cce8ccb to ad3036e
TheBlueMatt left a comment:
Hmmmmmmmmm, it's really not quite so simple. If a monitor update is InProgress we generally expect the in-memory ChannelMonitor to have actually seen the new state update (though we expect that on restart there's a good chance it won't have that information and it'll be replayed). That means the monitor is theoretically "in charge" of resolving the HTLCs in question. While I think this patch is correct, I'm a bit hesitant to end up with two things in charge of resolving a specific HTLC. ISTM it also wouldn't be so complicated to immediately mark HTLCs as failed in the monitor upon receiving updates that try to add new HTLCs after a channel has been closed.
Isn't the problem with this that the monitor can't see whether the commitment was already sent to the peer or blocked by async persistence?
Right, the monitor has to assume it may have been sent to the peer. That's fine, though; it should be able to fail the HTLC(s) once the commitment transaction is locked.
Hmm, it seems that the original issue isn't an issue then. I confirmed that mining the force-close commitment tx indeed generates the payment failed event. Or is there still something missing?
In the general case (
The one that I identified wasn't a crash case. An assertion in the fuzzer identified that a payment was lost, but with my current understanding I think it's just that the fuzzer didn't push the force-close forward enough (mine). I'll close it for now and can look again if the payment remains lost even with confirmation of the commitment tx. A unit test did confirm that it should just work.
Reopening as this fix is now required for #4351. |
Now that we've decided that for deferred writes we no longer want to expect the in-memory ChannelMonitor to have seen the changes, I think the above doesn't apply anymore, and we might have to accept that two things are in charge of resolving a specific HTLC.
6421a28 to 8dac2fa
8dac2fa to da66b8e
TheBlueMatt left a comment:
> Now that we've decided that for deferred writes we no longer want to expect the in-memory ChannelMonitor to have seen the changes, I think the above doesn't apply anymore, and we might have to accept that two things are in charge of resolving a specific HTLC.
I don't think so. Instead, what we'll need to do is detect a new local counterparty state ChannelMonitorUpdate after the channel has closed and track it to fail the HTLCs from it once the commitment transaction has 6 confs.
Oh, actually no, I don't think we need to wait for 6 confs since we never sent the message, i.e. we can fail it immediately.
Does the monitor know that a message was never sent? #4434 (comment) suggests that we should assume we did send it. I steered Claude to a potential fix: main...joostjager:rust-lightning:chain-mon-internal-deferred-writes-fix-contract. I based it on the deferred writes branch so that tests can exercise delayed in-memory updates. Is this the direction you have in mind? It is unfortunate that we need to add this extra logic to a system that is already complicated.
56c813f to 25c9ca3
wpaulino left a comment:
LGTM, though CI is sad, likely due to the failed_back_htlc_ids change.
baa58e4 to f1364ba
The round-trip check failed because
```rust
None => {
	debug_assert!(false, "flush count exceeded queue length");
	return;
},
```
In release builds, this silently returns without logging when count exceeds the queue length. This can happen if flush is called concurrently (e.g., from TestChainMonitor::release_pending_monitor_events auto-flush racing with the background processor's explicit flush). A log_error! before the return would make such races observable in production, rather than silently under-flushing:
Suggested change:

```rust
None => {
	debug_assert!(false, "flush count exceeded queue length");
	log_error!(logger, "flush count exceeded queue length");
	return;
},
```
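To make the race concrete, here is a minimal, self-contained sketch of the pattern the suggestion describes. The `FlushQueue` type and its fields are purely illustrative stand-ins, not LDK API: when a racing caller has already drained entries, the flush clamps to what is available and logs rather than silently returning.

```rust
use std::collections::VecDeque;

/// Hypothetical queue type, illustrative only (not LDK code).
pub struct FlushQueue {
    pub pending: VecDeque<u64>,
    pub errors_logged: usize,
}

impl FlushQueue {
    /// Flush up to `count` entries. If a concurrent flusher already drained
    /// the queue, clamp the count and log the mismatch so the under-flush
    /// stays observable in release builds instead of silently returning.
    pub fn flush(&mut self, count: usize) -> Vec<u64> {
        let available = self.pending.len();
        if count > available {
            self.errors_logged += 1;
            eprintln!("flush count {} exceeded queue length {}", count, available);
        }
        self.pending.drain(..count.min(available)).collect()
    }
}

fn main() {
    let mut q = FlushQueue { pending: VecDeque::from(vec![1u64, 2, 3]), errors_logged: 0 };
    // Normal flush: count within bounds, no error recorded.
    assert_eq!(q.flush(2), vec![1, 2]);
    assert_eq!(q.errors_logged, 0);
    // A racing flusher already consumed entries; we clamp and log.
    assert_eq!(q.flush(5), vec![3]);
    assert_eq!(q.errors_logged, 1);
}
```

In the real code the `debug_assert!` still fires in test builds; the added log only changes behavior for release builds, where the assert compiles away.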
I think it's good; feel free to squash so we can land this. Claude had one comment that might be worth fixing, up to you.
9999385 to 337bef1
Squashed, but this needs to be rebased after #4351. Added the Claude suggestion to also log an excessive flush count.
```rust
	});
} else {
	// Funding spend still awaiting ANTI_REORG_DELAY: queue the failure.
	let (txid, transaction, height, block_hash) = pending_spend_entry.clone().unwrap();
```
Nit/perf: pending_spend_entry.clone().unwrap() is called inside the per-HTLC loop, cloning the Option<Transaction> each iteration. For a commitment with many HTLCs, this does N clones of potentially large transaction data. Consider destructuring once before the loop:
Suggested change:

```rust
let pending_data = pending_spend_entry.clone();
for (source, payment_hash, amount_msat) in htlcs {
	if is_source_known(source) {
		continue;
	}
	if self.counterparty_fulfilled_htlcs.get(&SentHTLCId::from_source(source)).is_some() {
		continue;
	}
	let htlc_value_satoshis = Some(amount_msat / 1000);
	let logger = WithContext::from(logger, None, None, Some(payment_hash));
	// Defensively mark the HTLC as failed back so the expiry-based failure
	// path in `block_connected` doesn't generate a duplicate `HTLCUpdate`
	// event for the same source.
	self.failed_back_htlc_ids.insert(SentHTLCId::from_source(source));
	if let Some(confirmed_txid) = self.funding_spend_confirmed {
```
Or alternatively, extract the pending_data tuple once and clone only transaction per entry (Txid and BlockHash are Copy).
Not sure if it's worth it; the tx is the large part of the clone anyway.
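For illustration, a minimal sketch of the perf nit above, with a hypothetical `BigPayload` type standing in for the transaction data (none of these names are LDK API). The counters make the difference explicit: one clone per loop iteration versus a single clone hoisted before the loop.

```rust
// Illustrative micro-example, not LDK code.
#[derive(Clone)]
struct BigPayload {
    bytes: Vec<u8>,
}

/// Clones the payload once per item: N clones total.
fn process_per_item(payload: &Option<BigPayload>, items: &[u32]) -> usize {
    let mut clone_count = 0;
    for _item in items {
        let _local = payload.clone(); // deep copy each iteration
        clone_count += 1;
    }
    clone_count
}

/// Hoists the clone out of the loop: one clone, reused by every iteration.
fn process_hoisted(payload: &Option<BigPayload>, items: &[u32]) -> usize {
    let _local = payload.clone(); // single deep copy up front
    let clone_count = 1;
    for _item in items {
        // work with `_local` here; no further clones
    }
    clone_count
}

fn main() {
    let payload = Some(BigPayload { bytes: vec![0u8; 1024] });
    let items = [1u32, 2, 3, 4];
    assert_eq!(process_per_item(&payload, &items), 4);
    assert_eq!(process_hoisted(&payload, &items), 1);
}
```

As the reply notes, whether the hoist matters depends on the payload size; here the `Vec<u8>` is the analogue of the large `Transaction` being cloned.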
```rust
// Check HTLC sources against all previously-known commitments to find truly new
// ones. After the update has been applied, `prev_counterparty_commitment_txid` holds
// what was `current` before this update, so it represents the already-known
// counterparty state. HTLCs already present in any of these will be handled by
// `fail_unbroadcast_htlcs` when the spending transaction confirms.
let is_source_known = |source: &HTLCSource| {
	if let Some(ref txid) = self.funding.prev_counterparty_commitment_txid {
		if let Some(htlc_list) = self.funding.counterparty_claimable_outpoints.get(txid) {
			if htlc_list.iter().any(|(_, s)| s.as_ref().map(|s| s.as_ref()) == Some(source)) {
				return true;
			}
		}
	}
	// Note that we don't care about the case where a counterparty sent us a fresh local commitment transaction
	// post-closure (with the `ChannelManager` still operating the channel). First of all we only care about
	// resolving outbound HTLCs, which fundamentally have to be initiated by us. However we also don't mind
	// looking at the current holder commitment transaction's HTLCs as any fresh outbound HTLCs will have to
	// first come in a locally-initiated update to the counterparty's commitment transaction which we can, by
	// refusing to apply the update, prevent the counterparty from ever seeing (as no messages can be sent until
	// the monitor is updated). Thus, the HTLCs we care about can never appear in the holder commitment
	// transaction.
	if holder_commitment_htlcs!(self, CURRENT_WITH_SOURCES).any(|(_, s)| s == Some(source)) {
		return true;
	}
	if let Some(mut iter) = holder_commitment_htlcs!(self, PREV_WITH_SOURCES) {
		if iter.any(|(_, s)| s == Some(source)) {
			return true;
		}
	}
	false
};
```
The is_source_known closure only checks prev_counterparty_commitment_txid (what was current before this update), not all entries in counterparty_claimable_outpoints. The comment at line 4413 explains the reasoning, and the holder commitment fallback covers additional cases.
Just want to confirm a subtle assumption: this correctness relies on the fact that the monitor always holds the immediately-prior counterparty commitment in prev after the update is applied (line 3502). If a single ChannelMonitorUpdate ever contained two LatestCounterpartyCommitment steps, the second step's iteration here would check prev (which is now the first step's commitment, not the original pre-batch state), and HTLCs from the first step would appear "known" even if they were truly new. The code is correct as long as each ChannelMonitorUpdate contains at most one counterparty commitment step — is that guaranteed at the ChannelManager level?
Analysis shows it isn't possible to have two LatestCounterpartyCommitment steps in the same update. Interesting find, though.
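The invariant the reply relies on could be made explicit with a small helper. The sketch below is hedged: `UpdateStep` is an illustrative stand-in for LDK's ChannelMonitorUpdateStep variants, not the real enum, and the helper name is hypothetical.

```rust
// Illustrative stand-in for ChannelMonitorUpdateStep (not the LDK enum).
enum UpdateStep {
    LatestCounterpartyCommitment,
    LatestHolderCommitment,
    PaymentPreimage,
}

/// Counts counterparty commitment steps in a monitor update, so the
/// "at most one per ChannelMonitorUpdate" invariant can be asserted.
fn counterparty_commitment_steps(update: &[UpdateStep]) -> usize {
    update
        .iter()
        .filter(|s| matches!(s, UpdateStep::LatestCounterpartyCommitment))
        .count()
}

fn main() {
    let update = vec![
        UpdateStep::PaymentPreimage,
        UpdateStep::LatestCounterpartyCommitment,
        UpdateStep::LatestHolderCommitment,
    ];
    // The invariant from the discussion: never two counterparty commitment
    // steps in a single update, so the `prev` check above stays sound.
    assert!(counterparty_commitment_steps(&update) <= 1);
    assert_eq!(counterparty_commitment_steps(&update), 1);
}
```

A `debug_assert!` on this count at the top of the update-application loop would turn the reviewed assumption into a checked one.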
When a ChannelMonitorUpdate containing a new counterparty commitment is dispatched (e.g. via deferred writes) before a channel force-closes, but only applied to the in-memory monitor after the commitment transaction has already confirmed on-chain, the outbound HTLCs in that update must be failed back.

Add fail_htlcs_from_update_after_funding_spend to ChannelMonitorImpl, which detects this race condition during update_monitor. When a LatestCounterpartyCommitmentTXInfo or LatestCounterpartyCommitment update is applied and the funding output has already been spent, the function iterates all outbound HTLCs from the update and creates OnchainEvent::HTLCUpdate entries for those that need to be failed back. These entries mature after ANTI_REORG_DELAY blocks, giving the peer time to potentially broadcast the newer commitment.

HTLCs that appear as non-dust outputs in the confirmed commitment (whether counterparty or holder) are skipped, as they will be resolved on-chain via the normal HTLC timeout/success path. HTLCs already fulfilled by the counterparty (tracked in counterparty_fulfilled_htlcs) are also skipped. Duplicate failures from previously-known counterparty commitments are handled gracefully by the ChannelManager.

AI tools were used in preparing this commit.
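The commit message's skip rules reduce to a simple set filter. This is a hedged sketch under stated assumptions: HTLCs are represented by plain `u32` ids, and `htlcs_to_fail`, `onchain_htlcs`, and `fulfilled` are illustrative names, not LDK API. An HTLC from the late update is queued for failure only if it is neither present in the confirmed commitment (it will resolve via the on-chain timeout/success path) nor already fulfilled by the counterparty.

```rust
use std::collections::HashSet;

/// Returns the HTLCs from a late update that should be queued for failure,
/// mirroring the skip rules: drop ids resolved on-chain and ids already
/// fulfilled by the counterparty. All names here are stand-ins.
fn htlcs_to_fail(
    update_htlcs: &[u32],
    onchain_htlcs: &HashSet<u32>,
    fulfilled: &HashSet<u32>,
) -> Vec<u32> {
    update_htlcs
        .iter()
        .copied()
        .filter(|id| !onchain_htlcs.contains(id) && !fulfilled.contains(id))
        .collect()
}

fn main() {
    let update = [1u32, 2, 3, 4];
    // HTLC 2 appears as a non-dust output in the confirmed commitment.
    let onchain: HashSet<u32> = vec![2u32].into_iter().collect();
    // HTLC 4 was already fulfilled by the counterparty.
    let fulfilled: HashSet<u32> = vec![4u32].into_iter().collect();
    assert_eq!(htlcs_to_fail(&update, &onchain, &fulfilled), vec![1, 3]);
}
```

In the real patch the surviving entries become OnchainEvent::HTLCUpdate items that mature after ANTI_REORG_DELAY blocks rather than being failed synchronously.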
337bef1 to 3e1a18c
Rebased after merge of #4351.
```rust
// Note that we don't care about the case where a counterparty sent us a fresh local commitment transaction
// post-closure (with the `ChannelManager` still operating the channel). First of all we only care about
// resolving outbound HTLCs, which fundamentally have to be initiated by us. However we also don't mind
// looking at the current holder commitment transaction's HTLCs as any fresh outbound HTLCs will have to
// first come in a locally-initiated update to the counterparty's commitment transaction which we can, by
// refusing to apply the update, prevent the counterparty from ever seeing (as no messages can be sent until
// the monitor is updated). Thus, the HTLCs we care about can never appear in the holder commitment
// transaction.
```
Nit: The comment concludes "the HTLCs we care about can never appear in the holder commitment transaction" — yet lines 4443-4451 proceed to check them anyway. Could you rephrase to say this is a defensive/belt-and-suspenders check? Currently it reads like the check below shouldn't be needed at all.
```rust
let is_source_known = |source: &HTLCSource| {
	if let Some(ref txid) = self.funding.prev_counterparty_commitment_txid {
		if let Some(htlc_list) = self.funding.counterparty_claimable_outpoints.get(txid) {
			if htlc_list.iter().any(|(_, s)| s.as_ref().map(|s| s.as_ref()) == Some(source)) {
				return true;
			}
		}
	}
```
Minor: is_source_known checks self.funding.prev_counterparty_commitment_txid but not self.pending_funding[*].prev_counterparty_commitment_txid. In a splice scenario where the confirmed funding spend is for a pending funding output, an HTLC that was already known in that pending funding's previous counterparty commitment would be incorrectly treated as "new" and failed. Worth a TODO or a check against pending funding scopes as well?
```rust
// When the counterparty commitment confirms, FundingSpendConfirmation matures
// immediately (no CSV delay), so funding_spend_confirmed is set. The new payment's
// commitment update then triggers immediate HTLC failure, generating payment events
// alongside the channel close event.
```
"FundingSpendConfirmation matures immediately (no CSV delay)" is misleading — FundingSpendConfirmation still requires ANTI_REORG_DELAY blocks to mature (the on_local_output_csv only adds additional delay beyond ANTI_REORG_DELAY). It appears that the payment failure events here actually come from the ChannelManager's force-close logic (failing the in-flight HTLC when CommitmentTxConfirmed is processed), not from the monitor's immediate HTLC failure path. Consider clarifying the comment.
TheBlueMatt left a comment:
Thanks. Changes since @wpaulino's ACK are trivial so gonna land.
Closes #4431