A0-0000: Request missing blocks much more aggressively (#1890)
# Description

We couldn't do this before because substrate sync reacted poorly to large numbers of requests. This should help with the desync problems between versions 13 and 14 during updates.

## Type of change

- Bug fix (non-breaking change which fixes an issue)

# Checklist:

- I have made corresponding changes to the existing documentation
timorleph authored Dec 19, 2024
1 parent 24d2673 commit ea1e23b
Showing 1 changed file with 34 additions and 18 deletions.
finality-aleph/src/data_io/data_store.rs
@@ -100,8 +100,8 @@ impl Default for DataStoreConfig {
             max_proposals_pending: 80_000,
             max_messages_pending: 40_000,
             available_proposals_cache_capacity: NonZeroUsize::new(8000).unwrap(),
-            periodic_maintenance_interval: Duration::from_secs(25),
-            request_block_after: Duration::from_secs(20),
+            periodic_maintenance_interval: Duration::from_secs(10),
+            request_block_after: Duration::from_secs(5),
         }
     }

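With the new defaults, maintenance runs every 10 seconds and a still-missing block is re-requested after 5 seconds of waiting, instead of the previous 25 and 20 seconds. As a minimal sketch (not part of the commit) of what these defaults amount to, assuming a plain struct with the fields visible in the hunk above — the real `DataStoreConfig` in `data_store.rs` may carry additional derives and visibility:

```rust
use std::{num::NonZeroUsize, time::Duration};

// Illustrative only: a config mirroring the fields visible in the hunk above.
struct DataStoreConfig {
    max_proposals_pending: usize,
    max_messages_pending: usize,
    available_proposals_cache_capacity: NonZeroUsize,
    periodic_maintenance_interval: Duration,
    request_block_after: Duration,
}

impl Default for DataStoreConfig {
    fn default() -> Self {
        DataStoreConfig {
            max_proposals_pending: 80_000,
            max_messages_pending: 40_000,
            available_proposals_cache_capacity: NonZeroUsize::new(8000).unwrap(),
            // Maintenance now runs every 10 s (was 25 s) ...
            periodic_maintenance_interval: Duration::from_secs(10),
            // ... and a missing block is re-requested after 5 s of waiting (was 20 s).
            request_block_after: Duration::from_secs(5),
        }
    }
}
```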
@@ -151,7 +151,8 @@ impl Display for Error {
 // In case any of them is not, the message `m` receives a fresh `MessageId` and the message and all the pending
 // proposals are added to our "pending list". The dependencies between the message and the proposals it waits for
 // are also tracked. At the very first moment when the last pending proposal of the message becomes available, the
-// message is removed from the pending list and is output on "the other side" of DataStore.
+// message is removed from the pending list and is output on "the other side" of DataStore. We
+// also immediately request any missing blocks.
 // 3) It is crucial for DataStore to use a bounded amount of memory, which is perhaps the hardest challenge when implementing it.
 // There are constants in the `DataStoreConfig` that determine maximum possible amounts of messages and proposals that
 // can be pending at the same time. When any of the limits is exceeded, we keep dropping messages (starting from
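As an aside, a deliberately simplified sketch of the bookkeeping described in points 2) and 3) of the comment above: messages wait on missing blocks, become ready when the last one arrives, and the oldest messages are dropped once the limit is exceeded. All names and types here are placeholders, not the crate's actual ones, and the real DataStore tracks considerably more state.

```rust
use std::collections::{BTreeMap, HashMap};

// Placeholders, not the crate's actual types.
type MessageId = u64;
type BlockId = u64;

struct PendingBookkeeping {
    max_messages_pending: usize,
    next_id: MessageId,
    // Pending messages keyed by id; ids grow, so the first key is the oldest.
    // The value is the number of blocks the message still waits for.
    pending: BTreeMap<MessageId, usize>,
    // For every missing block, the pending messages waiting on it.
    waiting_on: HashMap<BlockId, Vec<MessageId>>,
}

impl PendingBookkeeping {
    fn new(max_messages_pending: usize) -> Self {
        PendingBookkeeping {
            max_messages_pending,
            next_id: 0,
            pending: BTreeMap::new(),
            waiting_on: HashMap::new(),
        }
    }

    // Point 2): register a message that waits for the given missing blocks.
    fn add_pending(&mut self, missing: Vec<BlockId>) -> MessageId {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, missing.len());
        for block in missing {
            self.waiting_on.entry(block).or_default().push(id);
        }
        // Point 3): bounded memory -- drop the oldest messages above the limit.
        while self.pending.len() > self.max_messages_pending {
            let oldest = *self.pending.keys().next().expect("non-empty");
            self.pending.remove(&oldest);
        }
        id
    }

    // A block became available: return the ids of messages that are now complete.
    fn block_available(&mut self, block: &BlockId) -> Vec<MessageId> {
        let mut ready = Vec::new();
        for id in self.waiting_on.remove(block).unwrap_or_default() {
            let complete = match self.pending.get_mut(&id) {
                Some(still_missing) => {
                    *still_missing -= 1;
                    *still_missing == 0
                }
                None => false, // already dropped for memory reasons
            };
            if complete {
                self.pending.remove(&id);
                ready.push(id);
            }
        }
        ready
    }
}
```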
@@ -304,6 +305,31 @@ where
         self.on_block_finalized(highest_finalized);
     }

+    fn acquire_highest_block(
+        &mut self,
+        proposal: &AlephProposal<H::Unverified>,
+        time_waiting: &Duration,
+    ) -> bool {
+        let header = proposal.top_block_header();
+        let block_id = proposal.top_block();
+        if self.chain_info_provider.is_block_imported(&block_id) {
+            return false;
+        }
+        debug!(
+            target: LOG_TARGET,
+            "Requesting a block {:?} after it has been missing for {:?} secs.",
+            block_id,
+            time_waiting.as_secs()
+        );
+        if let Err(e) = self.block_requester.request_block(header) {
+            warn!(
+                target: LOG_TARGET,
+                "Error requesting block {:?}, {}.", block_id, e
+            );
+        }
+        true
+    }
+
     fn run_maintenance(&mut self) {
         self.update_highest_finalized();

@@ -343,21 +369,9 @@
                 _ => continue,
             };

-            let header = proposal.top_block_header();
-            let block_id = proposal.top_block();
-            if !self.chain_info_provider.is_block_imported(&block_id) {
-                debug!(
-                    target: LOG_TARGET,
-                    "Requesting a block {:?} after it has been missing for {:?} secs.",
-                    block_id,
-                    time_waiting.as_secs()
-                );
-                if let Err(e) = self.block_requester.request_block(header) {
-                    warn!(
-                        target: LOG_TARGET,
-                        "Error requesting block {:?}, {}.", block_id, e
-                    );
-                }
+            if self.acquire_highest_block(&proposal, &time_waiting) {
+                // We need the highest block in this proposal and have just now sent a request for
+                // it.
                 continue;
             }
             // The top block (thus the whole branch, in the honest case) has been imported. What's holding us
@@ -565,6 +579,8 @@ where
         let status = self.check_proposal_availability(proposal, None);
         match &status {
             Pending(PendingTopBlock) => {
+                // Immediately request the missing block.
+                self.acquire_highest_block(proposal, &Duration::ZERO);
                 self.pending_proposals
                     .insert(proposal.clone(), PendingProposalInfo::new(status));
                 self.register_block_import_trigger(proposal, &proposal.top_block());
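For a rough sense of the effect, a back-of-the-envelope sketch (not part of the commit): before this change a missing top block was only requested during periodic maintenance, and only once the proposal had been waiting for `request_block_after`, so the first request could trail the proposal's arrival by up to roughly `request_block_after + periodic_maintenance_interval`; now the request goes out as soon as the pending proposal is seen, and is repeated by the more frequent maintenance loop.

```rust
use std::time::Duration;

// Illustrative only: an upper bound on how long a missing block could go
// unrequested before this change, versus after it.
fn old_first_request_upper_bound(
    periodic_maintenance_interval: Duration,
    request_block_after: Duration,
) -> Duration {
    // The first request happened at the first maintenance tick at which the
    // proposal had already been pending for at least `request_block_after`.
    request_block_after + periodic_maintenance_interval
}

fn main() {
    let old = old_first_request_upper_bound(Duration::from_secs(25), Duration::from_secs(20));
    // New behaviour: the block is requested immediately when the pending
    // proposal arrives, then re-requested by maintenance every 10 s.
    let new = Duration::ZERO;
    println!("old worst case: up to ~{old:?}, new: {new:?}");
}
```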
