This repository has been archived by the owner on Jan 22, 2025. It is now read-only.

Graph-based Replay Algorithm and Other Latency/Throughput Changes #27720

Closed
wants to merge 13 commits into from

Conversation

@buffalu (Contributor) commented Sep 12, 2022

Original discussion: #27632
Metrics of similar code running on 1.10: https://metrics.solana.com:3000/d/KCLhfAbMz/replay?orgId=1&var-datasource=InfluxDB_main-beta&var-testnet=mainnet-beta&var-hostid=DuSw81G5jUYQ4Z3ujYmoY2ULbqwKYBLnNdwPLw4S2wSM&from=now-15m&to=now

Problem

Given the following txs that write-lock accounts A-H:

tx1: AB
tx2: BC
tx3: CD
tx4: EF
tx5: FG
tx6: GH

the current algorithm replays these sequentially: [AB], [BC], [CD], [EF], [FG], [GH]. Since tx1/tx4, tx2/tx5, and tx3/tx6 don't conflict, we should be able to execute non-conflicting transactions in parallel and, as each one completes, determine which transactions become unblocked.
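To make the idea concrete, here is a minimal, hypothetical sketch (not the PR's actual code; accounts are single chars and each transaction is just its write-lock set) of building the dependency graph and grouping transactions into waves that can run in parallel. A real scheduler would dispatch work dynamically as dependencies complete rather than in fixed waves.

```rust
use std::collections::HashMap;

/// For each transaction (given as the list of accounts it write-locks),
/// find the indices of earlier transactions it conflicts with.
fn build_dependency_graph(txs: &[Vec<char>]) -> Vec<Vec<usize>> {
    // last transaction index that write-locked each account
    let mut last_locker: HashMap<char, usize> = HashMap::new();
    let mut deps = vec![Vec::new(); txs.len()];
    for (idx, accounts) in txs.iter().enumerate() {
        for &account in accounts {
            if let Some(&prev) = last_locker.get(&account) {
                deps[idx].push(prev);
            }
            last_locker.insert(account, idx);
        }
        deps[idx].sort_unstable();
        deps[idx].dedup();
    }
    deps
}

/// Group transactions into "waves": every tx in a wave has all of its
/// dependencies in earlier waves, so a wave can execute in parallel.
fn schedule_waves(deps: &[Vec<usize>]) -> Vec<Vec<usize>> {
    let mut wave_of = vec![0usize; deps.len()];
    let mut max_wave = 0;
    for i in 0..deps.len() {
        let wave = deps[i].iter().map(|&d| wave_of[d] + 1).max().unwrap_or(0);
        wave_of[i] = wave;
        max_wave = max_wave.max(wave);
    }
    let mut waves = vec![Vec::new(); max_wave + 1];
    for (i, &wave) in wave_of.iter().enumerate() {
        waves[wave].push(i);
    }
    waves
}

fn main() {
    // tx1..tx6 from the example: AB, BC, CD, EF, FG, GH (0-indexed here)
    let txs = vec![
        vec!['A', 'B'],
        vec!['B', 'C'],
        vec!['C', 'D'],
        vec!['E', 'F'],
        vec!['F', 'G'],
        vec!['G', 'H'],
    ];
    let waves = schedule_waves(&build_dependency_graph(&txs));
    // tx1/tx4 run together, then tx2/tx5, then tx3/tx6
    println!("{:?}", waves); // [[0, 3], [1, 4], [2, 5]]
}
```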

Summary of Changes

  • Uses a transaction dependency graph and a multi-threaded executor to spend 25-100% less time executing transactions.
  • When replaying a bank, continue to call confirm_slot as long as entries are being received. This results in ~200ms-400ms lower latency for slot processing.

Proof

Yellow is a leader setting a new bank in maybe_start_leader, red is the improved replay algorithm plus the confirm_slot change, and blue is the default. This is all on 1.10.

image

DuS running this modified code on 1.10
Screen Shot 2022-09-12 at 12 29 14 AM

DuS constantly beating the unmodified validator by ~200ms on average
image

@mergify mergify bot added the community Community contribution label Sep 12, 2022
@mergify mergify bot requested a review from a team September 12, 2022 04:33
}
}

impl Replayer {
Contributor Author (buffalu):

@apfitzge i think we can probably use this for banking stage since it's pretty generic

Contributor:

Yeah looks like a lot of this can be reused for banking. Still working in incremental steps to get banking stage into a state where we can do it 😄

let mut timings = ExecuteTimings::default();

let txs = vec![tx_and_idx.0];
let mut batch =
Contributor Author (buffalu):

expecting some comments on this :)

Contributor:

ha, this does seem odd. I'm guessing that you've left it as a batch here for a few reasons:

  1. that's how it's executed/committed
  2. this will be easier to re-use in the generic scheduler

Contributor Author (buffalu):

yeah... right now i'm assuming the scheduler algorithm is perfect and doesn't have any lock failures. it seems like that's okay, but it's a pretty big change from what the system does now

/// A handle to the replayer. Each replayer handle has a separate channel that responses are sent on.
/// This means multiple threads can have a handle to the replayer and results are sent back to the
/// correct replayer handle each time.
impl ReplayerHandle {
Contributor Author (buffalu):

thoughts on naming wrt this and struct below?

Contributor:

I think they are fine. Only real critique would be that the file is named executor.rs, so it's a bit odd these aren't named similarly.

Contributor:

Had this in my mind for the past few days, not a strong opinion, but ReplayExecutor or something along those lines is preferable to me

@buffalu buffalu changed the title Massively improve replay throughput and latency Graph-based Replay Algorithm and Other Latency/Throughput Changes Sep 12, 2022
@apfitzge apfitzge self-requested a review September 12, 2022 14:45
transactions_indices_to_schedule.extend(
transactions
.iter()
.cloned()
Contributor:

we did an into_iter() previously, not sure if that would be better than cloning here.

.for_each(
|ReplayResponse {
result,
timing: _,
Contributor:

can we remove this if we aren't going to use it?

Contributor Author (buffalu):

yeah lemme figure out where it was used and add back in!

timing: _,
batch_idx,
}| {
let batch_idx = batch_idx.unwrap();
Contributor:

If we always unwrap these from the result, can we remove the Option?

Contributor Author (buffalu):

the idea here was that if we decide to use it for banking stage, we might not have a batch index. but i'll go ahead and make it a usize and we can circle back to banking stage stuff later

if let Some((_, err)) = transaction_results
.iter()
.enumerate()
.find(|(_, r)| r.is_err())
Contributor:

This iteration seems unnecessary. Could we just have a mut bool seen_error = false; and set it to true at line 185 if there's an error?

If I'm missing something and we can't, then we should at least remove the enumerate() that isn't being used.

.enumerate()
.find(|(_, r)| r.is_err())
{
while processing_state
Contributor:

In contrast to the above comment, I think this is probably fine since we'll only do this iteration if we hit an error.


let pre_process_units: u64 = aggregate_total_execution_units(timings);

let (tx_results, balances) = batch.bank().load_execute_and_commit_transactions(
Contributor:

nit: seems odd to me that we use the passed Arc<Bank> everywhere else, but then the bank on the batch for this.
There are no situations where these should be different, so better to be consistent imo.

Contributor Author (buffalu):

ah yeah, lemme use bank here

balances,
token_balances,
rent_debits,
transaction_indexes.to_vec(),
Contributor:

isn't this already a vec?

// build a map whose key is a pubkey + value is a sorted vector of all indices that
// lock that account
let mut indices_read_locking_account = HashMap::new();
let mut indicies_write_locking_account = HashMap::new();
Contributor:

Suggested change
let mut indicies_write_locking_account = HashMap::new();
let mut indices_write_locking_account = HashMap::new();

will also have to change the below references to the variable.
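The index maps described in the snippet above could be built roughly like this (a simplified, hypothetical sketch; pubkeys are plain strings here and `TxLocks` is an invented stand-in for the real lock type). Pushing indices while iterating transactions in batch order keeps each vector sorted without an explicit sort.

```rust
use std::collections::HashMap;

/// Accounts a transaction read-locks and write-locks (pubkeys as strings here).
struct TxLocks {
    readonly: Vec<&'static str>,
    writable: Vec<&'static str>,
}

/// Build account -> sorted indices of the transactions that lock it.
fn build_lock_maps(
    locks: &[TxLocks],
) -> (
    HashMap<&'static str, Vec<usize>>,
    HashMap<&'static str, Vec<usize>>,
) {
    let mut indices_read_locking_account: HashMap<&'static str, Vec<usize>> = HashMap::new();
    let mut indices_write_locking_account: HashMap<&'static str, Vec<usize>> = HashMap::new();
    for (idx, tx) in locks.iter().enumerate() {
        for &account in &tx.readonly {
            // indices are pushed in ascending order, so each Vec stays sorted
            indices_read_locking_account
                .entry(account)
                .or_insert_with(Vec::new)
                .push(idx);
        }
        for &account in &tx.writable {
            indices_write_locking_account
                .entry(account)
                .or_insert_with(Vec::new)
                .push(idx);
        }
    }
    (indices_read_locking_account, indices_write_locking_account)
}

fn main() {
    let locks = vec![
        TxLocks { readonly: vec!["payer"], writable: vec!["A", "B"] },
        TxLocks { readonly: vec![], writable: vec!["B", "C"] },
    ];
    let (_reads, writes) = build_lock_maps(&locks);
    println!("B is write-locked by txs {:?}", writes["B"]); // [0, 1]
}
```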

.map(|(idx, account_locks)| {
// uses measured value from mainnet; rarely see more than 30 conflicts or so
let mut dep_graph = HashSet::with_capacity(DEFAULT_CONFLICT_SET_SIZE);
let readlock_accs = account_locks.writable.iter();
Contributor:

I get what's going on, but I think the naming of these readlock_accs and writelock_accs can be confusing at a glance. Still trying to come up with a better name though...

Comment on lines +363 to +369
if let Some(err) = tx_account_locks_results.iter().find(|r| r.is_err()) {
err.clone()?;
}
let transaction_locks: Vec<_> = tx_account_locks_results
.iter()
.map(|r| r.as_ref().unwrap())
.collect();
@auterium commented Sep 14, 2022:

You can avoid the double iteration & the unwrap() by collecting Vec<Result<T, E>> into Result<Vec<T>, E>

Suggested change
if let Some(err) = tx_account_locks_results.iter().find(|r| r.is_err()) {
err.clone()?;
}
let transaction_locks: Vec<_> = tx_account_locks_results
.iter()
.map(|r| r.as_ref().unwrap())
.collect();
let transaction_locks = tx_account_locks_results
.iter()
.map(|r| r.as_ref())
.collect::<std::result::Result<Vec<&TransactionAccountLocks>, _>>()
.map_err(|e| e.clone())?;

@auterium:

Perhaps taking it a step further and skipping these 2 full iterations entirely by changing the format of the next one as:

    for (idx, tx_account_locks) in transaction_locks.iter().enumerate() {
        let tx_account_locks = tx_account_locks.as_ref().map_err(|e| e.clone())?;
        
        for account in &tx_account_locks.readonly {
            // ...
        }
        for account in &tx_account_locks.writable {
            // ...
        }
    }
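As a standalone illustration of the collect-into-Result pattern suggested above (toy types, not the PR's actual TransactionAccountLocks): collecting an iterator of Result<T, E> into Result<Vec<T>, E> short-circuits on the first Err, replacing the find()-then-unwrap() double pass.

```rust
/// Toy stand-in for the per-transaction account-lock check.
fn validate_lock_count(num_locks: usize, max: usize) -> Result<usize, String> {
    if num_locks <= max {
        Ok(num_locks)
    } else {
        Err(format!("too many account locks: {}", num_locks))
    }
}

/// Collect all per-transaction results at once; the first Err aborts the
/// whole collection, so no separate find()/unwrap() pass is needed.
fn collect_locks(counts: &[usize], max: usize) -> Result<Vec<usize>, String> {
    counts
        .iter()
        .map(|&n| validate_lock_count(n, max))
        .collect()
}

fn main() {
    println!("{:?}", collect_locks(&[2, 5, 3], 64)); // Ok([2, 5, 3])
    println!("{:?}", collect_locks(&[2, 128], 64)); // Err("too many account locks: 128")
}
```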

// more entries may have been received while replaying this slot.
// looping over this ensures that slots will be processed as fast as possible with the
// lowest latency.
while did_process_entries {

Doesn't this give preference to the current bank over other banks?

Contributor Author (buffalu):

seems ideal? you need to finish the bank before you can vote and work on the next one
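The loop shape being discussed can be sketched minimally (hypothetical names; the real code calls confirm_slot with the bank and blockstore): keep replaying the same bank while each pass processed new entries, so a slot completes with the lowest latency before moving on.

```rust
/// Keep invoking one replay pass over the current bank's entries until a
/// pass processes nothing new. `process_entries` stands in for confirm_slot:
/// it returns true if it processed any entries this pass.
fn replay_until_drained(mut process_entries: impl FnMut() -> bool) -> u32 {
    let mut passes = 0;
    let mut did_process_entries = true;
    while did_process_entries {
        did_process_entries = process_entries();
        passes += 1;
    }
    passes
}

fn main() {
    // Simulate two passes that find entries, then one that finds none.
    let mut remaining = 2;
    let passes = replay_until_drained(move || {
        if remaining > 0 {
            remaining -= 1;
            true
        } else {
            false
        }
    });
    println!("replay passes: {}", passes); // 3
}
```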

@offerm commented Sep 14, 2022

@buffalu (Contributor Author) commented Sep 14, 2022

> @buffalu doesn't this node look even better?
>
> https://metrics.solana.com:3000/d/KCLhfAbMz/replay?orgId=1&var-datasource=InfluxDB_main-beta&var-testnet=mainnet-beta&var-hostid=9mmFWANHuRb76VqCkEERNDu1Pu9osxUtaAE2k6GheEm7&from=now-15m&to=now
>
> See Store Timings and Load/store/execute

that one also looks good, but hard to know what kind of hardware they're running. my setup is running on the same hardware in the same data center

@offerm commented Sep 14, 2022

That is my node so it is easy to know that.

image

What is your setup?

@buffalu (Contributor Author) commented Sep 14, 2022

> That is my node so it is easy to know that.
>
> image
>
> What is your setup?

7502P
512GB RAM

accounts on nvme.

i think the accounts store is higher because i'm not batch-writing like yours might be during replay. it's only writing one tx worth of accounts at a time compared to multiple (potentially in parallel).

https://www.cpubenchmark.net/compare/AMD-EPYC-7443P-vs-AMD-EPYC-7502P/4391vs3538

are you running stock 1.10?

@buffalu (Contributor Author) commented Sep 14, 2022

i'm not running stock 1.10 on any servers right now, but lemme spin one up and compare for slots at the same time; will send a graph over later today.

@offerm commented Sep 14, 2022

> That is my node so it is easy to know that.
> image
> What is your setup?
>
> 7502P 512GB RAM
>
> accounts on nvme.
>
> i think the accounts store is higher bc im not batch writing like yours might be during replay. its only writing one tx worth of accounts at a time compared to multiple (potentially in parallel).
>
> https://www.cpubenchmark.net/compare/AMD-EPYC-7443P-vs-AMD-EPYC-7502P/4391vs3538
>
> are you running stock 1.10?

running 1.10.38 plus a very small and riskless patch. I plan to issue a PR in a day or two

If you share WS endpoint I can run some compare utils

@buffalu (Contributor Author) commented Sep 14, 2022

nice! related to the blockstore signal issue? i noticed your blockstore wait-elapsed metrics aren't as striped as mine are
Screen Shot 2022-09-14 at 1 31 46 PM

@buffalu (Contributor Author) commented Sep 14, 2022

9mm vs. our node. seems like we spend less time replaying, but the blockstore signal would make it even faster
Screen Shot 2022-09-14 at 3 26 09 PM

@apfitzge (Contributor) left a comment:

some higher-level comments than the previous round.

@@ -367,6 +380,7 @@ fn bench_process_entries(randomize_txs: bool, bencher: &mut Bencher) {
&keypairs,
initial_lamports,
num_accounts,
&replayer_handle,
);
});
}
Contributor:

think I made a similar comment on the previous PR, but it'd be good if we could join the replayer threads here.

Contributor Author (buffalu):

sg! will def do before cleaning up; would like to get high-level comments out of the way before going in and fixing/adding tests, just to make sure we're on the right path

) -> Vec<ReplaySlotFromBlockstore> {
// Make mutable shared structures thread safe.
let progress = RwLock::new(progress);
let longest_replay_time_us = AtomicU64::new(0);

let bank_slots_replayers: Vec<_> = active_bank_slots
Contributor:

naming nit: makes it seem like there are multiple replayers, but these are actually just different handles

.collect()
}

pub fn join(self) -> thread::Result<()> {
Contributor:

This is unused, but should be called in several places.

// get_max_thread_count to match number of threads in the old code.
// see: https://github.com/solana-labs/solana/pull/24853
lazy_static! {
static ref PAR_THREAD_POOL: ThreadPool = rayon::ThreadPoolBuilder::new()
Contributor:

Trying to get a clear summary of the different threads involved in the old replay and the new one.
Old:

  1. PAR_THREAD_POOLS - this has num_cpus::get() threads. Used for rebatching and executing transactions.
  2. main threads that use the thread_pool to execute transactions.

New:

  1. PAR_THREAD_POOL - this has num_cpus::get() threads. It is used for building the dependency graphs in slots.
  2. Replayer.threads - depends on passed parameter, but seems to be get_thread_count() consistently, which is num_cpus::get() / 2. These do the actual work of executing transactions during replay.
  3. main threads that call into the generate_dependency_graph.

Let me know if the above summary is incorrect.

Contributor Author (buffalu):

yes that seems correct

let txs = vec![tx_and_idx.0];
let mut batch =
TransactionBatch::new(vec![Ok(())], &bank, Cow::Borrowed(&txs));
batch.set_needs_unlock(false);
Contributor:

Couple of questions on this as I'm reading through again:

  1. We have a bank, why not grab the account locks for this account? Definitely performance wise not grabbing is more efficient, but it'd probably be better to do this replay separation, then removing locking in separate changes. Once we're convinced we don't need the locks anymore.
  2. I'm not entirely convinced this can actually get rid of the locks. If we are replaying active banks concurrently, is there something I'm not aware of that prevents those from touching the same account(s)?

Contributor Author (buffalu):

yes, we can do that. if it's locked, perhaps we should continually spin until it's unlocked, perhaps with a timeout or assert?

Contributor Author (buffalu):

alternatively, could grab the lock in caller?

Contributor:

I feel it's more intuitive to have the executor do the locking. Yeah we could either just spin until it's unlocked, or even add it to some queue of blocked work that is rechecked if unblocked every loop.

I imagine spinning is probably fine though, since it sounds like you've not hit this yet and it's probably a rare edge case.

cost_capacity_meter: Arc<RwLock<BlockCostCapacityMeter>>,
tx_cost: u64,
tx_costs: &[u64],
Contributor:

I know this isn't your name, but it would be nice if we changed this to tx_costs_without_bpf to make it clear this does not include bpf costs.

// build a resource-based dependency graph
let tx_account_locks_results: Vec<Result<_>> = transactions_indices_to_schedule
.iter()
.map(|(tx, _)| tx.get_account_locks(MAX_TX_ACCOUNT_LOCKS))
Contributor:

should we use bank.get_transaction_account_lock_limit() here?

Contributor Author (buffalu):

yes

@buffalu (Contributor Author) commented Sep 14, 2022

> 9mm vs. our node. seems like we spend less time replaying, but the blockstore signal would make it even faster Screen Shot 2022-09-14 at 3 26 09 PM

major time differences probably related to cpu type, although #27786 seems like it's on the right track

@buffalu (Contributor Author) commented Sep 25, 2022

this is running a PR similar to this one on 1.10

$ solana-ledger-tool -l /solana/ledger verify --accounts-db-skip-shrink --skip-poh-verify

[2022-09-25T16:24:01.919942961Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309402, last root slot=152309401 slots=22 slots/s=11.0 txs/s=24338
[2022-09-25T16:24:03.988717424Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309426, last root slot=152309425 slots=24 slots/s=12.0 txs/s=25624
[2022-09-25T16:24:06.026790354Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309445, last root slot=152309444 slots=18 slots/s=9.0 txs/s=20035
[2022-09-25T16:24:08.094518133Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309467, last root slot=152309466 slots=22 slots/s=11.0 txs/s=25188
[2022-09-25T16:30:11.337902809Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309494, last root slot=152309493 slots=23 slots/s=0.063360885 txs/s=126.62534
[2022-09-25T16:30:13.394006052Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309505, last root slot=152309504 slots=11 slots/s=5.5 txs/s=11710
[2022-09-25T16:30:15.430229244Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309520, last root slot=152309519 slots=14 slots/s=7.0 txs/s=15394.5
[2022-09-25T16:30:17.432532698Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309538, last root slot=152309537 slots=18 slots/s=9.0 txs/s=19554
[2022-09-25T16:30:19.433093969Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309559, last root slot=152309558 slots=20 slots/s=10.0 txs/s=22360
[2022-09-25T16:30:25.106557398Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309586, last root slot=152309585 slots=27 slots/s=5.4 txs/s=11055.6
[2022-09-25T16:30:27.181299191Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309611, last root slot=152309610 slots=25 slots/s=12.5 txs/s=28852.5
[2022-09-25T16:30:29.320044840Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309637, last root slot=152309636 slots=26 slots/s=13.0 txs/s=28246.5
[2022-09-25T16:30:31.402811378Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309650, last root slot=152309649 slots=13 slots/s=6.5 txs/s=13711
[2022-09-25T16:30:33.418032712Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309667, last root slot=152309666 slots=13 slots/s=6.5 txs/s=14188.5
[2022-09-25T16:30:38.639550394Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309681, last root slot=152309680 slots=14 slots/s=2.8 txs/s=6042.4
[2022-09-25T16:30:40.676487039Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309700, last root slot=152309699 slots=19 slots/s=9.5 txs/s=19836
[2022-09-25T16:30:42.739981822Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309721, last root slot=152309720 slots=21 slots/s=10.5 txs/s=23521.5
[2022-09-25T16:30:44.869618049Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309745, last root slot=152309744 slots=24 slots/s=12.0 txs/s=26302
[2022-09-25T16:30:46.884860687Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309766, last root slot=152309765 slots=21 slots/s=10.5 txs/s=22643
[2022-09-25T16:30:51.996034653Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309786, last root slot=152309785 slots=20 slots/s=4.0 txs/s=8631.6
[2022-09-25T16:30:54.081147673Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309809, last root slot=152309808 slots=23 slots/s=11.5 txs/s=23685.5
[2022-09-25T16:30:56.112071190Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309834, last root slot=152309833 slots=25 slots/s=12.5 txs/s=27075.5
[2022-09-25T16:30:58.121677452Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309856, last root slot=152309855 slots=21 slots/s=10.5 txs/s=22782.5
[2022-09-25T16:31:00.147473833Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309882, last root slot=152309881 slots=26 slots/s=13.0 txs/s=26778.5
[2022-09-25T16:31:05.233958658Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309904, last root slot=152309903 slots=17 slots/s=3.4 txs/s=6751.2
[2022-09-25T16:31:07.262726628Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309925, last root slot=152309924 slots=21 slots/s=10.5 txs/s=23799.5
[2022-09-25T16:31:09.330421790Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309949, last root slot=152309948 slots=24 slots/s=12.0 txs/s=24801
[2022-09-25T16:31:11.366256114Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152309974, last root slot=152309973 slots=25 slots/s=12.5 txs/s=25663.5
[2022-09-25T16:31:13.431940966Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310000, last root slot=152309999 slots=26 slots/s=13.0 txs/s=27907
[2022-09-25T16:31:18.476717888Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310019, last root slot=152310018 slots=19 slots/s=3.8 txs/s=8362.2
[2022-09-25T16:31:20.489008177Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310041, last root slot=152310040 slots=22 slots/s=11.0 txs/s=24172.5
[2022-09-25T16:31:22.546791392Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310058, last root slot=152310057 slots=16 slots/s=8.0 txs/s=17342
[2022-09-25T16:31:24.562035930Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310081, last root slot=152310080 slots=20 slots/s=10.0 txs/s=18755.5
[2022-09-25T16:31:26.606234669Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310105, last root slot=152310104 slots=24 slots/s=12.0 txs/s=25972.5
[2022-09-25T16:31:31.567822565Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310128, last root slot=152310127 slots=23 slots/s=5.75 txs/s=11936.5
[2022-09-25T16:31:33.628225514Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310151, last root slot=152310150 slots=23 slots/s=11.5 txs/s=25317
[2022-09-25T16:31:35.743172901Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310177, last root slot=152310176 slots=24 slots/s=12.0 txs/s=22780
[2022-09-25T16:31:37.750731471Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310200, last root slot=152310199 slots=23 slots/s=11.5 txs/s=24424
[2022-09-25T16:31:39.792182110Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310225, last root slot=152310224 slots=25 slots/s=12.5 txs/s=27818.5
[2022-09-25T16:31:44.926141580Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310247, last root slot=152310246 slots=22 slots/s=4.4 txs/s=9680.2
[2022-09-25T16:31:46.997430936Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310269, last root slot=152310268 slots=22 slots/s=11.0 txs/s=22569
[2022-09-25T16:31:49.007510327Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310289, last root slot=152310288 slots=18 slots/s=9.0 txs/s=20205.5
[2022-09-25T16:31:51.049999637Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310311, last root slot=152310310 slots=21 slots/s=10.5 txs/s=22898
[2022-09-25T16:31:53.107218319Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310333, last root slot=152310332 slots=22 slots/s=11.0 txs/s=22492.5
[2022-09-25T16:31:58.096919653Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310359, last root slot=152310358 slots=22 slots/s=5.5 txs/s=10780.25
[2022-09-25T16:32:00.133493306Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310379, last root slot=152310378 slots=20 slots/s=10.0 txs/s=20802.5
[2022-09-25T16:32:02.164536498Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310397, last root slot=152310396 slots=18 slots/s=9.0 txs/s=19293
[2022-09-25T16:32:04.178412131Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310418, last root slot=152310417 slots=20 slots/s=10.0 txs/s=20534
[2022-09-25T16:32:06.292382799Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310434, last root slot=152310433 slots=15 slots/s=7.5 txs/s=17463.5
[2022-09-25T16:32:11.124309183Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310457, last root slot=152310456 slots=23 slots/s=5.75 txs/s=12815.25
[2022-09-25T16:32:13.142989039Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310481, last root slot=152310480 slots=24 slots/s=12.0 txs/s=25581.5
[2022-09-25T16:32:15.172635155Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310502, last root slot=152310501 slots=21 slots/s=10.5 txs/s=22550.5
[2022-09-25T16:32:17.200056741Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310525, last root slot=152310524 slots=23 slots/s=11.5 txs/s=25648.5
[2022-09-25T16:32:19.266411799Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310550, last root slot=152310549 slots=25 slots/s=12.5 txs/s=26734.5
[2022-09-25T16:32:24.268971778Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310572, last root slot=152310571 slots=22 slots/s=4.4 txs/s=9189.2
[2022-09-25T16:32:26.372900754Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310597, last root slot=152310596 slots=21 slots/s=10.5 txs/s=22341.5
[2022-09-25T16:32:28.394909193Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310618, last root slot=152310617 slots=21 slots/s=10.5 txs/s=22512.5
[2022-09-25T16:32:30.472856256Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310643, last root slot=152310642 slots=21 slots/s=10.5 txs/s=22053.5
[2022-09-25T16:32:32.522922851Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310667, last root slot=152310666 slots=24 slots/s=12.0 txs/s=25828.5
[2022-09-25T16:32:37.373799145Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310688, last root slot=152310687 slots=21 slots/s=5.25 txs/s=11066
[2022-09-25T16:32:39.425619370Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310710, last root slot=152310709 slots=22 slots/s=11.0 txs/s=24427
[2022-09-25T16:32:41.477635763Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310735, last root slot=152310734 slots=25 slots/s=12.5 txs/s=27290.5
[2022-09-25T16:32:43.507896814Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310759, last root slot=152310758 slots=24 slots/s=12.0 txs/s=25152
[2022-09-25T16:32:45.662466110Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310779, last root slot=152310778 slots=19 slots/s=9.5 txs/s=21511
[2022-09-25T16:32:50.529192069Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310799, last root slot=152310798 slots=19 slots/s=4.75 txs/s=10801.75
[2022-09-25T16:32:52.536809201Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310822, last root slot=152310821 slots=23 slots/s=11.5 txs/s=24337
[2022-09-25T16:32:54.626067638Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310849, last root slot=152310848 slots=27 slots/s=13.5 txs/s=29520
[2022-09-25T16:32:56.658988565Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310869, last root slot=152310868 slots=20 slots/s=10.0 txs/s=22195.5
[2022-09-25T16:32:58.665934330Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310893, last root slot=152310892 slots=24 slots/s=12.0 txs/s=24205.5
[2022-09-25T16:33:03.751534450Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310908, last root slot=152310907 slots=15 slots/s=3.0 txs/s=7177.6
[2022-09-25T16:33:05.765173403Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310928, last root slot=152310927 slots=20 slots/s=10.0 txs/s=21350.5
[2022-09-25T16:33:07.770816876Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310952, last root slot=152310951 slots=24 slots/s=12.0 txs/s=25965
[2022-09-25T16:33:09.775936028Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310973, last root slot=152310972 slots=21 slots/s=10.5 txs/s=23095
[2022-09-25T16:33:11.823684380Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152310995, last root slot=152310994 slots=22 slots/s=11.0 txs/s=24345
[2022-09-25T16:33:16.868368628Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311013, last root slot=152311012 slots=14 slots/s=2.8 txs/s=5447.2
[2022-09-25T16:33:18.918867020Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311031, last root slot=152311030 slots=18 slots/s=9.0 txs/s=20707.5
[2022-09-25T16:33:20.987332113Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311053, last root slot=152311052 slots=22 slots/s=11.0 txs/s=23988
[2022-09-25T16:33:23.104365804Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311075, last root slot=152311074 slots=22 slots/s=11.0 txs/s=23470.5
[2022-09-25T16:33:25.126420305Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311094, last root slot=152311093 slots=19 slots/s=9.5 txs/s=20863
[2022-09-25T16:33:29.915242770Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311113, last root slot=152311112 slots=19 slots/s=4.75 txs/s=10425.5
[2022-09-25T16:33:32.035467639Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311130, last root slot=152311129 slots=17 slots/s=8.5 txs/s=18517.5
[2022-09-25T16:33:34.086055985Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311149, last root slot=152311148 slots=19 slots/s=9.5 txs/s=21110
[2022-09-25T16:33:36.165402634Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311169, last root slot=152311168 slots=20 slots/s=10.0 txs/s=21327.5
[2022-09-25T16:33:38.226404198Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311197, last root slot=152311196 slots=27 slots/s=13.5 txs/s=25944
[2022-09-25T16:33:43.058622924Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311224, last root slot=152311223 slots=25 slots/s=6.25 txs/s=13374
[2022-09-25T16:33:45.096082503Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311239, last root slot=152311238 slots=15 slots/s=7.5 txs/s=18250
[2022-09-25T16:33:47.204522069Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311257, last root slot=152311256 slots=17 slots/s=8.5 txs/s=18052
[2022-09-25T16:33:49.308055619Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311281, last root slot=152311280 slots=22 slots/s=11.0 txs/s=23851.5
[2022-09-25T16:33:51.457456378Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311300, last root slot=152311299 slots=18 slots/s=9.0 txs/s=19122.5
[2022-09-25T16:33:55.976645288Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311317, last root slot=152311316 slots=17 slots/s=4.25 txs/s=9725
[2022-09-25T16:33:57.992362435Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311341, last root slot=152311340 slots=24 slots/s=12.0 txs/s=26238
[2022-09-25T16:34:00.034371416Z INFO  solana_ledger::blockstore_processor] processing ledger: slot=152311359, last root slot=152311358 slots=18 slots/s=9.0 txs/s=20420.5

replayer_handle,
)?;

Ok(true)
@mvines mvines Oct 15, 2022
seems like this'd be better than returning true here:

let slot_full = slot_entries_load_result.2;
confirm_slot_entries(...)?;
Ok(!slot_full)

that is, don't bother calling confirm_slot() again if the slot is full since there's nothing more to process by definition. Ok(!bank.is_complete()) could be another way to express this.
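The suggested early-exit pattern can be sketched as a small standalone Rust program. The names (`SlotEntriesLoadResult`, `confirm_slot_entries`, `confirm_slot`) mirror identifiers from the discussion, but the types and bodies here are simplified stand-ins, not the validator's real implementation:

```rust
// Sketch of the reviewer's suggestion: return Ok(!slot_full) so the caller
// keeps polling for entries only while the slot is still incomplete.

#[derive(Debug, PartialEq)]
enum ReplayError {
    ConfirmFailed,
}

// Stand-in for the tuple loaded from the blockstore; in the real code the
// third element of slot_entries_load_result is the slot_full flag.
struct SlotEntriesLoadResult {
    entries: Vec<u64>,
    slot_full: bool,
}

fn confirm_slot_entries(result: &SlotEntriesLoadResult) -> Result<(), ReplayError> {
    // Real code replays each entry against the bank; elided here.
    let _num_entries = result.entries.len();
    Ok(())
}

// Ok(true) means "call me again, more entries may arrive";
// Ok(false) means the slot is full and there is nothing left to process.
fn confirm_slot(load_result: SlotEntriesLoadResult) -> Result<bool, ReplayError> {
    let slot_full = load_result.slot_full;
    confirm_slot_entries(&load_result)?;
    Ok(!slot_full)
}

fn main() {
    let partial = SlotEntriesLoadResult { entries: vec![1, 2], slot_full: false };
    let full = SlotEntriesLoadResult { entries: vec![3], slot_full: true };
    assert_eq!(confirm_slot(partial), Ok(true)); // keep polling
    assert_eq!(confirm_slot(full), Ok(false)); // stop: slot is complete
    println!("ok");
}
```

This avoids one redundant `confirm_slot()` round-trip per slot, since a full slot by definition has no further entries to load.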

@github-actions github-actions bot added the stale [bot only] Added to stale content; results in auto-close after a week. label Dec 29, 2022
@github-actions github-actions bot closed this Jan 9, 2023