QBFT in Besu: Genesis Config, Validators, and Block Time Guide
Written by David Viejo
Consensus is the backbone of every blockchain network. Get it wrong, and nothing else matters — not your smart contracts, not your privacy layer, not your monitoring stack. According to the Hyperledger Foundation's 2025 Annual Report, Besu adoption grew 38% year-over-year among enterprise users, and QBFT is the default consensus protocol for virtually all new permissioned deployments. Yet most guides skim over the actual mechanics. This one won't.
I've deployed dozens of Besu networks across development, staging, and production environments. The consensus layer is where most operational surprises live. Round changes that shouldn't happen. Validators that fall out of sync. Block times that drift under load. Understanding QBFT deeply — not just configuring it — is what separates a test network from a production one.
TL;DR: QBFT (Quorum Byzantine Fault Tolerant) consensus in Hyperledger Besu tolerates up to f Byzantine faults among 3f+1 validators, with immediate block finality. It replaced IBFT 2.0 as the recommended consensus for enterprise Besu networks, offering improved liveness guarantees and full EEA QBFT specification compliance (Enterprise Ethereum Alliance, 2023).
If you're new to Besu, our complete Hyperledger Besu guide covers the full platform -- architecture, privacy, use cases, and deployment options. Still deciding between Besu and Fabric? Read our Hyperledger Fabric vs Besu comparison first. Already committed to Besu? Keep reading.
QBFT stands for Quorum Byzantine Fault Tolerant consensus. The Enterprise Ethereum Alliance finalized the QBFT specification in 2023, making it the standard BFT consensus for permissioned Ethereum networks. It guarantees immediate finality — once a block is committed, it will never be reverted — which is non-negotiable for financial transactions and regulatory compliance.
QBFT evolved from IBFT 2.0 to address specific liveness and correctness issues identified in production environments. The key improvement is the round-change mechanism, which ensures the network recovers faster when a block proposer fails. For anyone deploying a new Besu network in 2026, QBFT is the only consensus mechanism worth considering for permissioned use cases.
The Byzantine Generals Problem is a classic distributed systems challenge first described by Lamport, Shostak, and Pease in 1982. Imagine several army generals surrounding a city. They must agree on a coordinated attack plan, but some generals might be traitors who send conflicting messages.
In blockchain terms, validators are the generals. Transactions are the attack plan. Byzantine faults are the traitors — nodes that crash, send contradictory messages, or act maliciously. QBFT solves this by requiring a supermajority of honest validators to agree before any block is finalized.
The formula is straightforward: you need at least 3f+1 validators to tolerate f Byzantine faults. Four validators tolerate one faulty node. Seven tolerate two. Ten tolerate three. This isn't just theoretical — it's the foundation every production sizing decision rests on.
QBFT is a voting-based consensus mechanism. There's no mining, no staking, and no probabilistic finality. Every validator participates in a structured multi-round protocol to agree on each block. The result is deterministic finality with known latency bounds.
Proof of Work (used by Bitcoin) wastes computational resources solving puzzles. Proof of Stake (used by public Ethereum) selects validators based on staked assets. Neither is appropriate for permissioned enterprise networks where validators are known, trusted to varying degrees, and governed by legal agreements rather than cryptoeconomic incentives.
Each QBFT block requires three message-passing phases — pre-prepare, prepare, and commit — before finalization. The ConsenSys Besu documentation specifies that a block is finalized when ceil(2n/3) validators have committed, where n is the total validator count. At four validators, that means three must agree.
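The quorum and fault-tolerance arithmetic is easy to script. A small helper (the function names are mine, for illustration) matching the ceil(2n/3) rule described above:

```python
import math

def qbft_quorum(n: int) -> int:
    """Prepare/commit quorum: ceil(2n/3) of n validators must agree."""
    return math.ceil(2 * n / 3)

def max_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

for n in (4, 7, 10, 13):
    print(f"{n} validators: quorum={qbft_quorum(n)}, tolerates f={max_faults(n)}")
```

Running this reproduces the figures in the text: four validators need three commits and tolerate one fault, seven need five and tolerate two.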
The process starts when a designated proposer creates a candidate block. Validators then exchange messages in a structured sequence until supermajority agreement is reached. If agreement fails within a timeout window, a round change occurs and a new proposer takes over.
The block proposer for the current round constructs a candidate block containing pending transactions. It broadcasts a PRE-PREPARE message to all other validators. This message includes the block data, the round number, and the proposer's signature.
Only one validator proposes per round. The proposer rotates in a round-robin fashion based on validator index and block height. This rotation prevents any single validator from controlling block production indefinitely.
When validators receive the PRE-PREPARE message, they verify that the proposer is legitimate for this round, the block is valid, and the round number matches their expected state. If any check fails, the validator ignores the message.
Each validator that accepts the PRE-PREPARE broadcasts a PREPARE message to all other validators. This message signals: "I've seen the proposed block and I consider it valid."
A validator waits until it has received ceil(2n/3) PREPARE messages (including its own) for the same block. With four validators, that's three matching PREPARE messages. At this point, the validator knows that a supermajority has seen and accepted the same block proposal. It hasn't committed yet — but it knows consensus is forming.
The prepare phase prevents equivocation. Even if the proposer sent different blocks to different validators (a Byzantine behavior), the prepare quorum ensures that at most one block can gather enough votes.
Once a validator has collected enough PREPARE messages, it broadcasts a COMMIT message containing a commit seal — a cryptographic signature over the block hash. This seal is the validator's binding vote.
When a validator collects ceil(2n/3) COMMIT messages, the block is finalized. The validator appends the block to its local chain, including all commit seals in the block's extra data field. These seals serve as on-chain proof that consensus was reached.
Finality is absolute. There's no chain reorganization, no uncle blocks, no longest-chain rule. Once committed, the block is permanent. This property is why financial institutions prefer BFT consensus for settlement networks.
Round changes are QBFT's recovery mechanism. If the proposer is offline, slow, or Byzantine, the network doesn't stall — it advances to a new round with a different proposer.
Each round has a configurable timeout (set via requesttimeoutseconds in genesis.json). If a validator doesn't receive a valid PRE-PREPARE within this window, it broadcasts a ROUND-CHANGE message. Once ceil(2n/3) validators agree to change rounds, the next proposer in the rotation takes over.
QBFT improved this mechanism significantly over IBFT 2.0. In IBFT 2.0, round changes could stall under certain network partition scenarios. QBFT's round-change protocol includes prepared-round proofs that prevent validators from getting stuck in conflicting states. In my experience running production networks, this improvement alone justifies the migration from IBFT 2.0.
QBFT finalizes blocks through a three-phase protocol (pre-prepare, prepare, commit) requiring ceil(2n/3) validator agreement. The protocol guarantees immediate, absolute finality with no chain reorganization possible. Round-change improvements over IBFT 2.0 ensure faster recovery when proposers fail (Enterprise Ethereum Alliance QBFT Specification, 2023).
The minimum is four validators, which tolerates exactly one Byzantine fault. The ConsenSys Besu documentation recommends starting with four for development and scaling to seven or more for production. In my experience, seven validators hit the sweet spot — tolerating two faults while keeping message overhead manageable.
The fault tolerance formula is N >= 3f + 1, where N is the total validator count and f is the maximum number of Byzantine faults you want to tolerate:

| Validators (N) | Faults tolerated (f) | Quorum (ceil(2N/3)) |
|---|---|---|
| 4 | 1 | 3 |
| 7 | 2 | 5 |
| 10 | 3 | 7 |
| 13 | 4 | 9 |
More validators means more messages per block. QBFT's message complexity is O(n^2) because each phase requires every validator to broadcast to every other validator. At 4 validators, that's 12 messages per phase. At 13, it's 156. At 25, it's 600.
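Those per-phase totals are just n(n-1): each of the n validators broadcasts to the other n-1. A quick check of the figures quoted above:

```python
def messages_per_phase(n: int) -> int:
    """Each of n validators broadcasts to the other n-1: O(n^2) growth."""
    return n * (n - 1)

for n in (4, 13, 25):
    print(f"{n} validators -> {messages_per_phase(n)} messages per phase")
# 4 -> 12, 13 -> 156, 25 -> 600
```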
This quadratic growth increases latency and bandwidth consumption. Beyond 15-20 validators, you'll notice measurable block time increases unless you also increase blockperiodseconds. For most enterprise networks, 7-10 validators provides strong fault tolerance without degrading performance.
For production deployments, distribute validators across availability zones or data centers. If all seven validators run in the same rack, a single power failure takes down the entire network. But don't distribute them too widely — cross-continental latency adds seconds to consensus rounds.
A good pattern: three availability zones within the same cloud region, with at least two validators per zone. This tolerates both individual node failures and full zone outages.
QBFT requires N >= 3f+1 validators to tolerate f Byzantine faults. Seven validators (tolerating two faults) is the recommended production minimum. Message complexity grows quadratically with validator count — O(n^2) per phase — making 7-10 validators the practical sweet spot for most enterprise deployments (ConsenSys Besu Documentation).
Free resource
5 QBFT Settings That Make or Break Your Besu Network
Genesis config template + validator key setup guide. Includes the exact block time, epoch length, and gas settings we use for enterprise Besu deployments.
QBFT replaced IBFT 2.0 as the recommended consensus for Besu permissioned networks. The Hyperledger Besu changelog deprecated IBFT 2.0 for new deployments starting in version 23.x. The differences aren't cosmetic — QBFT fixes real correctness and liveness issues in IBFT 2.0's round-change protocol.
Only if you have an existing network that can't be migrated. IBFT 2.0 remains supported for backward compatibility, but no new features or optimizations target it. If you're starting a new network, always use QBFT. If you're running IBFT 2.0 in production, plan a migration.
Besu supports an in-place transition from IBFT 2.0 to QBFT using a coordinated fork. The process involves configuring a future block number at which all validators switch consensus protocols simultaneously. I'll cover the exact steps in the migration section below.
QBFT replaced IBFT 2.0 as Besu's recommended consensus protocol, with IBFT 2.0 deprecated for new deployments since Besu 23.x. QBFT provides full EEA specification compliance, stronger liveness guarantees, and a formally verified round-change protocol that eliminates stall conditions present in IBFT 2.0 (Hyperledger Besu Changelog).
QBFT configuration lives in the genesis file under the config.qbft object. The genesis file defines the initial state of the blockchain, including consensus parameters, initial account balances, and the validator set encoded in the extraData field. Here's a representative example:
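A minimal sketch of a QBFT genesis file — the chainId, timestamp, and prefunded account are illustrative, and the extraData placeholder must be replaced with the RLP-encoded validator list generated for your network:

```json
{
  "config": {
    "chainId": 1337,
    "berlinBlock": 0,
    "qbft": {
      "blockperiodseconds": 5,
      "epochlength": 30000,
      "requesttimeoutseconds": 10
    }
  },
  "nonce": "0x0",
  "timestamp": "0x0",
  "gasLimit": "0x1fffffffffffff",
  "difficulty": "0x1",
  "mixHash": "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "alloc": {
    "fe3b557e8fb62b89f4916b721be55ceb828dbd73": {
      "balance": "0xad78ebc5ac6200000"
    }
  },
  "extraData": "0x<RLP-encoded validator list>"
}
```

The three qbft fields — blockperiodseconds, epochlength, and requesttimeoutseconds — are the ones that matter most operationally, and each is covered below.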
blockperiodseconds sets the target time between blocks. The default is 5 seconds. Lower values (1-2 seconds) increase transaction throughput and reduce confirmation latency, but they also generate more blocks, consuming more disk space and increasing chain sync times.
For development, 1-2 seconds keeps the feedback loop tight. For production, 5 seconds is the most common choice. Financial settlement networks sometimes use 10-15 seconds to reduce chain growth. Don't go below 1 second unless your validators are co-located with sub-millisecond latency.
epochlength defines how many blocks pass between validator vote tallies. At the end of each epoch, pending validator addition and removal votes are counted and applied. The default is 30,000 blocks.
With a 5-second block time, that's roughly 42 hours per epoch. For networks where validator changes are rare, this is fine. If you need faster validator rotation (development environments, for instance), set it to 1,000 or even 100.
requesttimeoutseconds is the round-change timeout. If a validator doesn't see a valid PRE-PREPARE within this window, it initiates a round change. The default is 10 seconds — double the default block period.
Set this too low, and you'll see unnecessary round changes during normal network jitter. Set it too high, and the network takes longer to recover from a failed proposer. A good rule of thumb: 2-3x the block period for co-located validators, 4-5x for geographically distributed ones.
The extraData field in QBFT genesis encodes the initial validator set using RLP (Recursive Length Prefix) encoding. It contains a 32-byte vanity prefix, the list of validator addresses, and empty placeholders for proposer and commit seals that will be populated in future blocks.
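You shouldn't hand-encode this RLP. Besu's rlp subcommand can generate it from a JSON file of validator addresses — a sketch assuming a file named toEncode.json (check `besu rlp encode --help` on your version for the exact options):

```shell
# toEncode.json contains a JSON array of validator addresses, e.g.
# ["0x4592c8e45706cc08b8f44b11e43cba0cfc5892cb",
#  "0x06e23768a0f59cf365e18c2e0c89e151bcdedc70"]
besu rlp encode --from=toEncode.json --to=extraData.txt --type=QBFT_EXTRA_DATA
```

The output in extraData.txt goes directly into the genesis file's extraData field.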
QBFT supports dynamic validator management through on-chain voting. Existing validators propose additions or removals, and changes take effect at epoch boundaries. The JSON-RPC API exposes the necessary methods. No network restart is required.
To add a new validator, existing validators call qbft_proposeValidatorVote with the new validator's address and true:
```shell
# From validator node 1
curl -X POST --data '{"jsonrpc":"2.0","method":"qbft_proposeValidatorVote","params":["0xNewValidatorAddress", true],"id":1}' http://localhost:8545

# From validator node 2 (same vote)
curl -X POST --data '{"jsonrpc":"2.0","method":"qbft_proposeValidatorVote","params":["0xNewValidatorAddress", true],"id":1}' http://localhost:8546
```
A majority of current validators must submit the same proposal before the next epoch boundary. With 4 validators, you need 3 votes. With 7, you need 4.
Never remove validators below the 3f+1 threshold. If you have 4 validators and remove one, you're at 3 — which tolerates zero faults. Always add the replacement before removing the old validator. Coordinate vote submissions across validators within a single epoch to avoid split votes carrying across boundaries.
Block time configuration is the primary performance lever. The ConsenSys Besu benchmark suite (2024) measured QBFT throughput at 200-800 TPS with 4 validators, depending on transaction complexity. Tuning block time, gas limits, and validator placement can push you toward the higher end of that range.
The gasLimit in genesis.json caps how much computation a single block can contain. The default 0x1fffffffffffff (essentially unlimited) is fine for development. In production, set it based on your expected transaction complexity.
Simple ETH transfers use 21,000 gas. ERC-20 transfers use 50,000-65,000 gas. Complex smart contract calls can use millions. If your average transaction uses 100,000 gas and you want 200 transactions per block, set the gas limit to at least 20,000,000.
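Sizing the limit is simple arithmetic: average gas per transaction times desired transactions per block, converted to the hex form genesis.json expects. Using the illustrative numbers above:

```python
# Estimate a genesis gasLimit from expected per-block load.
avg_gas_per_tx = 100_000       # e.g. a moderately complex contract call
target_txs_per_block = 200

gas_limit = avg_gas_per_tx * target_txs_per_block
print(gas_limit)        # 20000000
print(hex(gas_limit))   # 0x1312d00 -> the "gasLimit" value for genesis.json
```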
For geographically distributed validators, you need to account for network round-trip times. If your validators span US-East and EU-West (typical RTT: 80-120ms), each consensus round adds 240-360ms of pure network latency across three message phases.
Set requesttimeoutseconds to at least 4-5x your block period for cross-region deployments. Monitor round-change frequency as your primary indicator. If you see more than 1-2 round changes per hour under normal load, your timeouts are too tight.
Besu exposes Prometheus metrics for QBFT consensus at the /metrics endpoint. According to Hyperledger Besu's metrics documentation, enabling --metrics-enabled=true and --metrics-category=CONSENSUS provides the critical datapoints you need. Monitoring round changes, block proposal rates, and validator participation should be your baseline.
At minimum, set alerts for three conditions. First, block height stalled for more than three block periods — this means consensus has stopped entirely. Second, round number consistently above zero — this indicates proposer failures or network partitions. Third, validator count dropping below the expected value — a validator has disconnected or crashed.
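The stalled-height condition maps naturally onto a Prometheus alerting rule. A sketch assuming a 5-second block period; it uses Besu's ethereum_blockchain_height gauge, but metric names vary by Besu version, so verify them against your node's /metrics output before deploying:

```yaml
# prometheus-rules.yml (sketch — verify metric names on your Besu version)
groups:
  - name: besu-qbft
    rules:
      - alert: QbftBlockHeightStalled
        # No new block in the last minute: many multiples of a 5s block
        # period, so consensus has almost certainly stopped.
        expr: increase(ethereum_blockchain_height[1m]) == 0
        for: 2m
        labels:
          severity: critical
```

The round-number and validator-count alerts follow the same pattern once you've confirmed the corresponding metric names from the CONSENSUS category on your nodes.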
In production networks I've operated, round-change alerts catch 90% of consensus issues before they become visible to application users.
Besu supports an in-place transition from IBFT 2.0 to QBFT using a consensus protocol schedule in the genesis file. The Besu migration guide specifies a coordinated fork approach — all validators switch at the same block height. This avoids chain splits and requires no downtime.
Step 1: Choose a future block number for the transition. Pick a block far enough ahead to give all validator operators time to update their genesis files. Calculate it based on current block height plus a comfortable buffer — typically 1,000-5,000 blocks.
Step 2: Update the genesis file on every validator node. Add a transitions section specifying the QBFT switch:
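A sketch of the relevant config fragment, assuming the documented transitions format and an illustrative chainId — verify the exact keys against the migration guide for your Besu version:

```json
"config": {
  "chainId": 1337,
  "ibft2": {
    "blockperiodseconds": 5,
    "epochlength": 30000,
    "requesttimeoutseconds": 10
  },
  "transitions": {
    "qbft": [
      {
        "block": 150000,
        "blockperiodseconds": 5,
        "epochlength": 30000,
        "requesttimeoutseconds": 10
      }
    ]
  }
}
```

The existing ibft2 stanza stays in place; the transitions entry tells every node to switch to QBFT at block 150,000.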
Step 3: Restart all validator nodes with the updated genesis file before the transition block. They'll continue running IBFT 2.0 until block 150,000, then switch to QBFT automatically.
Step 4: After the transition block, verify QBFT is active:
```shell
curl -X POST --data '{"jsonrpc":"2.0","method":"qbft_getValidatorsByBlockNumber","params":["latest"],"id":1}' http://localhost:8545
```
If the qbft_ namespace returns results, the migration succeeded. The ibft_ namespace will no longer work for blocks after the transition.
The most common failure: not all validators update their genesis file before the transition block. If some validators switch to QBFT while others remain on IBFT 2.0, the network partitions. Always confirm every validator operator has deployed the updated configuration before the target block arrives.
Most QBFT issues trace back to three root causes: network connectivity between validators, time synchronization, and misconfigured genesis parameters. In production networks, I've found that 80% of consensus problems resolve by checking these fundamentals first.
Symptoms: One validator isn't signing blocks even though it's online.
Diagnosis: Verify the validator's address appears in qbft_getValidatorsByBlockNumber. Check that the node's key file matches the expected validator address.
Common causes: Wrong node key configured. Validator was removed during a previous epoch. The node synced from a snapshot that predates its addition to the validator set.
Validator key compromise is the highest-impact security risk in a QBFT network. If an attacker controls f+1 private keys in a 3f+1 network, they can halt consensus entirely. According to a Chainalysis 2024 report, private key compromise accounted for $2.2 billion in crypto losses that year. Enterprise networks need robust key management from day one.
Never store validator private keys in plain text on validator nodes. Use hardware security modules (HSMs) or cloud KMS services. Besu supports external key signing through its --security-module flag.
Best practices:

- Generate validator keys offline or in a secure enclave
- Configure TLS for all P2P communication between validators
- Restrict RPC access to authorized clients only — never expose JSON-RPC endpoints to the public internet without authentication
- Rate-limit JSON-RPC requests at the reverse proxy level
- If validators accept connections from non-validator nodes (bootnodes, fullnodes), use node permissioning to whitelist known enode URLs:
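Besu's file-based node permissioning reads the allowlist from a permissions_config.toml file (enabled with the --permissions-nodes-config-file-enabled flag). A sketch with placeholder enode URLs and addresses:

```toml
# permissions_config.toml — by default placed in the node's data directory
nodes-allowlist=[
  "enode://<validator-1-public-key>@10.0.1.10:30303",
  "enode://<validator-2-public-key>@10.0.2.10:30303"
]
```

With this in place, the node rejects P2P connections from any enode not on the list.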
Production QBFT networks should separate validator nodes from application-facing nodes. A Hyperledger Foundation architecture guide (2024) recommends at least three tiers: validators, bootnodes, and fullnodes. Validators run consensus, bootnodes handle peer discovery, and fullnodes serve RPC requests from applications.
Validators must focus on consensus. RPC requests — especially eth_call or complex event queries — consume CPU and memory. If a validator is busy processing RPC traffic, it might miss a consensus round timeout, triggering unnecessary round changes. Dedicated fullnodes absorb application traffic while validators stay responsive.
This architecture also improves security. Validators can run behind strict firewalls with no public-facing ports except P2P. Application traffic hits the fullnodes through a load balancer, with rate limiting and authentication enforced at the edge.
Four validators is the absolute minimum, tolerating one Byzantine fault (3f+1 where f=1). However, the ConsenSys Besu documentation recommends seven validators for production. Seven tolerates two faults and provides a more comfortable operating margin. Losing one node out of four puts you at the fault-tolerance boundary; losing one out of seven leaves room.
Not without a coordinated restart. Block period is set in the genesis file, which is immutable after network creation. To change it, you'd need to use the transitions mechanism to schedule a parameter change at a future block — similar to the IBFT 2.0-to-QBFT migration process. All validators must update their genesis files before the transition block.
No, but they share lineage. PBFT (Practical Byzantine Fault Tolerance) was proposed by Castro and Liskov in 1999. QBFT builds on the same three-phase commit structure but includes optimizations for blockchain use cases — specifically, round-change improvements, validator voting, and epoch-based governance. Think of QBFT as PBFT adapted for Ethereum-compatible networks with dynamic validator sets.
No. QBFT is designed exclusively for permissioned networks with a known validator set. Public Ethereum uses Proof of Stake (the Beacon Chain). You'd only use QBFT when deploying a private or consortium Besu network where validators are identified and governed by agreement, not by stake.
QBFT provides immediate, absolute finality. A block is final the instant it's committed. Proof of Stake on public Ethereum requires waiting for epochs to be finalized, which takes approximately 12-15 minutes. For enterprise applications that need instant settlement confirmation, QBFT's deterministic finality is a significant advantage.
The network halts. It cannot produce new blocks until enough validators come back online to form a ceil(2n/3) quorum. Existing data remains intact and the network resumes automatically once quorum is restored. This is a safety property — QBFT will never produce conflicting blocks, even at the cost of temporary unavailability.
QBFT is the definitive consensus mechanism for production Hyperledger Besu networks. Its three-phase commit protocol guarantees immediate finality. The 3f+1 validator model provides quantifiable Byzantine fault tolerance. The EEA specification compliance ensures interoperability with other QBFT implementations.
The practical decisions that matter most: choose seven validators for production, set block period to 5 seconds as a baseline, distribute validators across availability zones, separate validator nodes from RPC-serving fullnodes, and monitor round-change frequency as your primary health indicator. Get these fundamentals right, and QBFT will run reliably for years.
If you're building on Besu, start with our 2-minute Besu deployment guide to get a working network, then apply the configuration and monitoring practices from this guide to harden it for production. For teams evaluating AI-assisted development workflows, our guide on building a Besu PoC with Claude Code shows how to accelerate the process.
David Viejo is the founder of ChainLaunch and a Hyperledger Foundation contributor. He created the Bevel Operator Fabric project and has been building blockchain infrastructure tooling since 2020.