QBFT in Besu: Genesis Config, Validators, and Block Time Guide

Written by David Viejo

Consensus is the backbone of every blockchain network. Get it wrong, and nothing else matters — not your smart contracts, not your privacy layer, not your monitoring stack. According to the Hyperledger Foundation's 2025 Annual Report, Besu adoption grew 38% year-over-year among enterprise users, and QBFT is the default consensus protocol for virtually all new permissioned deployments. Yet most guides skim over the actual mechanics. This one won't.

I've deployed dozens of Besu networks across development, staging, and production environments. The consensus layer is where most operational surprises live. Round changes that shouldn't happen. Validators that fall out of sync. Block times that drift under load. Understanding QBFT deeply — not just configuring it — is what separates a test network from a production one.

TL;DR: QBFT (Quorum Byzantine Fault Tolerant) consensus in Hyperledger Besu tolerates up to f Byzantine faults among 3f+1 validators, with immediate block finality. It replaced IBFT 2.0 as the recommended consensus for enterprise Besu networks, offering improved liveness guarantees and full EEA QBFT specification compliance (Enterprise Ethereum Alliance, 2023).

If you're new to Besu, our complete Hyperledger Besu guide covers the full platform: architecture, privacy, use cases, and deployment options. Still deciding between Besu and Fabric? Read our Hyperledger Fabric vs Besu comparison first. Already committed to Besu? Keep reading.

What Is QBFT and Why Does It Matter?

QBFT stands for Quorum Byzantine Fault Tolerant consensus. The Enterprise Ethereum Alliance finalized the QBFT specification in 2023, making it the standard BFT consensus for permissioned Ethereum networks. It guarantees immediate finality — once a block is committed, it will never be reverted — which is non-negotiable for financial transactions and regulatory compliance.

QBFT evolved from IBFT 2.0 to address specific liveness and correctness issues identified in production environments. The key improvement is the round-change mechanism, which ensures the network recovers faster when a block proposer fails. For anyone deploying a new Besu network in 2026, QBFT is the only consensus mechanism worth considering for permissioned use cases.

The Byzantine Generals Problem

The Byzantine Generals Problem is a classic distributed systems challenge first described by Lamport, Shostak, and Pease in 1982. Imagine several army generals surrounding a city. They must agree on a coordinated attack plan, but some generals might be traitors who send conflicting messages.

In blockchain terms, validators are the generals. Transactions are the attack plan. Byzantine faults are the traitors — nodes that crash, send contradictory messages, or act maliciously. QBFT solves this by requiring a supermajority of honest validators to agree before any block is finalized.

The formula is straightforward: you need at least 3f+1 validators to tolerate f Byzantine faults. Four validators tolerate one faulty node. Seven tolerate two. Ten tolerate three. This isn't just theoretical — it's the foundation every production sizing decision rests on.
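These thresholds are easy to sanity-check with a few lines of shell (the helper function names here are mine, purely for illustration — they are not Besu commands):

```shell
# f = max Byzantine faults tolerated by n validators: floor((n - 1) / 3)
qbft_faults() { echo $(( ($1 - 1) / 3 )); }

# quorum = ceil(2n/3) matching votes needed to finalize a block
qbft_quorum() { echo $(( (2 * $1 + 2) / 3 )); }

for n in 4 7 10 13; do
  echo "n=$n faults=$(qbft_faults "$n") quorum=$(qbft_quorum "$n")"
done
# n=4 faults=1 quorum=3
# n=7 faults=2 quorum=5
# n=10 faults=3 quorum=7
# n=13 faults=4 quorum=9
```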

How QBFT Differs from Proof of Work and Proof of Stake

QBFT is a voting-based consensus mechanism. There's no mining, no staking, and no probabilistic finality. Every validator participates in a structured multi-round protocol to agree on each block. The result is deterministic finality with known latency bounds.

Proof of Work (used by Bitcoin) wastes computational resources solving puzzles. Proof of Stake (used by public Ethereum) selects validators based on staked assets. Neither is appropriate for permissioned enterprise networks where validators are known, trusted to varying degrees, and governed by legal agreements rather than cryptoeconomic incentives.

How Does the QBFT Consensus Process Work?

Each QBFT block requires three message-passing phases — pre-prepare, prepare, and commit — before finalization. The ConsenSys Besu documentation specifies that a block is finalized when ceil(2n/3) validators have committed, where n is the total validator count. At four validators, that means three must agree.

The process starts when a designated proposer creates a candidate block. Validators then exchange messages in a structured sequence until supermajority agreement is reached. If agreement fails within a timeout window, a round change occurs and a new proposer takes over.

Phase 1: Pre-Prepare

The block proposer for the current round constructs a candidate block containing pending transactions. It broadcasts a PRE-PREPARE message to all other validators. This message includes the block data, the round number, and the proposer's signature.

Only one validator proposes per round. The proposer rotates in a round-robin fashion based on validator index and block height. This rotation prevents any single validator from controlling block production indefinitely.

When validators receive the PRE-PREPARE message, they verify that the proposer is legitimate for this round, the block is valid, and the round number matches their expected state. If any check fails, the validator ignores the message.

Phase 2: Prepare

Each validator that accepts the PRE-PREPARE broadcasts a PREPARE message to all other validators. This message signals: "I've seen the proposed block and I consider it valid."

A validator waits until it has received ceil(2n/3) PREPARE messages (including its own) for the same block. With four validators, that's three matching PREPARE messages. At this point, the validator knows that a supermajority has seen and accepted the same block proposal. It hasn't committed yet — but it knows consensus is forming.

The prepare phase prevents equivocation. Even if the proposer sent different blocks to different validators (a Byzantine behavior), the prepare quorum ensures that at most one block can gather enough votes.

Phase 3: Commit

Once a validator has collected enough PREPARE messages, it broadcasts a COMMIT message containing a commit seal — a cryptographic signature over the block hash. This seal is the validator's binding vote.

When a validator collects ceil(2n/3) COMMIT messages, the block is finalized. The validator appends the block to its local chain, including all commit seals in the block's extra data field. These seals serve as on-chain proof that consensus was reached.

Finality is absolute. There's no chain reorganization, no uncle blocks, no longest-chain rule. Once committed, the block is permanent. This property is why financial institutions prefer BFT consensus for settlement networks.

Round Changes: What Happens When Consensus Fails?

Round changes are QBFT's recovery mechanism. If the proposer is offline, slow, or Byzantine, the network doesn't stall — it advances to a new round with a different proposer.

Each round has a configurable timeout (set via requesttimeoutseconds in genesis.json). If a validator doesn't receive a valid PRE-PREPARE within this window, it broadcasts a ROUND-CHANGE message. Once ceil(2n/3) validators agree to change rounds, the next proposer in the rotation takes over.

QBFT improved this mechanism significantly over IBFT 2.0. In IBFT 2.0, round changes could stall under certain network partition scenarios. QBFT's round-change protocol includes prepared-round proofs that prevent validators from getting stuck in conflicting states. In my experience running production networks, this improvement alone justifies the migration from IBFT 2.0.

QBFT finalizes blocks through a three-phase protocol (pre-prepare, prepare, commit) requiring ceil(2n/3) validator agreement. The protocol guarantees immediate, absolute finality with no chain reorganization possible. Round-change improvements over IBFT 2.0 ensure faster recovery when proposers fail (Enterprise Ethereum Alliance QBFT Specification, 2023).

How Many Validators Does a QBFT Network Need?

The minimum is four validators, which tolerates exactly one Byzantine fault. The ConsenSys Besu documentation recommends starting with four for development and scaling to seven or more for production. In my experience, seven validators hit the sweet spot — tolerating two faults while keeping message overhead manageable.

The fault tolerance formula is N >= 3f + 1, where N is the total validator count and f is the maximum number of Byzantine faults you want to tolerate:

| Validators | Faults Tolerated | Quorum Required | Use Case |
|---|---|---|---|
| 4 | 1 | 3 | Development, testing |
| 5 | 1 | 4 | Small production |
| 7 | 2 | 5 | Recommended production |
| 10 | 3 | 7 | High-availability production |
| 13 | 4 | 9 | Multi-region, critical infra |

Why Not Just Add More Validators?

More validators means more messages per block. QBFT's message complexity is O(n^2) because each phase requires every validator to broadcast to every other validator. At 4 validators, that's 12 messages per phase. At 13, it's 156. At 25, it's 600.

This quadratic growth increases latency and bandwidth consumption. Beyond 15-20 validators, you'll notice measurable block time increases unless you also increase blockperiodseconds. For most enterprise networks, 7-10 validators provides strong fault tolerance without degrading performance.
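The quadratic growth is easy to see by computing n(n-1) — the number of broadcasts per phase when every validator messages every other validator (the function name below is mine, for illustration):

```shell
# Per-phase QBFT message count: each of n validators broadcasts
# to the other n-1 validators, so n * (n - 1) messages per phase.
messages_per_phase() { echo $(( $1 * ($1 - 1) )); }

for n in 4 13 25; do
  echo "n=$n messages/phase=$(messages_per_phase "$n")"
done
# n=4 messages/phase=12
# n=13 messages/phase=156
# n=25 messages/phase=600
```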

Geographic Distribution

For production deployments, distribute validators across availability zones or data centers. If all seven validators run in the same rack, a single power failure takes down the entire network. But don't distribute them too widely — cross-continental latency adds seconds to consensus rounds.

A good pattern: three availability zones within the same cloud region, with at least two validators per zone. This tolerates both individual node failures and full zone outages.

QBFT requires N >= 3f+1 validators to tolerate f Byzantine faults. Seven validators (tolerating two faults) is the recommended production minimum. Message complexity grows quadratically with validator count — O(n^2) per phase — making 7-10 validators the practical sweet spot for most enterprise deployments (ConsenSys Besu Documentation).

Free resource

5 QBFT Settings That Make or Break Your Besu Network

Genesis config template + validator key setup guide. Includes the exact block time, epoch length, and gas settings we use for enterprise Besu deployments.

No spam. Unsubscribe anytime.

How Does QBFT Compare to IBFT 2.0?

QBFT replaced IBFT 2.0 as the recommended consensus for Besu permissioned networks. The Hyperledger Besu changelog deprecated IBFT 2.0 for new deployments starting in version 23.x. The differences aren't cosmetic — QBFT fixes real correctness and liveness issues in IBFT 2.0's round-change protocol.

| Feature | IBFT 2.0 | QBFT |
|---|---|---|
| EEA specification | Partial compliance | Full QBFT spec compliance |
| Round-change protocol | Can stall under partitions | Includes prepared-round proofs |
| Message format | Legacy Besu format | EEA-standardized encoding |
| Liveness guarantee | Weaker under adversarial conditions | Stronger, formally verified |
| Block header encoding | Besu-specific | EEA-standard extra data |
| New deployment support | Deprecated | Recommended |
| Existing network support | Still supported | Preferred target for migration |

When Should You Still Use IBFT 2.0?

Only if you have an existing network that can't be migrated. IBFT 2.0 remains supported for backward compatibility, but no new features or optimizations target it. If you're starting a new network, always use QBFT. If you're running IBFT 2.0 in production, plan a migration.

The Migration Path

Besu supports an in-place transition from IBFT 2.0 to QBFT using a coordinated fork. The process involves configuring a future block number at which all validators switch consensus protocols simultaneously. I'll cover the exact steps in the migration section below.

QBFT replaced IBFT 2.0 as Besu's recommended consensus protocol, with IBFT 2.0 deprecated for new deployments since Besu 23.x. QBFT provides full EEA specification compliance, stronger liveness guarantees, and a formally verified round-change protocol that eliminates stall conditions present in IBFT 2.0 (Hyperledger Besu Changelog).

How Do You Configure QBFT in genesis.json?

QBFT configuration lives in the genesis file under the config.qbft object. The genesis file defines the initial state of the blockchain, including consensus parameters, initial account balances, and the validator set encoded in the extraData field. Here's a complete, production-ready example:

{
  "config": {
    "chainId": 1337,
    "berlinBlock": 0,
    "qbft": {
      "blockperiodseconds": 5,
      "epochlength": 30000,
      "requesttimeoutseconds": 10
    }
  },
  "nonce": "0x0",
  "timestamp": "0x0",
  "extraData": "0xf87aa00000000000000000000000000000000000000000000000000000000000000000f854944a6e7c137a6691d55693f2dc32609ee57b1a8cb39406ab83f0e4a3ae5c5288fd1e57b4f3acee01a7dd94711a2e43e3e1d87f180f3b86a02c6fda97c0e310894a4b26e46e85a84ed6a134f4bf9f530dbb268e3c080c0",
  "gasLimit": "0x1fffffffffffff",
  "difficulty": "0x1",
  "mixHash": "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365",
  "alloc": {
    "4a6e7c137a6691d55693f2dc32609ee57b1a8cb3": {
      "balance": "0xd3c21bcecceda1000000"
    },
    "06ab83f0e4a3ae5c5288fd1e57b4f3acee01a7dd": {
      "balance": "0xd3c21bcecceda1000000"
    },
    "711a2e43e3e1d87f180f3b86a02c6fda97c0e310": {
      "balance": "0xd3c21bcecceda1000000"
    },
    "a4b26e46e85a84ed6a134f4bf9f530dbb268e3": {
      "balance": "0xd3c21bcecceda1000000"
    }
  }
}

Let's break down each parameter and what it actually controls.

blockperiodseconds

This sets the target time between blocks. The default is 5 seconds. Lower values (1-2 seconds) increase transaction throughput and reduce confirmation latency but generate more blocks, consuming more disk space and increasing chain sync times.

For development, 1-2 seconds keeps the feedback loop tight. For production, 5 seconds is the most common choice. Financial settlement networks sometimes use 10-15 seconds to reduce chain growth. Don't go below 1 second unless your validators are co-located with sub-millisecond latency.

epochlength

Epoch length defines how many blocks between validator vote tallies. At the end of each epoch, pending validator addition and removal votes are counted and applied. The default is 30,000 blocks.

With a 5-second block time, that's roughly 42 hours per epoch. For networks where validator changes are rare, this is fine. If you need faster validator rotation (development environments, for instance), set it to 1,000 or even 100.
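The epoch duration arithmetic is simple enough to verify directly:

```shell
# Wall-clock duration of one epoch = epochlength * blockperiodseconds
epoch_blocks=30000
block_period=5
epoch_seconds=$(( epoch_blocks * block_period ))
echo "one epoch = $epoch_seconds s (~$(( epoch_seconds / 3600 )) hours)"
# one epoch = 150000 s (~41 hours)
```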

requesttimeoutseconds

This is the round-change timeout. If a validator doesn't see a valid PRE-PREPARE within this window, it initiates a round change. The default is 10 seconds — double the block period.

Set this too low, and you'll see unnecessary round changes during normal network jitter. Set it too high, and the network takes longer to recover from a failed proposer. A good rule of thumb: 2-3x the block period for co-located validators, 4-5x for geographically distributed ones.

The extraData Field

The extraData field in QBFT genesis encodes the initial validator set using RLP (Recursive Length Prefix) encoding. It contains a 32-byte vanity prefix, the list of validator addresses, and empty placeholders for proposer and commit seals that will be populated in future blocks.

Besu provides a command to generate this field:

besu rlp encode --from=toEncode.json --type=QBFT_EXTRA_DATA

Where toEncode.json contains the validator addresses:

["0x4a6e7c137a6691d55693f2dc32609ee57b1a8cb3",
 "0x06ab83f0e4a3ae5c5288fd1e57b4f3acee01a7dd",
 "0x711a2e43e3e1d87f180f3b86a02c6fda97c0e310",
 "0xa4b26e46e85a84ed6a134f4bf9f530dbb268e3"]

QBFT genesis configuration defines three critical parameters: blockperiodseconds (default 5, controls block cadence), epochlength (default 30,000, controls validator vote tally intervals), and requesttimeoutseconds (default 10, controls round-change triggers). The extraData field RLP-encodes the initial validator set (ConsenSys Besu Documentation).

For a hands-on walkthrough that generates this genesis file automatically, see our guide on deploying a Besu network in 2 minutes.

How Do You Add and Remove Validators?

QBFT supports dynamic validator management through on-chain voting. Existing validators propose additions or removals, and changes take effect at epoch boundaries. The JSON-RPC API exposes the necessary methods. No network restart is required.

Adding a Validator

To add a new validator, existing validators call qbft_proposeValidatorVote with the new validator's address and true:

# From validator node 1
curl -X POST --data '{
  "jsonrpc":"2.0",
  "method":"qbft_proposeValidatorVote",
  "params":["0xNewValidatorAddress", true],
  "id":1
}' http://localhost:8545
 
# From validator node 2 (same vote)
curl -X POST --data '{
  "jsonrpc":"2.0",
  "method":"qbft_proposeValidatorVote",
  "params":["0xNewValidatorAddress", true],
  "id":1
}' http://localhost:8546

A majority of current validators must submit the same proposal before the next epoch boundary. With 4 validators, you need 3 votes. With 7, you need 4.
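The voting threshold is a strict majority, which works out to floor(n/2) + 1 (the helper name below is mine, for illustration):

```shell
# Votes needed to pass a validator proposal: strict majority, floor(n/2) + 1
votes_needed() { echo $(( $1 / 2 + 1 )); }

echo "4 validators -> $(votes_needed 4) matching votes"   # 3
echo "7 validators -> $(votes_needed 7) matching votes"   # 4
```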

Removing a Validator

The process is identical, but with false as the second parameter:

curl -X POST --data '{
  "jsonrpc":"2.0",
  "method":"qbft_proposeValidatorVote",
  "params":["0xValidatorToRemove", false],
  "id":1
}' http://localhost:8545

Checking Current Validators

Use qbft_getValidatorsByBlockNumber to see the active validator set:

curl -X POST --data '{
  "jsonrpc":"2.0",
  "method":"qbft_getValidatorsByBlockNumber",
  "params":["latest"],
  "id":1
}' http://localhost:8545

To see pending votes:

curl -X POST --data '{
  "jsonrpc":"2.0",
  "method":"qbft_getPendingVotes",
  "params":[],
  "id":1
}' http://localhost:8545

Validator Voting Best Practices

Never remove validators below the 3f+1 threshold. If you have 4 validators and remove one, you're at 3 — which tolerates zero faults. Always add the replacement before removing the old validator. Coordinate vote submissions across validators within a single epoch to avoid split votes carrying across boundaries.

Ready to deploy your own Besu network? Book a call with David to discuss your use case, or start deploying now with ChainLaunch.


How Do You Tune QBFT for Production Performance?

Block time configuration is the primary performance lever. The ConsenSys Besu benchmark suite (2024) measured QBFT throughput at 200-800 TPS with 4 validators, depending on transaction complexity. Tuning block time, gas limits, and validator placement can push you toward the higher end of that range.

Block Time vs. Throughput Trade-offs

| Block Period | TPS Range | Latency | Disk Growth | Best For |
|---|---|---|---|---|
| 1 second | 500-800 | Low | High (~6 GB/month) | High-throughput DeFi |
| 5 seconds | 200-500 | Medium | Moderate (~1.2 GB/month) | General enterprise |
| 10 seconds | 100-250 | Higher | Low (~600 MB/month) | Settlement, audit |
| 15 seconds | 50-150 | Highest | Lowest (~400 MB/month) | Archival, compliance |

Gas Limit Tuning

The gasLimit in genesis.json caps how much computation a single block can contain. The default 0x1fffffffffffff (essentially unlimited) is fine for development. In production, set it based on your expected transaction complexity.

Simple ETH transfers use 21,000 gas. ERC-20 transfers use 50,000-65,000 gas. Complex smart contract calls can use millions. If your average transaction uses 100,000 gas and you want 200 transactions per block, set the gas limit to at least 20,000,000.
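That sizing calculation looks like this (the per-transaction figures are illustrative, not benchmarks):

```shell
# Back-of-envelope gasLimit sizing: average gas per tx * target tx per block
avg_gas_per_tx=100000
target_tx_per_block=200
gas_limit=$(( avg_gas_per_tx * target_tx_per_block ))
echo "decimal gasLimit: $gas_limit"            # 20000000
printf 'genesis hex value: 0x%x\n' "$gas_limit"   # 0x1312d00
```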

Network Latency and Timeout Coordination

For geographically distributed validators, you need to account for network round-trip times. If your validators span US-East and EU-West (typical RTT: 80-120ms), each consensus round adds 240-360ms of pure network latency across three message phases.

Set requesttimeoutseconds to at least 4-5x your block period for cross-region deployments. Monitor round-change frequency as your primary indicator. If you see more than 1-2 round changes per hour under normal load, your timeouts are too tight.
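A quick sketch of the cross-region arithmetic (the RTT value is an assumption — measure your own links):

```shell
# Latency floor per consensus round: roughly one RTT per message phase.
rtt_ms=100    # assumed US-East <-> EU-West round trip
phases=3      # pre-prepare, prepare, commit
echo "per-round network latency: >= $(( phases * rtt_ms )) ms"

# Cross-region rule of thumb from above: timeout = 5x block period
block_period=5
echo "suggested requesttimeoutseconds: $(( block_period * 5 ))"
```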

For a broader comparison of deployment approaches and their performance implications, see our Hyperledger Besu deployment tools comparison.

How Do You Monitor QBFT Consensus Health?

Besu exposes Prometheus metrics for QBFT consensus at the /metrics endpoint. According to Hyperledger Besu's metrics documentation, enabling --metrics-enabled=true and --metrics-category=CONSENSUS provides the critical datapoints you need. Monitoring round changes, block proposal rates, and validator participation should be your baseline.

Key Metrics to Track

| Metric | What It Measures | Alert Threshold |
|---|---|---|
| besu_consensus_round | Current consensus round | > 0 sustained = proposer issues |
| besu_consensus_validators | Active validator count | < expected count |
| besu_blockchain_height | Latest block number | Stalled for > 2x block period |
| besu_synchronizer_in_sync | Sync status | false = node falling behind |
| besu_consensus_round_changes | Round-change count | Increasing rapidly = network issue |

Enabling Metrics

Add these flags to your Besu startup command:

besu --metrics-enabled=true \
     --metrics-host=0.0.0.0 \
     --metrics-port=9545 \
     --metrics-category=BLOCKCHAIN,CONSENSUS,SYNCHRONIZER

Alerting Rules

At minimum, set alerts for three conditions. First, block height stalled for more than three block periods — this means consensus has stopped entirely. Second, round number consistently above zero — this indicates proposer failures or network partitions. Third, validator count dropping below the expected value — a validator has disconnected or crashed.
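As a sketch, those three conditions translate into Prometheus alerting rules along these lines — verify the exact metric names against your Besu version's /metrics output before deploying, since they can vary between releases:

```yaml
# Sketch of Prometheus alerting rules for QBFT health (metric names
# taken from the table above; confirm them against your /metrics endpoint).
groups:
  - name: besu-qbft
    rules:
      - alert: BesuBlockHeightStalled
        expr: increase(besu_blockchain_height[1m]) == 0
        for: 1m
        labels: { severity: critical }
        annotations:
          summary: "No new blocks: consensus may have stopped entirely"
      - alert: BesuRoundAboveZero
        expr: besu_consensus_round > 0
        for: 5m
        labels: { severity: warning }
        annotations:
          summary: "Sustained round changes: proposer failure or partition"
      - alert: BesuValidatorMissing
        expr: besu_consensus_validators < 7   # set to your expected count
        for: 2m
        labels: { severity: warning }
        annotations:
          summary: "Validator count below expected: node down or removed"
```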

In production networks I've operated, round-change alerts catch 90% of consensus issues before they become visible to application users.

How Do You Migrate from IBFT 2.0 to QBFT?

Besu supports an in-place transition from IBFT 2.0 to QBFT using a consensus protocol schedule in the genesis file. The Besu migration guide specifies a coordinated fork approach — all validators switch at the same block height. This avoids chain splits and requires no downtime.

Step-by-Step Migration

Step 1: Choose a future block number for the transition. Pick a block far enough ahead to give all validator operators time to update their genesis files. Calculate it based on current block height plus a comfortable buffer — typically 1,000-5,000 blocks.
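The target-block arithmetic is trivial; in practice you would read the current height from eth_blockNumber on a live node (the values below are illustrative):

```shell
# Pick the transition block: current chain head plus a coordination buffer.
current_block=148000   # assumed: taken from eth_blockNumber on a live node
buffer=2000            # ~2000 blocks at 5 s/block gives ~2.8 hours of lead time
transition_block=$(( current_block + buffer ))
echo "set transitions.qbft[0].block to $transition_block"
```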

Step 2: Update the genesis file on every validator node. Add a transitions section specifying the QBFT switch:

{
  "config": {
    "chainId": 1337,
    "ibft2": {
      "blockperiodseconds": 5,
      "epochlength": 30000,
      "requesttimeoutseconds": 10
    },
    "transitions": {
      "qbft": [
        {
          "block": 150000,
          "blockperiodseconds": 5,
          "epochlength": 30000,
          "requesttimeoutseconds": 10
        }
      ]
    }
  }
}

Step 3: Restart all validator nodes with the updated genesis file before the transition block. They'll continue running IBFT 2.0 until block 150,000, then switch to QBFT automatically.

Step 4: After the transition block, verify QBFT is active:

curl -X POST --data '{
  "jsonrpc":"2.0",
  "method":"qbft_getValidatorsByBlockNumber",
  "params":["latest"],
  "id":1
}' http://localhost:8545

If the qbft_ namespace returns results, the migration succeeded. The ibft_ namespace will no longer work for blocks after the transition.

What Can Go Wrong

The most common failure: not all validators update their genesis file before the transition block. If some validators switch to QBFT while others remain on IBFT 2.0, the network partitions. Always confirm every validator operator has deployed the updated configuration before the target block arrives.

What Are Common QBFT Troubleshooting Scenarios?

Most QBFT issues trace back to three root causes: network connectivity between validators, time synchronization, and misconfigured genesis parameters. In production networks, I've found that 80% of consensus problems resolve by checking these fundamentals first.

Consensus Not Progressing

Symptoms: Block height is stuck, no new blocks produced.

Diagnosis:

  1. Check how many validators are online: qbft_getValidatorsByBlockNumber
  2. Verify each validator can reach the others via P2P (check peer count with net_peerCount)
  3. Look for round-change messages in logs: --logging=DEBUG

Common causes: Fewer than ceil(2n/3) validators are reachable. Firewall rules blocking P2P ports. Validators using different genesis files.

Frequent Round Changes

Symptoms: Blocks are produced but the besu_consensus_round metric frequently shows values > 0.

Diagnosis: Check network latency between validators. Verify requesttimeoutseconds is appropriate for your network topology.

Common causes: Proposer node is overloaded. Network latency exceeds the request timeout. One validator's clock is significantly skewed.

Validator Not Participating

Symptoms: One validator isn't signing blocks even though it's online.

Diagnosis: Verify the validator's address appears in qbft_getValidatorsByBlockNumber. Check that the node's key file matches the expected validator address.

Common causes: Wrong node key configured. Validator was removed during a previous epoch. The node synced from a snapshot that predates its addition to the validator set.

Debug Logging

Enable detailed consensus logging to diagnose stubborn issues:

besu --logging=DEBUG \
     --Xlog-filter=org.hyperledger.besu.consensus.qbft=TRACE

This produces verbose output. Use it temporarily during troubleshooting, not in steady-state production.

What Security Considerations Apply to QBFT Networks?

Validator key compromise is the highest-impact security risk in a QBFT network. If an attacker controls f+1 private keys in a 3f+1 network, they can halt consensus entirely. According to a Chainalysis 2024 report, private key compromise accounted for $2.2 billion in crypto losses that year. Enterprise networks need robust key management from day one.

Validator Key Management

Never store validator private keys in plain text on validator nodes. Use hardware security modules (HSMs) or cloud KMS services. Besu supports external key signing through its --security-module flag.

Best practices:

  • Generate validator keys offline or in a secure enclave
  • Use AWS KMS or HashiCorp Vault for key storage (see our AWS KMS deployment guide)
  • Rotate keys periodically by adding a new validator address and removing the old one
  • Maintain secure backups of key material in geographically separated locations

Network-Level Security

Configure TLS for all P2P communication between validators. Restrict RPC access to authorized clients only — never expose JSON-RPC endpoints to the public internet without authentication.

besu --p2p-host=0.0.0.0 \
     --p2p-port=30303 \
     --rpc-http-host=127.0.0.1 \
     --rpc-http-port=8545 \
     --host-allowlist="localhost,your-app-server.internal" \
     --rpc-http-cors-origins="none"

DoS Protection

Rate-limit JSON-RPC requests at the reverse proxy level. If validators accept connections from non-validator nodes (bootnodes, fullnodes), use node permissioning to whitelist known enode URLs:

besu --permissions-nodes-config-file-enabled=true \
     --permissions-nodes-config-file=node-permissions.toml
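The permissions file itself is a small TOML document; recent Besu releases use a nodes-allowlist key (older releases used nodes-whitelist), and the enode URLs below are placeholders:

```toml
# Sketch of node-permissions.toml: only the listed enode URLs may connect.
# Key name is "nodes-allowlist" in recent Besu releases; check your version.
nodes-allowlist=[
  "enode://<validator-1-pubkey>@10.0.1.10:30303",
  "enode://<validator-2-pubkey>@10.0.2.10:30303"
]
```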

How Should You Plan Network Topology for Production?

Production QBFT networks should separate validator nodes from application-facing nodes. A Hyperledger Foundation architecture guide (2024) recommends at least three tiers: validators, bootnodes, and fullnodes. Validators run consensus, bootnodes handle peer discovery, and fullnodes serve RPC requests from applications.

| Node Type | Count | Role | Publicly Accessible |
|---|---|---|---|
| Validators | 7+ | Consensus, block production | No |
| Bootnodes | 2-3 | Peer discovery | Limited (P2P only) |
| Fullnodes | 2+ per app | Serve RPC, sync chain | Yes (via load balancer) |

Why Separate Validators from RPC Traffic

Validators must focus on consensus. RPC requests — especially eth_call or complex event queries — consume CPU and memory. If a validator is busy processing RPC traffic, it might miss a consensus round timeout, triggering unnecessary round changes. Dedicated fullnodes absorb application traffic while validators stay responsive.

This architecture also improves security. Validators can run behind strict firewalls with no public-facing ports except P2P. Application traffic hits the fullnodes through a load balancer, with rate limiting and authentication enforced at the edge.

To explore how this architecture compares to other deployment approaches, see our blockchain platform selection guide.

FAQ

What's the minimum number of validators for a production QBFT network?

Four validators is the absolute minimum, tolerating one Byzantine fault (3f+1 where f=1). However, the ConsenSys Besu documentation recommends seven validators for production. Seven tolerates two faults and provides a more comfortable operating margin. Losing one node out of four puts you at the fault-tolerance boundary; losing one out of seven leaves room.

Can I change block time on a running QBFT network?

Not without a coordinated restart. Block period is set in the genesis file, which is immutable after network creation. To change it, you'd need to use the transitions mechanism to schedule a parameter change at a future block — similar to the IBFT 2.0-to-QBFT migration process. All validators must update their genesis files before the transition block.

Is QBFT the same as PBFT?

No, but they share lineage. PBFT (Practical Byzantine Fault Tolerance) was proposed by Castro and Liskov in 1999. QBFT builds on the same three-phase commit structure but includes optimizations for blockchain use cases — specifically, round-change improvements, validator voting, and epoch-based governance. Think of QBFT as PBFT adapted for Ethereum-compatible networks with dynamic validator sets.

Does QBFT work with Besu on public Ethereum?

No. QBFT is designed exclusively for permissioned networks with a known validator set. Public Ethereum uses Proof of Stake (the Beacon Chain). You'd only use QBFT when deploying a private or consortium Besu network where validators are identified and governed by agreement, not by stake.

How does QBFT finality compare to Proof of Stake?

QBFT provides immediate, absolute finality. A block is final the instant it's committed. Proof of Stake on public Ethereum requires waiting for epochs to be finalized, which takes approximately 12-15 minutes. For enterprise applications that need instant settlement confirmation, QBFT's deterministic finality is a significant advantage.

What happens if more than f validators go offline simultaneously?

The network halts. It cannot produce new blocks until enough validators come back online to form a ceil(2n/3) quorum. Existing data remains intact and the network resumes automatically once quorum is restored. This is a safety property — QBFT will never produce conflicting blocks, even at the cost of temporary unavailability.

Conclusion

QBFT is the definitive consensus mechanism for production Hyperledger Besu networks. Its three-phase commit protocol guarantees immediate finality. The 3f+1 validator model provides quantifiable Byzantine fault tolerance. The EEA specification compliance ensures interoperability with other QBFT implementations.

The practical decisions that matter most: choose seven validators for production, set block period to 5 seconds as a baseline, distribute validators across availability zones, separate validator nodes from RPC-serving fullnodes, and monitor round-change frequency as your primary health indicator. Get these fundamentals right, and QBFT will run reliably for years.

If you're building on Besu, start with our 2-minute Besu deployment guide to get a working network, then apply the configuration and monitoring practices from this guide to harden it for production. For teams evaluating AI-assisted development workflows, our guide on building a Besu PoC with Claude Code shows how to accelerate the process.

Related guides: Deploy a Besu Network in 2 Minutes | Hyperledger Fabric vs Besu | Besu Deployment Tools Comparison | Blockchain Platform Selection Guide | Build a Besu PoC with Claude Code



David Viejo is the founder of ChainLaunch and a Hyperledger Foundation contributor. He created the Bevel Operator Fabric project and has been building blockchain infrastructure tooling since 2020.
