chaindeploy v0.4.0: Fabric-X is Now Generally Available

Written by David Viejo

Three days ago we tagged v0.4.0-beta.1 and asked you to break it. You did, mostly in the boring places we hoped you would — bind-mount cache races on Docker Desktop, port collisions when chaindeploy shares a host with chainlaunch-pro, an icon component that was on my disk but never git add-ed. All fixed in main. Today v0.4.0 is tagged from the same commit that hosts the merged UI polish, the GitHub release is out, and the binaries are building. Fabric-X support in chaindeploy is now generally available.

If you read the beta announcement, the headline is unchanged: chaindeploy is the first open-source operator with first-class Hyperledger Fabric-X support, ships as a single Go binary with embedded UI, and gets you from git clone to a running 4-party Arma BFT network in roughly five minutes. What changed in three days is that we have receipts now.

TL;DR: chaindeploy v0.4.0 promotes Fabric-X to GA after a 3-day beta soak. Provision a complete network from the web UI's Quick Start wizard or chainlaunch fabricx quickstart --parties 4 on the CLI. Two MSP modes (single shared AcmeMSP for sample apps, multi-MSP for production-shape Arma BFT). Linux arm64 binary now ships alongside amd64 and the macOS pair. Apache 2.0, single binary, download here.


What "GA" means for an open-source operator

Let me be specific, because "GA" gets thrown around. For chaindeploy v0.4.0, GA means three things.

The Fabric-X subsystem itself is no longer behind a beta tag. The committer group, party-owned children, role-aware lifecycle, per-role Prometheus metrics, and cert renewal are the same code that ran the beta — but they have now been exercised against the FabricX E2E job in CI on every PR for the last three days, plus whatever you all threw at the beta binary. We did not find a regression that warranted respinning the release.

The CLI surface is stable. chainlaunch fabricx quickstart shipped in the beta as a way to bypass the wizard for scripted environments. Its flags (--network-name, --parties, --mode, --single-msp, --base-port, etc.) are now part of the documented API contract. We will not break them in a 0.4.x patch.

The UI surfaces FabricX where you would expect. This is the polish that landed between beta.1 and the final tag, in PR #28. The brand icon (a cyan "X" badge on the Fabric stroke logo) renders in the dashboard's Platform Distribution panel, the resource-filter tabs, the network and node lists, and the "Create Network" dropdown actions. Small thing, but if you cannot tell a FabricX node from a classic Fabric peer at a glance, you will misread your own dashboard.

What GA does not mean: that you should run a single chaindeploy instance as the only operator for a Fabric-X production deployment without thinking about HA, backup verification, and observability beyond the built-in Prometheus exporters. It means the runtime is stable enough to commit to. The operator-of-operators concerns are still on you.


Free resource

87% of Blockchain Projects Die Before Production — Readiness Scorecard

Score your project across 5 dimensions: infrastructure, key management, monitoring, DR, and team readiness. Know exactly where the gaps are before they kill your timeline.

No spam. Unsubscribe anytime.

Two ways to provision a network

There are now two paths to a running Fabric-X network in chaindeploy. They build the same thing.

Path 1: Web UI Quick Start. Open http://localhost:8100, log in, and pick Networks → Fabric-X → Quick-start. The wizard runs a phased provisioning flow (organizations, postgres service, per-party databases, orderer groups, committer, network creation, join) and streams progress. Single-MSP mode is the default and what you want for token-sdk-x sample apps — one shared AcmeMSP owns all four parties, so endorsements collapse to one signer. Multi-MSP mode is the production-shape Arma BFT topology with one MSP per party.

Path 2: CLI. Same flow, no browser:

# macOS / Windows Docker Desktop only:
export CHAINLAUNCH_FABRICX_LOCAL_DEV=true
 
chainlaunch fabricx quickstart \
  --network-name my-fabricx \
  --parties 4 \
  --mode single

The CLI is what CI pipelines and provisioning scripts should use. It exits non-zero on failure, supports --clean to wipe a prior bundle of the same name, and has retry logic baked in for the Docker bind-mount cache races we saw the most during the beta. Full flag reference lives in the CLI docs.

A single-MSP 4-party quickstart spins up roughly 22 containers (16 orderer-group containers + 5 committer containers + 1 shared Postgres). Multi-MSP is closer to 37, because each party gets its own committer instead of sharing one. Both fit on a 16 GB laptop with room to spare.
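The arithmetic behind those counts is easy to check. A back-of-the-envelope sketch in Python, assuming the per-role figures quoted above for a 4-party network (16 orderer-group containers, 5 containers per committer, 1 shared Postgres); the function name and the committer-size constant are illustrative, not chaindeploy internals:

```python
def fabricx_container_count(parties: int, multi_msp: bool) -> int:
    """Rough container count for a 4-party quickstart network.

    Assumptions, taken from the figures in this post:
    - 16 orderer-group containers for a 4-party network
    - a committer is 5 containers
    - single-MSP shares one committer; multi-MSP gives each party its own
    - 1 shared Postgres container
    """
    orderer_group = 16
    postgres = 1
    committers = 5 * (parties if multi_msp else 1)
    return orderer_group + committers + postgres

print(fabricx_container_count(4, multi_msp=False))  # 22
print(fabricx_container_count(4, multi_msp=True))   # 37
```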

If you want the deeper walkthrough — screenshots, namespace creation, the architecture of why Fabric-X needs 22 containers in the first place — the FabricX Quickstart post covers it end-to-end.


Namespaces, not channels

One of the things you have to internalize about Fabric-X if you are coming from classic Fabric: the channel model is gone. A Fabric-X network has exactly one channel, always called arma. Logical isolation comes from namespaces, which partition the single channel and each get their own postgres table on every committer.

Practically, this means:

  • You create a namespace by submitting a transaction, not by editing channel configs offline.
  • Namespace IDs match ^[a-z0-9_]+$ with a 60-character cap. Hyphens and uppercase get rejected on commit with MALFORMED_NAMESPACE_ID_INVALID. The web UI and CLI both check the rule client-side so you fail fast.
  • Each namespace's state is queryable via the committer's query-service over the Postgres protocol. This is the "rich queries" Fabric people have been asking for since 2018, and it is finally not behind CouchDB.
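The namespace-ID rule is simple enough to enforce in your own tooling before you ever submit the transaction. A minimal sketch of the client-side check, using the regex and length cap stated above (the function name is hypothetical, not a chaindeploy API):

```python
import re

# Rule from the post: lowercase alphanumerics and underscores, 60-char cap.
_NAMESPACE_RE = re.compile(r"^[a-z0-9_]+$")

def valid_namespace_id(ns: str) -> bool:
    """Mirror the client-side check; invalid IDs would otherwise be
    rejected on commit with MALFORMED_NAMESPACE_ID_INVALID."""
    return len(ns) <= 60 and bool(_NAMESPACE_RE.fullmatch(ns))

print(valid_namespace_id("token_ledger_1"))  # True
print(valid_namespace_id("Token-Ledger"))    # False: uppercase and hyphen
print(valid_namespace_id("a" * 61))          # False: over the 60-char cap
```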

The committer's query-service exposes a Postgres-compatible read endpoint per namespace. Your application can SELECT against committed state directly. The dump path also matters: chaindeploy runs pg_dumpall inside every managed Postgres container before each backup snapshot, so a backup contains both the live PGDATA directory (recoverable via WAL replay) and a transactionally-consistent SQL dump you can psql -f if WAL recovery is not enough. Failures of pg_dumpall get logged and surfaced in backup metadata but do not abort the backup — partial is still better than none.
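The log-and-continue policy for pg_dumpall failures can be illustrated in a few lines. A sketch, assuming a Docker-hosted Postgres and an illustrative metadata shape; none of these names are chaindeploy internals:

```python
import logging
import subprocess

log = logging.getLogger("backup")

def snapshot_with_dump(container: str, dump_cmd=None) -> dict:
    """Attempt a SQL dump as part of a backup snapshot.

    Mirrors the policy described in the post: a pg_dumpall failure is
    logged and recorded in the backup metadata but does not abort the
    snapshot. The metadata keys here are illustrative.
    """
    if dump_cmd is None:
        dump_cmd = ["docker", "exec", container, "pg_dumpall", "-U", "postgres"]
    meta = {"container": container, "sql_dump_ok": True, "dump_error": None}
    try:
        subprocess.run(dump_cmd, check=True, capture_output=True, timeout=300)
    except (subprocess.CalledProcessError, FileNotFoundError,
            subprocess.TimeoutExpired) as exc:
        # Partial is still better than none: record the failure, keep going.
        meta["sql_dump_ok"] = False
        meta["dump_error"] = str(exc)
        log.warning("pg_dumpall failed for %s: %s", container, exc)
    # ...the PGDATA snapshot itself would proceed here regardless...
    return meta
```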



Linux arm64 ships, and other things you might miss

A few items from the changelog that are easy to skim past but matter operationally.

linux-arm64 is now a release artifact. Previously chaindeploy shipped binaries for darwin-amd64, darwin-arm64, and linux-amd64. As of v0.4.0 there is also linux-arm64. If you run on Graviton, on Ampere, or on a Raspberry Pi cluster (you know who you are), you no longer have to build from source.

Backup credential encryption at rest. Backup target configs — S3 keys, EBS credentials, VMware vCenter logins — are now encrypted at rest in the SQLite database. The encryption key is the same one chaindeploy uses for organization signing keys and is auto-generated on first start (or supplied via KEY_ENCRYPTION_KEY if you want to manage it yourself). If you were holding off on configuring backups because plaintext credentials in SQLite made you nervous, the holdup is gone.
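The env-var-or-generate pattern described here is worth spelling out. A sketch of how such key resolution typically works; the on-disk location, file name, and hex encoding below are illustrative, not chaindeploy's actual layout, and only the KEY_ENCRYPTION_KEY variable name comes from the post:

```python
import os
import secrets
from pathlib import Path

def load_key_encryption_key(state_dir: str) -> str:
    """Resolve the key-encryption key: prefer KEY_ENCRYPTION_KEY from
    the environment, otherwise generate one on first start and persist
    it so later starts reuse the same key."""
    env_key = os.environ.get("KEY_ENCRYPTION_KEY")
    if env_key:
        return env_key
    key_file = Path(state_dir) / "kek"  # illustrative path, not chaindeploy's
    if key_file.exists():
        return key_file.read_text().strip()
    key = secrets.token_hex(32)  # 256-bit key, hex-encoded
    key_file.parent.mkdir(parents=True, exist_ok=True)
    key_file.write_text(key)
    key_file.chmod(0o600)  # keep the key readable by the operator only
    return key
```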

E2E test improvements. Playwright tests for login and user creation got a meaningful overhaul. Not glamorous, but it caught two bugs in the beta soak window that would have shipped to GA otherwise.

Postgres host port is configurable. The shared Postgres container in a Quick Start network used to bind to 15432 unconditionally. Now --postgres-port lets you pick. Useful when chaindeploy and chainlaunch-pro are on the same host and you do not want to play port roulette.
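If you script provisioning, you can check for that collision yourself before picking a value for --postgres-port. A small sketch; the fallback strategy is mine, not chaindeploy's:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if the TCP port can be bound, i.e. nothing holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def pick_postgres_port(start: int = 15432, tries: int = 20) -> int:
    """Scan upward from the default 15432 for the first free port."""
    for port in range(start, start + tries):
        if port_is_free(port):
            return port
    raise RuntimeError(f"no free port in {start}..{start + tries - 1}")
```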


What is not in v0.4.0 (and why)

Honest list, because the gap matters more than the marketing.

Fabric-X chaincode is not managed by chaindeploy. Business logic for Fabric-X lives in token-sdk-x applications outside the operator. chaindeploy provisions the network, the namespaces, and the cert material. It does not deploy a chaincode jar. This is by design — Fabric-X's execution model is fundamentally different from classic Fabric's chaincode lifecycle, and conflating the two would create the worst kind of leaky abstraction.

Fabric-X templates are not supported yet. The network template system that lets you export a Fabric or Besu network as a portable JSON and re-import it on another instance currently only handles fabric and besu platform values. Fabric-X variables (party identifiers, role-to-image bindings, namespace state) are not in the template schema yet. Targeting v0.5.0.

Plugin x-sources for token-sdk-x are Pro-only. ChainLaunch Pro ships two reference x-source handlers — fabricx-network and fabricx-identity — that resolve a network ID + organization ID into a connection bundle (router endpoint, query-service endpoint, TLS CA, MSP folder) for token-sdk-x applications running as plugins. The open-source build does not include them. If you need this and you are not a Pro customer, the docs page in the chaindeploy repo describes what the handler returns; you can reproduce it as a custom plugin in a few hundred lines.
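To make the "few hundred lines" claim concrete, here is a sketch of the bundle shape such a custom handler might return, built from the four fields the post names (router endpoint, query-service endpoint, TLS CA, MSP folder). Every key name, port, and path below is a placeholder, not the Pro handlers' actual schema:

```python
def build_connection_bundle(network_id: str, org_id: str,
                            host: str = "localhost") -> dict:
    """Resolve a network ID + organization ID into a connection bundle
    for a token-sdk-x application. Field names, ports, and filesystem
    layout are illustrative placeholders."""
    root = f"/var/chainlaunch/{network_id}/{org_id}"
    return {
        "network_id": network_id,
        "organization_id": org_id,
        "router_endpoint": f"grpcs://{host}:7050",
        "query_service_endpoint": f"postgresql://{host}:15432/{network_id}",
        "tls_ca_cert_path": f"{root}/tlsca.pem",
        "msp_dir": f"{root}/msp",
    }

bundle = build_connection_bundle("my-fabricx", "party1")
```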

HA for the chaindeploy operator itself is still single-node. chaindeploy uses an embedded SQLite database. You can run multiple chaindeploy instances against different databases and federate them via the Pro Sharing protocol, but there is no leader election or shared-state mode for a single logical operator with multiple replicas. This is on the roadmap and will land before we call the operator itself "production-ready" without caveats.


Try it

Pull v0.4.0:

# Latest binary, all platforms:
curl -fsSL https://chainlaunch.dev/deploy.sh | bash
 
# Or grab the platform-specific binary directly:
# https://github.com/LF-Decentralized-Trust-labs/chaindeploy/releases/tag/v0.4.0

Then either:

# Web UI:
open http://localhost:8100
# Networks → Fabric-X → Quick-start
 
# Or CLI:
export CHAINLAUNCH_FABRICX_LOCAL_DEV=true   # macOS / Windows only
chainlaunch fabricx quickstart --network-name demo --parties 4

The full release notes live at docs.chainlaunch.dev/release-notes. The Quickstart guide is at docs.chainlaunch.dev/fabricx/quickstart. Source, issues, and the LF Decentralized Trust Discord #chainlaunch channel are the right places to land bugs and questions.

A note on the timeline: three days from beta tag to GA is short, and that is intentional. The beta was tagged on a Saturday, soaked over a weekend that included real users running real quickstart flows on real hardware, and the only blocker that appeared was the icon-component-not-staged bug I just owned up to in this post. Shipping fast is only safe when you actually read the bug reports — and we read every one.

If you ship Fabric in production, the question is no longer whether Fabric-X is real. The Hyperledger Performance Working Group benchmarks have been showing Arma sustaining 15,000+ TPS for almost a year now. The question is what tooling you will use to operate it. chaindeploy v0.4.0 is one answer. It is not the only one, but it is the only open-source one today, and that matters.

Pull it down. Tell us what hurts. We are listening.
