Installation¶
This page covers every way to deploy HyperbyteDB: pre-built Docker images, Docker Compose with a full monitoring stack, building from source, and Kubernetes—including local kind and the HyperbyteDB operator for production deployments.
Pre-built Docker Image (Recommended)¶
The fastest way to run HyperbyteDB. Images are published to GitHub Container Registry.
```shell
docker pull ghcr.io/hyperbyte-cloud/hyperbytedb:latest

docker run -d \
  --name hyperbytedb \
  -p 8086:8086 \
  -v hyperbytedb-data:/var/lib/hyperbytedb \
  -e HYPERBYTEDB__SERVER__BIND_ADDRESS=0.0.0.0 \
  -e HYPERBYTEDB__SERVER__PORT=8086 \
  ghcr.io/hyperbyte-cloud/hyperbytedb:latest
```
Verify it started by checking the container logs with docker logs hyperbytedb.
The container stores all data under /var/lib/hyperbytedb. Mount a volume, as in the run command above, to persist data across container restarts.
Note:
HYPERBYTEDB__SERVER__BIND_ADDRESS=0.0.0.0 is required so the process accepts connections from outside the container.
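The double-underscore pattern in these variable names suggests a SECTION__KEY mapping onto the TOML config file used later on this page; that mapping is an inference here, not something this page states. A small shell illustration of the naming convention:

```shell
# HYPERBYTEDB__SERVER__BIND_ADDRESS -> server.bind_address (presumed mapping)
var="HYPERBYTEDB__SERVER__BIND_ADDRESS"
path=$(printf '%s' "${var#HYPERBYTEDB__}" | tr '[:upper:]' '[:lower:]' | sed 's/__/./g')
echo "$path"
```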
Docker Compose (Full Stack)¶
The included docker-compose.yml starts HyperbyteDB with Telegraf, Prometheus, and Grafana for a complete observability setup.
| Service | Port | Description |
|---|---|---|
| HyperbyteDB | 8086 | Time-series database (API + Prometheus metrics) |
| Telegraf | — | Collects host metrics and writes to HyperbyteDB |
| Prometheus | 9090 | Scrapes HyperbyteDB /metrics |
| Grafana | 3000 | Pre-provisioned dashboards (login: admin/admin) |
Quick smoke test:
```shell
# Create a database
curl -sS -XPOST 'http://localhost:8086/query' --data-urlencode 'q=CREATE DATABASE mydb'

# Write a point
curl -sS -XPOST 'http://localhost:8086/write?db=mydb' \
  --data-binary 'cpu,host=server01,region=us-west usage_idle=95.2,usage_user=4.8'

# Wait for flush (~10s), then query
curl -sS -G 'http://localhost:8086/query' \
  --data-urlencode 'db=mydb' \
  --data-urlencode 'q=SELECT * FROM cpu'
```
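The write body above appears to be InfluxDB-style line protocol: a measurement name, comma-separated tags, a space, then comma-separated fields. A sketch assembling the same point in shell:

```shell
# Assemble a line-protocol point: <measurement>,<tags> <fields>
measurement="cpu"
tags="host=server01,region=us-west"
fields="usage_idle=95.2,usage_user=4.8"
point="${measurement},${tags} ${fields}"
echo "$point"
```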
Building from Source¶
Prerequisites¶
| Requirement | Details |
|---|---|
| Rust | Latest stable toolchain (rustup update stable) |
| libchdb | Embedded ClickHouse library |
| System packages | clang, llvm-dev, libclang-dev, pkg-config, libssl-dev |
| Platform | Linux x86_64 |
Install system dependencies¶
```shell
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y \
  clang llvm-dev libclang-dev pkg-config libssl-dev build-essential

# Fedora/RHEL
sudo dnf install -y clang llvm-devel clang-devel pkgconfig openssl-devel
```
Install libchdb¶
Install libchdb following the upstream libchdb instructions; the install places libchdb.so in /usr/local/lib/ and chdb.h in /usr/local/include/.
Verify:
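A minimal existence check for those two files (paths from above; prints MISSING for anything the install did not put in place):

```shell
# Check the locations the build expects (paths from the section above)
for f in /usr/local/lib/libchdb.so /usr/local/include/chdb.h; do
  if [ -f "$f" ]; then
    echo "found: $f"
  else
    echo "MISSING: $f"
  fi
done
```

If the files exist but the binary later fails to find libchdb.so at runtime, running sudo ldconfig (or exporting LD_LIBRARY_PATH=/usr/local/lib) is the usual fix.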
Build¶
```shell
# Debug build (faster compilation, slower runtime)
cargo build

# Release build (optimized, recommended for production)
cargo build --release
```
The release build uses LTO, single codegen unit, and strip for maximum performance and minimum binary size.
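Those release settings correspond to a Cargo profile roughly like the following; this is a sketch inferred from the description above, not copied from the repository's Cargo.toml:

```toml
[profile.release]
lto = true         # link-time optimization
codegen-units = 1  # single codegen unit for better optimization
strip = true       # strip symbols for a smaller binary
```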
Run¶
```shell
# Start with default config (./config.toml)
./target/release/hyperbytedb serve

# Start with a custom config file
./target/release/hyperbytedb -c /etc/hyperbytedb/config.toml serve
```
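For reference, a minimal config.toml sketch, assuming the HYPERBYTEDB__SERVER__* env vars shown earlier correspond to a [server] table; the key names are inferred, not confirmed by this page:

```toml
# Hypothetical minimal config.toml (keys inferred from the env vars above)
[server]
bind_address = "0.0.0.0"
port = 8086
```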
CLI Commands¶
| Command | Description |
|---|---|
| hyperbytedb serve | Start the HTTP server |
| hyperbytedb backup --output <path> | Create a full backup |
| hyperbytedb restore --input <path> | Restore from a backup |

| Flag | Default | Description |
|---|---|---|
| -c, --config | config.toml | Path to TOML config file |
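Combining the global flag with a subcommand, a nightly backup with a dated filename might look like this; the backup directory is illustrative, and the command is only composed and printed here since running it needs an installed server:

```shell
# Compose a dated backup invocation (backup directory is illustrative)
out="/var/backups/hyperbytedb/backup-$(date +%Y%m%d).tar"
cmd="hyperbytedb -c /etc/hyperbytedb/config.toml backup --output $out"
echo "$cmd"
```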
Docker Compose Cluster (3-Node)¶
For a clustered deployment with Docker Compose, create a compose file with three HyperbyteDB services. Each node needs a unique NODE_ID and must list the other nodes as peers:
```yaml
services:
  db1:
    image: ghcr.io/hyperbyte-cloud/hyperbytedb:latest
    hostname: db1
    ports: ["8086:8086"]
    volumes: [db1-data:/var/lib/hyperbytedb]
    environment:
      HYPERBYTEDB__SERVER__BIND_ADDRESS: "0.0.0.0"
      HYPERBYTEDB__CLUSTER__ENABLED: "true"
      HYPERBYTEDB__CLUSTER__NODE_ID: "1"
      HYPERBYTEDB__CLUSTER__CLUSTER_ADDR: "db1:8086"
      HYPERBYTEDB__CLUSTER__PEERS: "db2:8086,db3:8086"
      HYPERBYTEDB__CLUSTER__REPLICATION_LOG_DIR: "/var/lib/hyperbytedb/replication_log"
    networks: [cluster]
  db2:
    image: ghcr.io/hyperbyte-cloud/hyperbytedb:latest
    hostname: db2
    ports: ["8087:8086"]
    volumes: [db2-data:/var/lib/hyperbytedb]
    environment:
      HYPERBYTEDB__SERVER__BIND_ADDRESS: "0.0.0.0"
      HYPERBYTEDB__CLUSTER__ENABLED: "true"
      HYPERBYTEDB__CLUSTER__NODE_ID: "2"
      HYPERBYTEDB__CLUSTER__CLUSTER_ADDR: "db2:8086"
      HYPERBYTEDB__CLUSTER__PEERS: "db1:8086,db3:8086"
      HYPERBYTEDB__CLUSTER__REPLICATION_LOG_DIR: "/var/lib/hyperbytedb/replication_log"
    networks: [cluster]
  db3:
    image: ghcr.io/hyperbyte-cloud/hyperbytedb:latest
    hostname: db3
    ports: ["8088:8086"]
    volumes: [db3-data:/var/lib/hyperbytedb]
    environment:
      HYPERBYTEDB__SERVER__BIND_ADDRESS: "0.0.0.0"
      HYPERBYTEDB__CLUSTER__ENABLED: "true"
      HYPERBYTEDB__CLUSTER__NODE_ID: "3"
      HYPERBYTEDB__CLUSTER__CLUSTER_ADDR: "db3:8086"
      HYPERBYTEDB__CLUSTER__PEERS: "db1:8086,db2:8086"
      HYPERBYTEDB__CLUSTER__REPLICATION_LOG_DIR: "/var/lib/hyperbytedb/replication_log"
    networks: [cluster]
volumes:
  db1-data:
  db2-data:
  db3-data:
networks:
  cluster:
    driver: bridge
```
Write to any node; all nodes see the same data after replication.
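With the host-port mappings from the compose file (db1 on 8086, db2 on 8087, db3 on 8088), the same query can be issued against any node. A sketch that prints the per-node query commands, echoed rather than run since they need the cluster up:

```shell
# Host ports from the compose file: db1 -> 8086, db2 -> 8087, db3 -> 8088
for port in 8086 8087 8088; do
  echo "curl -sS -G http://localhost:${port}/query" \
       "--data-urlencode db=mydb --data-urlencode 'q=SELECT * FROM cpu'"
done
```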
Kubernetes (kind)¶
The deploy/kind/ directory contains manifests for a local Kubernetes cluster using kind. Applying those manifests creates a kind cluster with:
- HyperbyteDB StatefulSet (2 replicas by default)
- Prometheus and Grafana for monitoring
- NodePort services mapped to localhost ports
See Administration for cluster operations details.
Kubernetes (HyperbyteDB operator)¶
For production-style deployments on a real Kubernetes cluster, use the HyperbyteDB Kubernetes operator. It extends the Kubernetes API with the HyperbytedbCluster, HyperbytedbBackup, and HyperbytedbRestore custom resources, and reconciles StatefulSets, Services, and optional monitoring resources.
| Doc | What it covers |
|---|---|
| Operator overview | Custom resources, lifecycle phases, capabilities |
| Operator installation (Helm) | OCI chart install, upgrade, uninstall, raw YAML |
| HyperbytedbCluster | CRD fields, examples, TLS, autoscaling |
| Backup and restore (operator) | S3 backups and restores via custom resources |
| hyperbytedb-proxy | Optional health-aware HTTP proxy in front of the database Service |
The kind-based setup above is aimed at local development. Treat the operator path as the supported way to run managed HyperbyteDB on Kubernetes in production.
Next Steps¶
- Configuration — Tune settings for your workload
- Basic operations — Start writing and querying data