# Configuration Reference
HyperbyteDB loads configuration in this order (later sources override earlier):

- Built-in defaults
- TOML config file (path from the `--config`/`-c` flag; defaults to `./config.toml`)
- Environment variables with the prefix `HYPERBYTEDB__`
## Environment Variable Format

Double underscores (`__`) separate the section name and key. For nested sections, add another level:
```bash
HYPERBYTEDB__SERVER__PORT=9090
HYPERBYTEDB__STORAGE__S3__BUCKET=my-bucket
HYPERBYTEDB__CLUSTER__PEERS="node2:8086,node3:8086"
```
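Each variable maps back to a TOML section and key; for instance, the second override above is equivalent to this fragment:

```toml
# Equivalent of HYPERBYTEDB__STORAGE__S3__BUCKET=my-bucket
[storage.s3]
bucket = "my-bucket"
```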
## [server]

HTTP server settings.

| Key | Type | Default | Description |
|---|---|---|---|
| bind_address | string | "0.0.0.0" | Network interface to bind to |
| port | integer | 8086 | HTTP listen port |
| max_body_size_bytes | integer | 26214400 | Maximum request body size (25 MB) |
| request_timeout_secs | integer | 30 | HTTP request timeout |
| query_timeout_secs | integer | 30 | InfluxQL query execution timeout |
| max_concurrent_queries | integer | 0 | Max concurrent InfluxQL executions; 0 = unlimited (bounded only by work-stealing scheduling and available resources). This is the knob that bounds concurrency in front of the single chDB session. |
| tls_enabled | boolean | false | Enable HTTPS with TLS |
| tls_cert_path | string | "" | Path to PEM certificate file |
| tls_key_path | string | "" | Path to PEM private key file |
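As an illustration, a fragment that bounds query concurrency explicitly instead of leaving it unlimited; the values are arbitrary examples, not recommendations:

```toml
[server]
port = 8086
query_timeout_secs = 60        # allow longer-running InfluxQL queries
max_concurrent_queries = 32    # cap concurrent executions; 0 would mean unlimited
```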
## [storage]

Storage backends and directory paths.

| Key | Type | Default | Description |
|---|---|---|---|
| data_dir | string | "./data" | Directory for Parquet data files |
| wal_dir | string | "./wal" | Write-ahead log directory (RocksDB) |
| meta_dir | string | "./meta" | Metadata directory (RocksDB) |
| backend | string | "local" | Storage backend: "local" or "s3" |
### [storage.s3]

Required when backend = "s3". Works with AWS S3, MinIO, Cloudflare R2, and other S3-compatible services.

| Key | Type | Default | Description |
|---|---|---|---|
| bucket | string | "" | S3 bucket name |
| prefix | string | "" | Object key prefix |
| region | string | "us-east-1" | AWS region |
| endpoint | string | "" | Custom endpoint URL (for MinIO, R2, etc.) |
| access_key_id | string | "" | Access key |
| secret_access_key | string | "" | Secret key |
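A sketch of pointing the S3 backend at a MinIO-style endpoint; the bucket name, endpoint, and credentials below are placeholders:

```toml
[storage]
backend = "s3"

[storage.s3]
bucket = "hyperbytedb"                    # placeholder bucket name
endpoint = "http://minio.internal:9000"   # custom endpoint (MinIO, R2, etc.)
region = "us-east-1"
access_key_id = "minio-access-key"        # placeholder credential
secret_access_key = "minio-secret-key"    # placeholder credential
```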
## [flush]

Controls the background WAL-to-Parquet flush pipeline.

| Key | Type | Default | Description |
|---|---|---|---|
| interval_secs | integer | 10 | How often the flush service runs (seconds) |
| wal_size_threshold_mb | integer | 64 | WAL size that triggers an immediate flush (MB) |
| time_bucket_duration | string | "1h" | Time partitioning granularity for Parquet files ("1h" or "1d") |
| max_points_per_batch | integer | 0 | Max points per Parquet batch; 0 = auto-detect based on available memory |
| wal_batch_size | integer | 64 | WAL group commit: max entries to coalesce per write batch; 0 = disabled |
| wal_batch_delay_us | integer | 200 | WAL group commit: max microseconds to wait for more entries before flushing |
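For illustration, a fragment that leans harder on WAL group commit for write-heavy workloads; the numbers are illustrative, not tuned recommendations:

```toml
[flush]
interval_secs = 10
wal_size_threshold_mb = 64
wal_batch_size = 128        # coalesce more entries per commit
wal_batch_delay_us = 500    # wait up to 0.5 ms for extra entries
```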
## [compaction]

Background Parquet compaction and optional cluster self-repair.

| Key | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true | Enable background compaction |
| interval_secs | integer | 30 | How often the compaction loop runs |
| min_files_to_compact | integer | 2 | Minimum Parquet files per measurement before merges run |
| target_file_size_mb | integer | 256 | Target output file size (MB) |
| bucket_duration | string | "1h" | Time bucket: "1h" (hourly) or "1d"/"24h" (daily) |
| verified_compaction_age_secs | integer | 3600 | Minimum data age (seconds) before hash verification |
| self_repair_enabled | boolean | true | Enable membership-driven repair in cluster mode |
| max_repair_checks_per_cycle | integer | 128 | Max repair checks per compaction tick |
| compact_all_max_inflight | integer | 8 | Max concurrent measurement compactions for POST /internal/compact |
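For example, a hypothetical setup that favors fewer, larger files with daily buckets; the values are illustrative:

```toml
[compaction]
enabled = true
interval_secs = 60
bucket_duration = "1d"        # daily time buckets
target_file_size_mb = 512     # larger merged output files
```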
## [chdb]

Embedded ClickHouse (chDB) query engine settings.

| Key | Type | Default | Description |
|---|---|---|---|
| session_data_path | string | "./chdb_data" | chDB session state directory |
| pool_size | integer | 1 | Deprecated: libchdb is a process-global singleton (one real session). Values other than 1 log a warning at startup and are ignored. Control parallelism with server.max_concurrent_queries instead; work is still serialized through the one engine session. |
## [auth]

Authentication configuration.

| Key | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | false | Enable authentication on /write and /query |

When enabled, /write and /query require valid credentials. The public routes (health, metrics, and similar) and the admin-only internal/cluster APIs are documented in Authentication.
## [cardinality]

Limits to prevent unbounded series growth from high-cardinality data.

| Key | Type | Default | Description |
|---|---|---|---|
| max_tag_values_per_measurement | integer | 100000 | Max distinct tag values per tag key per measurement |
| max_measurements_per_database | integer | 10000 | Max measurements per database |

A write that exceeds these limits is rejected with HTTP 422 and a cardinality limit exceeded error.
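Tightening the defaults is a matter of lowering the two keys; the values here are illustrative:

```toml
[cardinality]
max_tag_values_per_measurement = 50000
max_measurements_per_database = 5000
```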
## [cluster]

Master-master peer-to-peer clustering with Raft consensus for schema mutations.

| Key | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | false | Enable cluster mode |
| node_id | integer | 1 | Unique node identifier |
| cluster_addr | string | "127.0.0.1:8086" | Address other nodes use to reach this node |
| peers | string | "" | Deprecated seed list; when empty, use the operator/HTTP membership APIs. See Deep Dive: Clustering. |
| heartbeat_interval_secs | integer | 2 | How often to send heartbeats |
| heartbeat_miss_threshold | integer | 5 | Missed heartbeats before marking a peer disconnected |
| anti_entropy_enabled | boolean | true | When false, the periodic Merkle verify / delta-sync loop is not started |
| anti_entropy_interval_secs | integer | 60 | Merkle tree verification interval |
| replication_log_dir | string | "./replication_log" | RocksDB directory for replication tracking |
| raft_dir | string | "./raft" | RocksDB directory for Raft consensus state |
| sync_max_concurrent_files | integer | 4 | Max concurrent file downloads during node sync |
| replication_max_retries | integer | 5 | Max retries for failed replications |
| replication_queue_depth | integer | 8192 | Bounded outbound replication queue (ingest-sized batches) |
| replication_max_inflight_batches | integer | 8 | Max concurrent outbound replication fan-out rounds |
| replication_max_coalesce_body_bytes | integer | 8388608 | Max bytes when coalescing consecutive WAL batches (same db/rp/precision) |
| replicate_receiver_queue_depth | integer | 1024 | Bounded apply queue on the replicate receiver |
| replicate_receiver_workers | integer | 1 | Deprecated; the receiver uses a single ordered worker |
| replication_truncate_stale_peer_multiplier | integer | 2 | When >0, peers with ack 0 and stale heartbeats are omitted from the truncate barrier (× heartbeat interval) |
| raft_heartbeat_interval_ms | integer | unset | Optional Raft heartbeat interval (ms); uses the internal default if omitted |
| raft_election_timeout_ms | integer | unset | Optional Raft election timeout (ms) |
| raft_snapshot_threshold | integer | unset | Optional number of log entries before a Raft snapshot |
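A minimal cluster fragment for one node, where the hostname and directories are placeholders; peers are joined through the membership APIs rather than the deprecated peers key:

```toml
[cluster]
enabled = true
node_id = 1
cluster_addr = "node1.internal:8086"    # address peers use to reach this node
raft_dir = "/var/lib/hyperbytedb/raft"
replication_log_dir = "/var/lib/hyperbytedb/replication_log"
```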
### [cluster.replication]

Per-node coordinator replication behavior. If the whole block is omitted, mode defaults to async (fire-and-forget fan-out, same as the legacy behavior).

| Key | Type | Default | Description |
|---|---|---|---|
| mode | string | "async" | "async" or "sync_quorum" (await W peer acks before responding to the client) |
| ack_timeout_ms | integer | 5000 | For sync_quorum: max wait for peer acks; on timeout, HTTP 504 and hinted handoff for unacked peers |
| sync_quorum.min_acks | string or integer | "majority" | Peer acks required: "majority" (of the cluster, excluding self) or an explicit count |
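A sketch of enabling quorum-acknowledged writes using the keys above:

```toml
[cluster.replication]
mode = "sync_quorum"
ack_timeout_ms = 5000               # HTTP 504 + hinted handoff past this
sync_quorum.min_acks = "majority"   # or an explicit peer count, e.g. 2
```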
## [logging]

| Key | Type | Default | Description |
|---|---|---|---|
| level | string | "info" | Log level: trace, debug, info, warn, error |
| format | string | "text" | Output format: "text" (human-readable) or "json" (structured) |
## [statement_summary]

Query statement tracking for debugging and observability.

| Key | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true | Enable statement summary tracking |
| max_entries | integer | 1000 | Max recent statements kept in the ring buffer |

When enabled, recently executed statements are accessible via GET /api/v1/statements.
## [hinted_handoff]

Hinted handoff stores writes destined for unreachable peers and replays them when the peer recovers.

| Key | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true | Enable hinted handoff (cluster mode only) |
| max_hints_per_peer | integer | 100000 | Max queued hints per unreachable peer before the oldest are dropped |
| max_hint_age_secs | integer | 3600 | Hints older than this (seconds) are discarded on drain |
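For example, keeping hints around longer for peers with slow recoveries; the values are illustrative:

```toml
[hinted_handoff]
enabled = true
max_hints_per_peer = 100000
max_hint_age_secs = 7200    # retain hints for 2 hours instead of 1
```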
## [rate_limit]

HTTP rate limiting for /write and /query.

| Key | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | false | Enable per-endpoint request rate limiting |
| max_requests_per_second | integer | 0 | Max requests per second per endpoint; 0 = unlimited even when enabled (set a positive value to enforce) |
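Note that enabling the limiter alone enforces nothing; a positive max_requests_per_second is required:

```toml
[rate_limit]
enabled = true
max_requests_per_second = 500   # must be > 0 to actually enforce
```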
## Example: Minimal Single-Node

```toml
[server]
bind_address = "0.0.0.0"
port = 8086

[storage]
data_dir = "./data"
wal_dir = "./wal"
meta_dir = "./meta"

[flush]
interval_secs = 10

[compaction]
enabled = true

[chdb]
session_data_path = "./chdb_data"

[logging]
level = "info"
```
## Example: Production with S3

```toml
[server]
bind_address = "0.0.0.0"
port = 8086
query_timeout_secs = 60
max_concurrent_queries = 32
tls_enabled = true
tls_cert_path = "/etc/hyperbytedb/cert.pem"
tls_key_path = "/etc/hyperbytedb/key.pem"

[storage]
data_dir = "/var/lib/hyperbytedb/data"
wal_dir = "/var/lib/hyperbytedb/wal"
meta_dir = "/var/lib/hyperbytedb/meta"
backend = "s3"

[storage.s3]
bucket = "hyperbytedb-production"
prefix = "data/"
region = "us-east-1"

[flush]
interval_secs = 10
time_bucket_duration = "1h"

[compaction]
enabled = true
interval_secs = 30
target_file_size_mb = 256

[chdb]
session_data_path = "/var/lib/hyperbytedb/chdb"

[auth]
enabled = true

[cardinality]
max_tag_values_per_measurement = 100000
max_measurements_per_database = 10000

[logging]
level = "info"
format = "json"
```
## Example: Environment Variable Overrides

```bash
export HYPERBYTEDB__SERVER__PORT=9090
export HYPERBYTEDB__SERVER__QUERY_TIMEOUT_SECS=60
export HYPERBYTEDB__STORAGE__DATA_DIR=/var/lib/hyperbytedb/data
export HYPERBYTEDB__STORAGE__BACKEND=s3
export HYPERBYTEDB__STORAGE__S3__BUCKET=my-hyperbytedb-bucket
export HYPERBYTEDB__SERVER__MAX_CONCURRENT_QUERIES=32
export HYPERBYTEDB__LOGGING__LEVEL=debug
export HYPERBYTEDB__COMPACTION__SELF_REPAIR_ENABLED=true
```
## See Also
- Installation — Deployment methods
- Administration — Operational tuning
- Authentication — Enabling auth, credentials, admin for internal routes
- Advanced features — Clustering, TLS, S3