Common workflows¶
Step-by-step guides for common operational tasks with HyperbyteDB.
Migrating from InfluxDB 1.x¶
HyperbyteDB is designed as a drop-in replacement for InfluxDB 1.x. Most clients, libraries, Telegraf, and Grafana work without modification.
Using the backfill tool¶
HyperbyteDB ships a hyperbytedb-backfill binary for migrating data from InfluxDB 1.x:
cargo build --release --bin hyperbytedb-backfill
# Migrate from a running InfluxDB instance
./target/release/hyperbytedb-backfill \
--influx-url http://old-influx:8086 \
--hyperbytedb-url http://new-hyperbytedb:8086 \
--database mydb \
--start "2024-01-01T00:00:00Z" \
--end "2024-12-31T23:59:59Z"
The backfill tool:

1. Queries InfluxDB in time-chunked SELECT statements.
2. Converts results to line protocol.
3. POSTs them to HyperbyteDB via /write.
4. Supports --batch-size, --chunk-interval, and an optional --dump-dir for saving raw line protocol to disk.
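The time-chunking in step 1 can be sketched as follows. This is a minimal illustration of the strategy, not the tool's actual code; the `chunk_ranges` helper is hypothetical:

```python
from datetime import datetime, timedelta

def chunk_ranges(start: datetime, end: datetime, interval: timedelta):
    """Split [start, end) into consecutive windows of at most `interval`,
    mirroring what --chunk-interval does: each window becomes one
    bounded SELECT against the source InfluxDB."""
    cursor = start
    while cursor < end:
        upper = min(cursor + interval, end)
        yield cursor, upper
        cursor = upper

# One day in 6-hour chunks -> four SELECTs, e.g.
#   SELECT * FROM "cpu" WHERE time >= '<lower>' AND time < '<upper>'
chunks = list(chunk_ranges(
    datetime(2024, 1, 1), datetime(2024, 1, 2), timedelta(hours=6)))
```

Bounded windows keep each query's result set small enough to convert and POST in one batch, and let the migration resume from the last completed window after a failure.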
Replay from line protocol files¶
If you have exported line protocol files:
./target/release/hyperbytedb-backfill \
--from-dir /path/to/line-protocol-files \
--hyperbytedb-url http://localhost:8086 \
--database mydb
What works identically¶
- Line protocol write format and semantics
- `/write` and `/query` endpoint behavior
- `/ping` response for client library connection tests
- JSON response shapes (`{"results":[...]}`)
- `epoch` parameter for timestamp formatting
- Authentication (query params, Basic, Token; internal APIs need admin)
- Gzip write support
- Chunked query responses
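Because the line protocol semantics match, clients that build lines by hand keep working. Below is a simplified sketch of the InfluxDB 1.x escaping rules (not a client library: tags escape commas, spaces, and equals signs; string field values escape backslashes and quotes; integer fields carry an `i` suffix):

```python
def escape_tag(s: str) -> str:
    # Tag keys and values escape commas, spaces, and equals signs.
    return s.replace(",", "\\,").replace(" ", "\\ ").replace("=", "\\=")

def line(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Build one line of line protocol:
    measurement[,tag=val...] field=val[,field=val...] timestamp_ns"""
    m = measurement.replace(",", "\\,").replace(" ", "\\ ")
    tag_part = "".join(f",{escape_tag(k)}={escape_tag(v)}"
                       for k, v in tags.items())
    field_parts = []
    for k, v in fields.items():
        key = escape_tag(k)
        if isinstance(v, bool):        # must check before int: bool subclasses int
            field_parts.append(f"{key}={'true' if v else 'false'}")
        elif isinstance(v, str):       # strings are quoted, with \ and " escaped
            escaped = v.replace("\\", "\\\\").replace('"', '\\"')
            field_parts.append(f'{key}="{escaped}"')
        elif isinstance(v, int):       # integers get an 'i' suffix
            field_parts.append(f"{key}={v}i")
        else:                          # floats are written bare
            field_parts.append(f"{key}={v}")
    return f"{m}{tag_part} {','.join(field_parts)} {ts_ns}"

print(line("cpu", {"host": "web 1"},
           {"usage_idle": 98.2, "core": 4}, 1700000000000000000))
# prints: cpu,host=web\ 1 usage_idle=98.2,core=4i 1700000000000000000
```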
Known differences¶
| Area | Difference |
|---|---|
| Query engine | ClickHouse (chDB) instead of TSM; minor floating-point edge cases |
| Storage format | Parquet instead of TSM shards |
| `fill(previous/linear)` | Implemented via ClickHouse INTERPOLATE; may differ at series boundaries |
| `SELECT INTO` with regex | Not supported; use explicit measurement names |
| Permissions | Admin vs non-admin only; no per-database GRANT/REVOKE |
| Subscriptions | Not supported |
Integrating Telegraf¶
Telegraf works with HyperbyteDB out of the box using its InfluxDB v1 output plugin.
telegraf.conf¶
[[outputs.influxdb]]
urls = ["http://hyperbytedb:8086"]
database = "telegraf"
skip_database_creation = false
timeout = "5s"
[[inputs.cpu]]
percpu = true
totalcpu = true
[[inputs.mem]]
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs"]
[[inputs.net]]
[[inputs.system]]
Point Telegraf at HyperbyteDB just as you would at InfluxDB. With skip_database_creation = false, Telegraf attempts to create the database on startup if it doesn't already exist.
Integrating Grafana¶
HyperbyteDB works as an InfluxDB v1 datasource in Grafana.
Add datasource¶
- Open Grafana → Configuration → Data Sources → Add data source.
- Select InfluxDB.
- Set the URL to http://hyperbytedb:8086.
- Set the database name (e.g., telegraf).
- If auth is enabled, enter credentials in the InfluxDB Details section.
- Click Save & Test.
Docker Compose (pre-configured)¶
The included docker-compose.yml ships Grafana with pre-provisioned datasources:

- An InfluxDB datasource pointing to HyperbyteDB
- A Prometheus datasource for internal metrics
Grafana is accessible at http://localhost:3000 with login admin/admin.
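If you run Grafana outside the bundled Compose setup, the same datasource can be provisioned from a file instead of the UI. A sketch using Grafana's standard datasource-provisioning format (the file path, datasource name, and database are illustrative):

```yaml
# /etc/grafana/provisioning/datasources/hyperbytedb.yml
apiVersion: 1
datasources:
  - name: HyperbyteDB
    type: influxdb        # HyperbyteDB speaks the InfluxDB v1 API
    access: proxy
    url: http://hyperbytedb:8086
    database: telegraf
    isDefault: true
```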
Setting Up Monitoring¶
Prometheus scrape config¶
Add HyperbyteDB as a Prometheus target:
scrape_configs:
- job_name: 'hyperbytedb'
static_configs:
- targets: ['hyperbytedb:8086']
metrics_path: /metrics
scrape_interval: 15s
Key metrics to watch¶
| Metric | Type | What it tells you |
|---|---|---|
| `hyperbytedb_write_requests_total` | counter | Write throughput |
| `hyperbytedb_query_requests_total` | counter | Query throughput |
| `hyperbytedb_query_duration_seconds` | histogram | Query latency (P50/P95/P99) |
| `hyperbytedb_ingestion_points_total` | counter | Points ingested |
| `hyperbytedb_flush_duration_seconds` | histogram | WAL-to-Parquet flush health |
| `hyperbytedb_parquet_files_count` | gauge | File count per measurement |
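Since the duration metrics are histograms, quantiles are computed at query time with histogram_quantile over the _bucket series (standard Prometheus convention; the _bucket suffix is assumed from the histogram type above):

```promql
# P99 query latency over the last 5 minutes
histogram_quantile(0.99,
  rate(hyperbytedb_query_duration_seconds_bucket[5m]))
```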
Alert recommendations¶
| Condition | Alert |
|---|---|
| `rate(hyperbytedb_query_errors_total[5m]) > 0` | Query failures |
| `histogram_quantile(0.99, rate(hyperbytedb_query_duration_seconds_bucket[5m])) > 10` | Slow queries |
| `rate(hyperbytedb_write_errors_total[5m]) > 0` | Write failures |
| `histogram_quantile(0.99, rate(hyperbytedb_flush_duration_seconds_bucket[5m])) > 30` | Slow flushes |
Backup and Restore¶
Create a backup¶
The backup contains:

- wal/ — RocksDB checkpoint of the WAL
- meta/ — RocksDB checkpoint of metadata
- data/ — Copy of all Parquet data files
- manifest.json — Timestamp, WAL sequence, and file list
Backups can be taken while HyperbyteDB is running. RocksDB checkpoints provide a consistent snapshot without stopping writes.
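This page does not show the backup invocation itself; assuming the CLI mirrors the restore subcommand, taking a backup might look like the following (the `backup` subcommand and `--output` flag are assumptions by symmetry with `restore --input`):

```shell
# Hypothetical invocation -- subcommand and flag names assumed,
# mirroring `hyperbytedb restore --input ...`
hyperbytedb backup --output /backups/hyperbytedb-$(date +%Y%m%d)
```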
Restore from backup¶
# 1. Stop HyperbyteDB
# 2. Restore (overwrites data_dir, wal_dir, meta_dir)
hyperbytedb restore --input /backups/hyperbytedb-20240115
# 3. Start HyperbyteDB
hyperbytedb serve
Warning: Restore overwrites the configured `data_dir`, `wal_dir`, and `meta_dir`. Ensure the config file points to the correct directories.
Downsampling with Continuous Queries¶
A typical pattern for long-term data retention:
- Raw data — kept for 7 days in the default retention policy.
- 5-minute rollups — kept for 90 days.
- 1-hour rollups — kept indefinitely.
-- Create retention policies
CREATE RETENTION POLICY "7d" ON "mydb" DURATION 7d REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "90d" ON "mydb" DURATION 90d REPLICATION 1
CREATE RETENTION POLICY "forever" ON "mydb" DURATION INF REPLICATION 1
-- Create downsampling CQs
CREATE CONTINUOUS QUERY "cq_5m" ON "mydb"
BEGIN
SELECT mean("usage_idle") AS "usage_idle", mean("usage_user") AS "usage_user"
INTO "mydb"."90d"."cpu_5m"
FROM "cpu"
GROUP BY time(5m), *
END
CREATE CONTINUOUS QUERY "cq_1h" ON "mydb"
BEGIN
SELECT mean("usage_idle") AS "usage_idle", mean("usage_user") AS "usage_user"
INTO "mydb"."forever"."cpu_1h"
FROM "cpu"
GROUP BY time(1h), *
END
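To confirm the rollups are filling in, the target measurements named in the CQs above can be queried directly:

```sql
-- Spot-check the most recent 5-minute rollup rows
SELECT * FROM "mydb"."90d"."cpu_5m" ORDER BY time DESC LIMIT 5
```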
See Also¶
- Administration — Backup procedures, cluster operations, compaction tuning
- Troubleshooting — Common problems and fixes
- API & InfluxQL Reference — Full syntax reference