Description
Search before asking
- I searched in the issues and found nothing similar.
Motivation
RocksDB's flush and compaction operations can generate significant I/O pressure, potentially impacting:
- System stability: Uncontrolled I/O may saturate disk bandwidth, affecting other operations
- Query latency: High I/O contention can degrade read/write performance
- Resource predictability: Difficult to manage multi-tenant TabletServer resource allocation
RocksDB provides a built-in rate limiter mechanism to control flush and compaction write rates, but Fluss currently doesn't expose this capability to users.
Solution
1. Shared Rate Limiter Architecture
- Server-level rate limiting: A single rate limiter shared across all RocksDB instances on each TabletServer
- Resource isolation: Prevents any single table from consuming excessive I/O bandwidth
- Cost efficiency: Reduces memory overhead compared to per-instance rate limiters
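The shared-limiter wiring could look like the following sketch. This is a hypothetical illustration (the class and method names are not Fluss APIs): a single `org.rocksdb.RateLimiter` is created once per TabletServer and attached to the `Options` of every RocksDB instance, so the flush/compaction I/O of all tables draws from one byte budget.

```java
import org.rocksdb.Options;
import org.rocksdb.RateLimiter;
import org.rocksdb.RocksDB;

// Hypothetical sketch: one RateLimiter created at TabletServer startup and
// shared by every RocksDB instance opened on that server.
public class SharedRateLimiterSketch {
    static {
        RocksDB.loadLibrary();
    }

    // Single limiter shared by all KV tablets on this server.
    private final RateLimiter sharedLimiter;

    public SharedRateLimiterSketch(long bytesPerSecond) {
        this.sharedLimiter = new RateLimiter(bytesPerSecond);
    }

    // Called once per table when its RocksDB instance is opened. Passing the
    // same limiter object to every Options makes RocksDB throttle background
    // writes across all instances together.
    public Options applyTo(Options options) {
        return options.setRateLimiter(sharedLimiter);
    }

    public long currentBytesPerSecond() {
        return sharedLimiter.getBytesPerSecond();
    }
}
```

Because only one `RateLimiter` object exists per server, the memory cost stays constant no matter how many tables are hosted.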
2. Configuration Options
New configuration: kv.shared-rate-limiter-bytes-per-sec
- Type: MemorySize
- Default: 0b (disabled)
- Scope: Cluster-level, dynamically updatable
- Example values: 100MB, 500MB, 1GB
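To make the MemorySize semantics concrete, here is a minimal, purely illustrative sketch of how a value such as `200mb` resolves to a bytes-per-second number for the limiter. It is not the actual Fluss parser (which supports more units and stricter validation); the class name is invented for this example.

```java
// Illustrative only: converts a human-readable size string such as "200mb"
// into bytes, mirroring what a MemorySize-typed option resolves to.
public class MemorySizeSketch {
    public static long parseToBytes(String value) {
        String v = value.trim().toLowerCase();
        long multiplier = 1L;
        if (v.endsWith("kb")) {
            multiplier = 1024L;
            v = v.substring(0, v.length() - 2);
        } else if (v.endsWith("mb")) {
            multiplier = 1024L * 1024;
            v = v.substring(0, v.length() - 2);
        } else if (v.endsWith("gb")) {
            multiplier = 1024L * 1024 * 1024;
            v = v.substring(0, v.length() - 2);
        } else if (v.endsWith("b")) {
            v = v.substring(0, v.length() - 1);
        }
        return Long.parseLong(v.trim()) * multiplier;
    }
}
```

Under this scheme `200mb` resolves to 209,715,200 bytes/sec, and `0b` resolves to 0, which disables the limiter.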
Static configuration in conf/server.yaml:

```yaml
kv.shared-rate-limiter-bytes-per-sec: 200mb
```

3. Dynamic Configuration Support
Rate limiter can be updated at runtime without server restart:
- Persistence: Changes persisted to ZooKeeper
- Cluster-wide: Applies to all TabletServers
- Zero-downtime: Thread-safe updates via RocksDB API
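The zero-downtime update path can be sketched as follows. This is a hypothetical illustration (class and method names are invented): when the cluster-level config change arrives at a TabletServer, the new rate is applied to the live shared limiter via RocksDB's `RateLimiter.setBytesPerSecond`, which is safe to call while flushes and compactions are running, so no RocksDB instance needs to be reopened.

```java
import org.rocksdb.RateLimiter;
import org.rocksdb.RocksDB;

// Hypothetical sketch of the runtime-update path for the shared limiter.
public class DynamicRateUpdateSketch {
    static {
        RocksDB.loadLibrary();
    }

    private final RateLimiter sharedLimiter;

    public DynamicRateUpdateSketch(long initialBytesPerSecond) {
        this.sharedLimiter = new RateLimiter(initialBytesPerSecond);
    }

    // Invoked when the dynamic config notification arrives on this server.
    public void onConfigChange(long newBytesPerSecond) {
        // RocksDB requires a positive rate; a configured value of 0
        // ("disabled") would instead be handled by detaching the limiter
        // from new instances (not shown in this sketch).
        if (newBytesPerSecond > 0) {
            sharedLimiter.setBytesPerSecond(newBytesPerSecond);
        }
    }

    public long currentBytesPerSecond() {
        return sharedLimiter.getBytesPerSecond();
    }
}
```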
4. Flink Stored Procedures
SQL-based management interface for operators:
Set rate limiter:

```sql
-- Named argument (Flink 1.19+)
CALL fluss_catalog.sys.set_shared_rocksdb_rate_limiter(rate_limit => '200MB');

-- Indexed argument
CALL fluss_catalog.sys.set_shared_rocksdb_rate_limiter('500MB');

-- Disable rate limiter
CALL fluss_catalog.sys.set_shared_rocksdb_rate_limiter('0MB');
```

Query current setting:

```sql
CALL fluss_catalog.sys.get_shared_rocksdb_rate_limiter();
```
Anything else?
No response
Willingness to contribute
- I'm willing to submit a PR!