Dedicated Rook Ceph Cluster
05.2026
I’ve set up a dedicated K8s cluster running Rook Ceph, and successfully connected it as an external storage provider to both my Admin and Workload K8s clusters.
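For context, the hookup roughly follows Rook's documented external-cluster flow. The script names below come from Rook's `deploy/examples` directory; the pool name is a placeholder, not my actual pool, and the `CephCluster` manifest is trimmed to the essentials:

```shell
# Sketch of Rook's external-cluster flow (script names per Rook's
# deploy/examples; the pool name is a placeholder).

# 1. On the storage cluster: export connection details for consumers.
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool --format bash

# 2. On each consumer cluster: source the exported variables, then run the
#    companion script to create the Secrets/ConfigMaps the CSI driver needs.
. import-external-cluster.sh

# 3. Point a minimal CephCluster resource at the external cluster.
cat <<EOF | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph-external
  namespace: rook-ceph-external
spec:
  external:
    enable: true
EOF
```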
The Breakdown:
- The Engine: A dedicated 3-node Ceph cluster (managed by Rook) running smoothly with a HEALTH_OK status. It manages OSDs, Monitors, and Managers across the worker nodes
- Total Isolation: In the Ceph dashboard, you can see I’ve carved out dedicated storage pools for block (RBD) and file (CephFS) storage—strictly separated for admin and workload traffic
- CSI Magic: The Container Storage Interface (CSI) is doing the heavy lifting. As shown in the terminal outputs, both the Admin and Workload clusters are dynamically provisioning their own PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) directly from the external Ceph cluster
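The per-cluster pool split can be sketched as a `CephBlockPool` plus a `StorageClass` per consumer. Names here are hypothetical, and the StorageClass omits the CSI secret references a real one needs:

```shell
# Hypothetical names illustrating the pool-per-consumer split; the real
# StorageClass also needs the CSI provisioner/node secret parameters.
cat <<EOF | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: workload-rbd
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3          # one replica per storage node
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: workload-rbd
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: workload-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
EOF
```

The same pattern repeats with a second pool and StorageClass for the Admin cluster, which is what keeps the two traffic classes strictly separated at the Ceph layer.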
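On the consumer side, dynamic provisioning then needs nothing more than an ordinary claim. A minimal sketch, assuming a StorageClass named `workload-rbd` exists:

```shell
# A claim like this on the Workload cluster is enough for the CSI driver to
# carve an RBD image out of the external pool (names are hypothetical).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: workload-rbd
  resources:
    requests:
      storage: 5Gi
EOF

# The PV appears and binds without any manual volume creation.
kubectl get pvc app-data
```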
Why go through the trouble of decoupling storage?
- Blast Radius Reduction: If I completely destroy and rebuild my Workload K8s cluster, my persistent data doesn't go down with the ship. It lives safely on the dedicated storage nodes
- Resource Management: Heavy storage I/O on the Ceph cluster won't starve my application pods of CPU or RAM on the Workload cluster
- Enterprise Reality: Managing stateful applications in Kubernetes is notoriously tricky. Offloading state to an external, distributed, software-defined storage layer is exactly how the big players do it
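One knob worth noting for the blast-radius point: a PV's reclaim policy decides whether the backing RBD image outlives the consumer cluster. Kubernetes documents patching it on a live PV; the PV name below is a placeholder:

```shell
# Switch an existing PV to Retain so deleting the PVC (or rebuilding the
# whole Workload cluster) leaves the RBD image intact in Ceph.
# "pvc-xxxx" is a placeholder for a real PV name from `kubectl get pv`.
kubectl patch pv pvc-xxxx \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Retained volumes can later be re-attached to a rebuilt cluster as static PVs, which is exactly the "data doesn't go down with the ship" property described above.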