Storage – RBD (Ceph)
NetActuate RBD (Ceph) provides tenant-isolated block storage that can be used either:
- Directly on a VM (map + mount an RBD image), or
- Through Kubernetes using the Ceph CSI RBD driver (rbd.csi.ceph.com).
Before you start, replace placeholders like <TENANT_ID>, <CLUSTER_ID>, <MON1>, and <YOUR_SECRET_KEY> with your real values.
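If you keep your issued values in environment variables, a small `sed` pass can fill the placeholders in any snippet or manifest from this page. This is a convenience sketch, not a NetActuate tool; the variable values below are illustrative:

```shell
# Illustrative values -- replace with the ones you were actually issued.
TENANT_ID="acme01"
CLUSTER_ID="ceph-prod"
MON1="10.0.0.11"

fill_placeholders() {
  # Substitute the documentation placeholders in stdin.
  sed -e "s/<TENANT_ID>/${TENANT_ID}/g" \
      -e "s/<CLUSTER_ID>/${CLUSTER_ID}/g" \
      -e "s/<MON1>/${MON1}/g"
}

# Example: fill a single line from this page.
echo 'ns: namespace-<TENANT_ID>, mon: <MON1>:6789' | fill_placeholders
```

Pipe a whole saved manifest through `fill_placeholders` the same way (`fill_placeholders < template.yaml > filled.yaml`).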
- VM Access (map + mount)
- Kubernetes (Ceph CSI)
VM Access (Ceph RBD Block Storage)
What you need
| Item | Notes / example |
|---|---|
| Ceph monitors | mon_host = <MON1>,<MON2>,<MON3> |
| Ceph user ID | volume-client-<TENANT_ID> |
| Ceph user key | <YOUR_SECRET_KEY> |
| RADOS namespace | volume-namespace-<TENANT_ID> |
| Pool | global-block-pool |
| Image/volume name | image-<TENANT_ID> (or the volume name you were issued) |
Install required packages
Install the Ceph client tools on your VM (ceph-common provides the rbd and ceph CLIs; the full ceph package would also pull in daemon components you do not need on a client):
sudo apt update
sudo apt install -y ceph-common
Authentication
Create a keyring file with your key:
sudo tee /etc/ceph/ceph.volume-client-<TENANT_ID>.keyring <<'EOF'
[volume-client-<TENANT_ID>]
key = <YOUR_SECRET_KEY>
EOF
sudo chmod 600 /etc/ceph/ceph.volume-client-<TENANT_ID>.keyring
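Before using the keyring, a quick local sanity check catches the two most common mistakes: a placeholder that was never replaced, and permissions loose enough that Ceph tools may refuse the file. The helper below is an illustrative sketch, not a Ceph command:

```shell
# Hypothetical helper: verify a keyring file looks sane before using it.
check_keyring() {
  f="$1"
  # Fail if the documentation placeholder was never replaced.
  if grep -q '<YOUR_SECRET_KEY>' "$f"; then
    echo "ERROR: placeholder key still present in $f" >&2
    return 1
  fi
  # Fail if the file is readable by group/other (expected mode 600).
  perms="$(stat -c '%a' "$f")"
  if [ "$perms" != "600" ]; then
    echo "ERROR: $f has mode $perms, expected 600" >&2
    return 1
  fi
  echo "OK: $f"
}

# Demo against a temporary file standing in for the real keyring.
tmp="$(mktemp)"
printf '[volume-client-demo]\nkey = not-a-real-key\n' > "$tmp"
chmod 600 "$tmp"
check_keyring "$tmp"
rm -f "$tmp"
```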
Create /etc/ceph/ceph.conf with monitor addresses:
sudo tee /etc/ceph/ceph.conf <<'EOF'
[global]
mon_host = <MON1>,<MON2>,<MON3>
EOF
Map and mount the RBD volume
Map the RBD image to the host:
sudo rbd map global-block-pool/volume-namespace-<TENANT_ID>/image-<TENANT_ID> \
  --id volume-client-<TENANT_ID> \
  --keyring /etc/ceph/ceph.volume-client-<TENANT_ID>.keyring
The full pool/namespace/image spec avoids any ambiguity between a --pool flag and a namespaced image spec, and matches the device path used below.
Format the device (first time only):
sudo mkfs.ext4 /dev/rbd/global-block-pool/volume-namespace-<TENANT_ID>/image-<TENANT_ID>
Create a mount point and mount it:
sudo mkdir -p /mnt/rbd
sudo mount /dev/rbd/global-block-pool/volume-namespace-<TENANT_ID>/image-<TENANT_ID> /mnt/rbd
Make it persistent across reboots. Note that an fstab entry only works if the image is already mapped when the mount runs, so mark the mount as a network device and arrange for the image to be mapped at boot (for example with the rbdmap service):
echo "/dev/rbd/global-block-pool/volume-namespace-<TENANT_ID>/image-<TENANT_ID> /mnt/rbd ext4 defaults,noatime,_netdev 0 0" | sudo tee -a /etc/fstab
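The usual way to map the image at boot is the rbdmap service shipped with ceph-common: list the mapping in /etc/ceph/rbdmap and enable the unit. The fragment below is a sketch of that file; the exact image-spec syntax accepted (in particular the pool/namespace/image form) can vary by Ceph release, so confirm against `man rbdmap` on your system:

```
# /etc/ceph/rbdmap -- one "image-spec  options" entry per line (sketch)
global-block-pool/volume-namespace-<TENANT_ID>/image-<TENANT_ID> id=volume-client-<TENANT_ID>,keyring=/etc/ceph/ceph.volume-client-<TENANT_ID>.keyring
```

Then enable the service with `sudo systemctl enable --now rbdmap` so mapping happens before remote filesystems are mounted.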
Key points
All block volumes live in the global-block-pool pool. Tenant isolation is enforced by your issued Ceph user and namespace scoping.
Kubernetes (Ceph CSI Driver)
This section shows a working pattern that installs the Ceph CSI RBD driver into a dedicated namespace and then creates tenant-specific objects (Secret, StorageClass, PVC, and a test Pod) in your workload namespace.
What you need
| Item | Notes / example |
|---|---|
| Ceph CSI version | Example: quay.io/cephcsi/cephcsi:v3.15.1 |
| Ceph cluster ID | <CLUSTER_ID> |
| Monitor(s) | <MON1>:6789 (add more if provided) |
| Pool | global-block-pool |
| Tenant workload namespace | namespace-<TENANT_ID> |
| Tenant Ceph user | block-namespace-client-<TENANT_ID> |
| Tenant Ceph key | <YOUR_SECRET_KEY> |
| Tenant RADOS namespace | namespace-<TENANT_ID> |
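All of the tenant-scoped names in the table are derived from the tenant ID by fixed prefixes, so it can help to generate them once and reuse them everywhere. A convenience sketch (the prefixes are taken from the table above; the tenant ID is illustrative):

```shell
TENANT_ID="acme01"  # illustrative value -- use your real tenant ID

# Derived names, per the table above.
K8S_NAMESPACE="namespace-${TENANT_ID}"
CEPH_USER="block-namespace-client-${TENANT_ID}"
RADOS_NAMESPACE="namespace-${TENANT_ID}"
STORAGE_CLASS="ceph-rbd-sc-${TENANT_ID}"

echo "$K8S_NAMESPACE $CEPH_USER $STORAGE_CLASS"
```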
Complete install + tenant manifest (template)
The manifest below is intentionally self-contained and includes:
- CSI driver namespace (ceph-csi-rbd)
- ConfigMaps + RBAC + CSIDriver
- Controller Deployment + Node DaemonSet
- Tenant Secret + StorageClass + PVC + test Pod
Ceph CSI RBD install + tenant manifest (copy/paste template):
###############################################################################
# Ceph CSI RBD (external) + tenant storage for namespace-<TENANT_ID>
# ClusterID: <CLUSTER_ID>
# MON: <MON1>:6789
# Pool: global-block-pool
# RADOS namespace: namespace-<TENANT_ID>
# Ceph user: block-namespace-client-<TENANT_ID>
###############################################################################
---
apiVersion: v1
kind: Namespace
metadata:
  name: ceph-csi-rbd
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
###############################################################################
# CONFIGMAPS (must be in ceph-csi-rbd)
###############################################################################
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: ceph-csi-rbd
data:
  config.json: |
    [
      {
        "clusterID": "<CLUSTER_ID>",
        "monitors": [
          "<MON1>:6789"
        ],
        "rbd": {
          "radosNamespace": "namespace-<TENANT_ID>"
        }
      }
    ]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-config
  namespace: ceph-csi-rbd
data:
  ceph.conf: |
    [global]
    mon_host = <MON1>:6789
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: ceph-csi-rbd
data:
  config.json: "{}"
###############################################################################
# RBAC
###############################################################################
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner
  namespace: ceph-csi-rbd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
  namespace: ceph-csi-rbd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    namespace: ceph-csi-rbd
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    namespace: ceph-csi-rbd
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
###############################################################################
# CSIDriver
###############################################################################
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: rbd.csi.ceph.com
spec:
  attachRequired: true
  podInfoOnMount: false
  fsGroupPolicy: File
###############################################################################
# CSI CONTROLLER (Deployment)
###############################################################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-rbdplugin-provisioner
  namespace: ceph-csi-rbd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-rbdplugin-provisioner
  template:
    metadata:
      labels:
        app: csi-rbdplugin-provisioner
    spec:
      serviceAccountName: rbd-csi-provisioner
      volumes:
        - name: socket-dir
          emptyDir: { medium: "Memory" }
        - name: ceph-etc
          emptyDir: {}
        - name: ceph-config
          configMap: { name: ceph-config }
        - name: ceph-csi-config
          configMap: { name: ceph-csi-config }
        - name: ceph-csi-encryption-kms-config
          configMap: { name: ceph-csi-encryption-kms-config }
        - name: keys-tmp-dir
          emptyDir: { medium: "Memory" }
      containers:
        - name: csi-rbdplugin
          image: quay.io/cephcsi/cephcsi:v3.15.1
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=rbd"
            - "--controllerserver=true"
            - "--endpoint=unix:///csi/csi-provisioner.sock"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--pidlimit=-1"
            - "--setmetadata=true"
          env:
            - name: NODE_ID
              valueFrom: { fieldRef: { fieldPath: spec.nodeName } }
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-etc
              mountPath: /etc/ceph
            - name: ceph-config
              mountPath: /etc/ceph/ceph.conf
              subPath: ceph.conf
              readOnly: true
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v5.1.0
          args:
            - "--csi-address=unix:///csi/csi-provisioner.sock"
            - "--v=2"
            - "--timeout=150s"
            - "--leader-election=true"
            - "--extra-create-metadata=true"
            - "--default-fstype=ext4"
            - "--immediate-topology=false"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: registry.k8s.io/sig-storage/csi-attacher:v4.8.0
          args:
            - "--v=2"
            - "--csi-address=/csi/csi-provisioner.sock"
            - "--leader-election=true"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: registry.k8s.io/sig-storage/csi-resizer:v1.13.1
          args:
            - "--csi-address=unix:///csi/csi-provisioner.sock"
            - "--v=2"
            - "--timeout=150s"
            - "--leader-election=true"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: liveness-prometheus
          image: quay.io/cephcsi/cephcsi:v3.15.1
          args:
            - "--type=liveness"
            - "--endpoint=unix:///csi/csi-provisioner.sock"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
###############################################################################
# CSI NODEPLUGIN (DaemonSet) - includes driver-registrar
###############################################################################
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-rbdplugin
  namespace: ceph-csi-rbd
spec:
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      labels:
        app: csi-rbdplugin
    spec:
      serviceAccountName: rbd-csi-nodeplugin
      hostNetwork: true
      hostPID: true
      dnsPolicy: ClusterFirstWithHostNet
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
        - name: host-dev
          hostPath: { path: /dev }
        - name: host-sys
          hostPath: { path: /sys }
        - name: host-mount
          hostPath: { path: /run/mount }
        - name: lib-modules
          hostPath: { path: /lib/modules }
        - name: ceph-etc
          emptyDir: {}
        - name: ceph-config
          configMap: { name: ceph-config }
        - name: ceph-csi-config
          configMap: { name: ceph-csi-config }
        - name: ceph-csi-encryption-kms-config
          configMap: { name: ceph-csi-encryption-kms-config }
        - name: keys-tmp-dir
          emptyDir: { medium: "Memory" }
      containers:
        - name: csi-rbdplugin
          image: quay.io/cephcsi/cephcsi:v3.15.1
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
            capabilities:
              add: ["SYS_ADMIN"]
          args:
            - "--nodeid=$(NODE_ID)"
            - "--pluginpath=/var/lib/kubelet/plugins"
            - "--stagingpath=/var/lib/kubelet/plugins/kubernetes.io/csi/"
            - "--type=rbd"
            - "--nodeserver=true"
            - "--endpoint=unix:///csi/csi.sock"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
          env:
            - name: NODE_ID
              valueFrom: { fieldRef: { fieldPath: spec.nodeName } }
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins
              mountPropagation: Bidirectional
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
            - name: host-dev
              mountPath: /dev
            - name: host-sys
              mountPath: /sys
            - name: host-mount
              mountPath: /run/mount
            - name: lib-modules
              mountPath: /lib/modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-etc
              mountPath: /etc/ceph
            - name: ceph-config
              mountPath: /etc/ceph/ceph.conf
              subPath: ceph.conf
              readOnly: true
        - name: driver-registrar
          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
          args:
            - "--v=2"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom: { fieldRef: { fieldPath: spec.nodeName } }
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: liveness-prometheus
          image: quay.io/cephcsi/cephcsi:v3.15.1
          args:
            - "--type=liveness"
            - "--endpoint=unix:///csi/csi.sock"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          ports:
            - containerPort: 8680
              name: http-metrics
              protocol: TCP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
###############################################################################
# TENANT STORAGE OBJECTS (namespace-<TENANT_ID>)
###############################################################################
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-<TENANT_ID>
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: namespace-<TENANT_ID>
type: Opaque
stringData:
  userID: block-namespace-client-<TENANT_ID>
  userKey: <YOUR_SECRET_KEY>
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-sc-<TENANT_ID>
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: "<CLUSTER_ID>"
  pool: "global-block-pool"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: namespace-<TENANT_ID>
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: namespace-<TENANT_ID>
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
mountOptions:
  - discard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-volume-<TENANT_ID>
  namespace: namespace-<TENANT_ID>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd-sc-<TENANT_ID>
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-<TENANT_ID>
  namespace: namespace-<TENANT_ID>
spec:
  containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo 'mounted:'; df -h /mnt/rbd || true; sleep 3600"]
      volumeMounts:
        - name: rbd-vol
          mountPath: /mnt/rbd
  volumes:
    - name: rbd-vol
      persistentVolumeClaim:
        claimName: test-volume-<TENANT_ID>
Apply
Save the manifest as ceph-csi-rbd-tenant-<TENANT_ID>.yaml, replace placeholders, then apply:
kubectl apply -f ceph-csi-rbd-tenant-<TENANT_ID>.yaml
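A common failure mode is applying the manifest with a placeholder still in it: Kubernetes rejects `<` and `>` in object names with an opaque validation error. A quick grep before applying catches this (illustrative helper, not part of kubectl):

```shell
# Abort if any documentation placeholders remain in the manifest file.
check_placeholders() {
  if grep -nE '<(TENANT_ID|CLUSTER_ID|MON[0-9]|YOUR_SECRET_KEY)>' "$1"; then
    echo "ERROR: unreplaced placeholders in $1" >&2
    return 1
  fi
  echo "OK: no placeholders left in $1"
}

# Demo against a temporary stand-in for the real manifest.
tmp="$(mktemp)"
echo 'clusterID: "ceph-prod"' > "$tmp"
check_placeholders "$tmp"
rm -f "$tmp"
```

Run it as `check_placeholders ceph-csi-rbd-tenant-<TENANT_ID>.yaml` before `kubectl apply`.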
Validate
Confirm CSI pods are running:
kubectl -n ceph-csi-rbd get pods
Confirm the PVC binds:
kubectl -n namespace-<TENANT_ID> get pvc
Confirm the pod sees the mount:
kubectl -n namespace-<TENANT_ID> exec -it test-pod-<TENANT_ID> -- sh -c "df -h /mnt/rbd && mount | grep rbd || true"
Need Help?
For guidance on complex setups, connect with a NetActuate infrastructure expert at support@netactuate.com or open a support ticket from the portal: portal.netactuate.com.