StorageClasses

Our Kubernetes cluster comes with a couple of predefined storage classes. If none of them meets your needs, custom ones can be created. For more information, take a look at the Cinder CSI Driver documentation, or feel free to contact our support team for assistance.

$ kubectl get sc
NAME                  PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
encrypted             cinder.csi.openstack.org   Delete          Immediate              true
encrypted-high-iops   cinder.csi.openstack.org   Delete          Immediate              true
high-iops             cinder.csi.openstack.org   Delete          Immediate              true
nws-storage           cinder.csi.openstack.org   Delete          Immediate              true
standard (default)    cinder.csi.openstack.org   Delete          Immediate              true

standard (Default)

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: ext4
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate

The StorageClass standard, as indicated by its annotation, is the default class that is used when no SC is specified. It immediately provisions an ext4-formatted volume in OpenStack and deletes it when the PVC is deleted. Volume expansion is also supported: you can increase the size of a PVC and let Kubernetes handle the resize. The IOPS limit is set to 1000 IOPS, with a boost of up to 2000 IOPS for 60s.
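
For illustration, a minimal PVC that relies on the default class could look like this (name and size are placeholders). Since it sets no storageClassName, it is provisioned by standard:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

To grow the volume later, increase spec.resources.requests.storage (e.g. to 20Gi) and re-apply the manifest; Kubernetes and the CSI driver handle the resize.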

nws-storage

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nws-storage
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: xfs
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate

The nws-storage class is similar to standard, but uses xfs as its filesystem. This is useful for PVCs with a lot of small files, for example databases or logging systems, because xfs can scale inodes dynamically. This stands in contrast to ext4, which creates a fixed number of inodes at creation time.
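
A PVC opts into this class explicitly via storageClassName; a minimal sketch (name and size are placeholders):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-data            # placeholder name
spec:
  storageClassName: nws-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi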

high-iops

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: high-iops
parameters:
  csi.storage.k8s.io/fstype: ext4
  type: Ceph-High-IOPS
allowVolumeExpansion: true
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate

The high-iops SC uses a different volume type in OpenStack called Ceph-High-IOPS, which allows up to 2000 IOPS for sustained loads and up to 4000 IOPS for bursts (60s).
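
A typical consumer would be a database StatefulSet that requests its volumes through a volumeClaimTemplate; the workload below is only a sketch with placeholder names and image:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-example              # placeholder name
spec:
  serviceName: db-example
  replicas: 1
  selector:
    matchLabels:
      app: db-example
  template:
    metadata:
      labels:
        app: db-example
    spec:
      containers:
        - name: db
          image: postgres:16    # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: high-iops
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi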

encrypted(-high-iops)

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted
parameters:
  csi.storage.k8s.io/fstype: ext4
  type: Ceph-Encrypted(-High-IOPS)
allowVolumeExpansion: true
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate

The StorageClasses starting with encrypted use our volume types that transparently enable encryption for the volume. The two classes differ the same way standard and high-iops do: one uses the normal IOPS profile and the other the high IOPS profile.

Custom

You can of course also create your own StorageClass with any of the parameters and options available to the Cinder CSI Driver and Kubernetes. The following custom StorageClass, for example, would use the Ceph-Encrypted volume type with the XFS filesystem while retaining the created PersistentVolume after the PersistentVolumeClaim is deleted:

$ kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-xfs-retain
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: Ceph-Encrypted
allowVolumeExpansion: true
provisioner: cinder.csi.openstack.org
reclaimPolicy: Retain
volumeBindingMode: Immediate
EOF
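
Afterwards the new class shows up next to the predefined ones:

$ kubectl get sc encrypted-xfs-retain

Note that with reclaimPolicy: Retain, deleting a PVC created from this class leaves the PersistentVolume (and the underlying Cinder volume) in place; it has to be cleaned up or made available for a new claim manually.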

ReadWriteMany

Unfortunately we are unable to provide volumes with the ReadWriteMany (RWX) access mode. This is on our roadmap, but we cannot commit to a timeframe at this point. For now you will need to build your own solution, based on NFS or Rook Ceph for example. We would be happy to assist you with that.

