StorageOS Feature Labels

Feature labels are a flexible way to control StorageOS storage features. Applying specific feature labels enables or disables behaviour such as compression, replication and encryption. No feature labels are present by default.

StorageOS Node labels

Nodes do not have any feature labels present by default. When StorageOS is run within Kubernetes, the StorageOS API Manager syncs any Kubernetes node labels to the corresponding StorageOS node. The Kubernetes node labels act as the “source of truth”, so labels should be applied to the Kubernetes nodes rather than to StorageOS nodes. This is because the Kubernetes node labels overwrite the StorageOS node labels on sync.

  • Compute only (storageos.com/computeonly: true / false). Specifies whether a node is compute only, meaning it acts only as a client and does not host volume data locally. Otherwise the node is hyperconverged (the default), where it can operate in both client and server modes.

You can set the computeonly label on the Kubernetes node and it will be synced to the StorageOS node (label sync is eventually consistent, with a reconciliation time of roughly one minute).

kubectl label node $NODE storageos.com/computeonly=true
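
You can verify that the label has been applied on the Kubernetes side, and will therefore be synced to StorageOS, by listing the node's labels (a minimal check; $NODE is a placeholder for the node name):

kubectl get node $NODE --show-labels | grep storageos.com/computeonly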

StorageOS Volume labels

Volumes do not have any feature labels present by default. WARNING: The encryption, caching and compression labels can only be applied at provisioning time; they cannot be changed after the volume has been provisioned.

  • Caching (storageos.com/nocache: true / false). Switches off caching for the volume.
  • Compression (storageos.com/nocompress: true / false). Switches off compression of data at rest and in transit.
  • Encryption (storageos.com/encryption: true / false). Encrypts the contents of the volume. For each volume, a key is automatically generated, stored, and linked with the PVC.
  • Failure Mode (storageos.com/failure-mode: hard, soft, alwayson, or an integer in [0, 5]). Sets the failure mode for a volume, either explicitly using a named failure mode or implicitly using a replica threshold.
  • Replication (storageos.com/replicas: integer in [0, 5]). Replicates the entire volume across nodes. Typically 1 replica is sufficient (2 copies of the data); more than 2 replicas is not recommended.
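
As a sketch of how these labels combine, the failure mode can be given either as a keyword or as a numeric replica threshold. The PVC below is illustrative only (the name is hypothetical; the "fast" storage class matches the example in Option 1 below) and requests two replicas with a soft failure mode:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-failure-mode # illustrative name
  labels:
    storageos.com/replicas: "2"
    storageos.com/failure-mode: "soft" # or a numeric threshold such as "1"
spec:
  storageClassName: "fast"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G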

To create a volume with a feature label:

  • Option 1: PVC Label

    Add the label in the PVC definition, for instance:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-3
      labels:
        storageos.com/replicas: "1" # Label <-----
    spec:
      storageClassName: "fast"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1G
    
  • Option 2: Set label in the StorageClass

    Any PVC using the StorageClass inherits the label. If the same label is set in both places, the PVC label takes precedence over the StorageClass parameter (see the example PVC after this list).

    The encryption label is not applicable to StorageClasses.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: storageos-replicated
    provisioner: storageos # CSI driver (recommended)
    parameters:
      csi.storage.k8s.io/fstype: ext4
      storageos.com/replicas: "1" # Label   <--------
      # Change the namespaces below if StorageOS doesn't run in kube-system
      csi.storage.k8s.io/controller-expand-secret-name: csi-controller-expand-secret
      csi.storage.k8s.io/controller-publish-secret-name: csi-controller-publish-secret
      csi.storage.k8s.io/node-publish-secret-name: csi-node-publish-secret
      csi.storage.k8s.io/provisioner-secret-name: csi-provisioner-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: kube-system   # Namespace that runs the StorageOS DaemonSet
      csi.storage.k8s.io/controller-publish-secret-namespace: kube-system  # Namespace that runs the StorageOS DaemonSet
      csi.storage.k8s.io/node-publish-secret-namespace: kube-system        # Namespace that runs the StorageOS DaemonSet
      csi.storage.k8s.io/provisioner-secret-namespace: kube-system         # Namespace that runs the StorageOS DaemonSet
    
    
  • Option 3: Update labels via the StorageOS UI or CLI

    Once a PVC is created, you can update the labels in StorageOS using either the UI or the CLI. Labels set this way are visible only to StorageOS and will not be synced back to the Kubernetes resource.
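
As referenced in Option 2, below is a minimal sketch of a PVC that uses the storageos-replicated StorageClass defined above; it inherits storageos.com/replicas: "1" from the StorageClass parameters. The PVC name is hypothetical, and the commented-out label shows where a PVC label would override the StorageClass parameter:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-replicated # hypothetical name
  # labels:
  #   storageos.com/replicas: "2" # a PVC label here would take precedence
spec:
  storageClassName: "storageos-replicated"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G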

StorageOS Pod labels

  • Pod fencing (storageos.com/fenced: true / false, default false). Targets a pod to be fenced in case of node failure.

For a pod to be fenced by StorageOS, a few requirements described in the Fencing Operations page need to be fulfilled.

kubectl label pod $POD storageos.com/fenced=true

It is recommended to define the fenced label in the pod's manifest, for example in a StatefulSet definition, as in the sketch below.
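
A minimal sketch of a StatefulSet whose pods carry the fenced label (all names and the image are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-statefulset # placeholder name
spec:
  serviceName: example
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
        storageos.com/fenced: "true" # pods created from this template are targeted for fencing
    spec:
      containers:
        - name: app
          image: example-image:latest # placeholder image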

N.B. The StorageOS API manager periodically syncs labels from Kubernetes PVCs to the corresponding StorageOS volume. Therefore changes to StorageOS volume labels should be made to the corresponding Kubernetes PVC rather than to the StorageOS volume directly.
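
For example, to change the replica count of an existing volume you can relabel its PVC and let the API Manager sync the change to the StorageOS volume ($PVC is a placeholder; note that the provision-time-only labels listed above cannot be changed this way):

kubectl label pvc $PVC storageos.com/replicas=2 --overwrite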