ReadWriteMany

Please note: StorageOS Project edition is required to create RWX Volumes.

StorageOS supports ReadWriteMany (RWX) access mode Persistent Volumes. A RWX PVC can be used simultaneously by many Pods in the same Kubernetes namespace for read and write operations.

StorageOS RWX Volumes are based on a shared filesystem; in our implementation, this is NFS.

Architecture

For each RWX Volume, the following components are involved:

StorageOS ReadWriteOnce (RWO) Volume

StorageOS provisions a standard Volume that provides a block device for the file system of the NFS server. This means that every RWX Volume has its own RWO Volume. This allows RWX Volumes to leverage the synchronous replication and automatic failover functionality of StorageOS, providing the NFS server with high availability.
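
Replication for the backing RWO Volume is requested through the same feature labels used for any StorageOS Volume. A minimal sketch, assuming a StorageClass named storageos (the PVC name and size are illustrative):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data                  # illustrative name
      labels:
        storageos.com/replicas: "1"      # keep one synchronous replica of the backing RWO Volume
    spec:
      storageClassName: storageos        # assumed StorageClass name
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi

With a replica present, a Node failure promotes the replica and the NFS server is restarted alongside it, as described under High availability below.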

NFS-Ganesha server

For each RWX Volume, an NFS-Ganesha server is spawned by StorageOS. The NFS server runs in user space on the Node containing the primary Volume. Each NFS server stores data on its own RWO Volume, so each Volume’s data is isolated.

StorageOS binds an ephemeral port to the host network interface for each NFS-Ganesha server. The NFS export is presented using NFS v4.2. Check the prerequisites page to see the range of ports needed for StorageOS RWX Volumes.

StorageOS API Manager

StorageOS fully integrates with Kubernetes. The StorageOS API Manager Pod monitors StorageOS RWX Volumes to create and maintain a Kubernetes Service that points towards each RWX Volume’s NFS export endpoint. The API Manager is responsible for updating the Service endpoint when a RWX Volume failover occurs.
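
The Service created by the API Manager is an ordinary Kubernetes Service in the PVC’s namespace, except that its endpoint targets the host IP and ephemeral port of the NFS-Ganesha server rather than a Pod selector. A rough sketch of the shape of these objects (all names, IPs, and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: pvc-3f56a4c2                 # illustrative; managed by the StorageOS API Manager
      namespace: default
    spec:
      ports:
        - name: nfs
          port: 2049                     # clients address the standard NFS port
          targetPort: 25705              # illustrative ephemeral port bound by NFS-Ganesha
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: pvc-3f56a4c2                 # matches the Service name
      namespace: default
    subsets:
      - addresses:
          - ip: 10.1.12.3                # illustrative IP of the Node holding the primary Volume
        ports:
          - name: nfs
            port: 25705

On failover, the API Manager only has to rewrite the endpoint to point at the new primary Node; the Service address seen by NFS clients stays stable.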

Provisioning and using RWX PVCs

The sequence in which a RWX PVC is provisioned and used is as follows:

  1. A PersistentVolumeClaim (PVC) is created with RWX access mode using any StorageOS StorageClass (see the example manifests after this list).
  2. StorageOS dynamically provisions the PV.
  3. A new StorageOS RWO Volume is provisioned internally (not visible in Kubernetes).
  4. When the RWX PVC is consumed by a Pod, an NFS-Ganesha server is instantiated on the same Node as the primary Volume, so the NFS-Ganesha server uses the RWO StorageOS Volume as its backing disk.
  5. The StorageOS API Manager publishes the host IP and port of the NFS service endpoint by creating a Kubernetes Service that points to the NFS-Ganesha server’s export endpoint.
  6. StorageOS issues an NFS mount on the Node where the Pod using the PVC is scheduled.
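
A minimal end-to-end sketch of steps 1 and 4: a RWX PVC backed by a StorageOS StorageClass, consumed by a Deployment whose Pods all write to the shared Volume. All names and sizes are illustrative, and the StorageClass is assumed to be named storageos:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-reports               # illustrative name
    spec:
      storageClassName: storageos        # assumed StorageClass name
      accessModes:
        - ReadWriteMany                  # RWX triggers the NFS-backed path described above
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: report-writer                # illustrative name
    spec:
      replicas: 3                        # several Pods mount the same RWX Volume read-write
      selector:
        matchLabels:
          app: report-writer
      template:
        metadata:
          labels:
            app: report-writer
        spec:
          containers:
            - name: app
              image: busybox:1.36
              command: ["sh", "-c", "while true; do date >> /data/$(hostname); sleep 10; done"]
              volumeMounts:
                - name: reports
                  mountPath: /data       # each Pod sees the same shared filesystem
          volumes:
            - name: reports
              persistentVolumeClaim:
                claimName: shared-reports

Note that the Deployment must run in the same namespace as the PVC; see the Notes section below.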

High availability

RWX Volumes fail over in the same way as standard RWO StorageOS Volumes. Upon detection of Node failure, the replica Volume is promoted and the NFS-Ganesha server is started on the Node containing the promoted replica. The StorageOS API Manager then updates the endpoint of the Volume’s NFS Service, causing traffic to be routed to the new NFS-Ganesha server. The NFS client on the application Node (where the user’s Pod is running) reconnects automatically.

Notes

  • All feature labels that work on RWO Volumes will also work on RWX Volumes.
  • A StorageOS RWX Volume is matched one-to-one with a PVC. Therefore, a StorageOS RWX Volume can only be accessed by Pods in the same Kubernetes namespace as its PVC.
  • StorageOS RWX Volumes support volume resize. Refer to the resize documentation for more details; a sketch follows this list.
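
Resize follows the usual Kubernetes pattern of raising spec.resources.requests.storage on the PVC. A minimal sketch, reusing the illustrative PVC from the provisioning example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-reports               # existing PVC from the example above
    spec:
      storageClassName: storageos        # assumed StorageClass name
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi                  # raised from 5Gi; applying this triggers the expansion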