Storage Benchmarking with Fio in Kubernetes

Joshua Robinson
Jun 4, 2020

Benchmarking helps you build an understanding of your underlying infrastructure and validate that environments are configured correctly. Though benchmarks can never fully reflect real workloads and user experiences, a well-done set of storage benchmarks is still useful for “burn-in” testing and for stress-testing the system with additional load.

But if your team is all-in on Kubernetes, do you need to start from scratch and create new benchmarking tools? Fortunately, it is straightforward to use existing benchmark tools, like fio (flexible i/o tester), along with native Kubernetes PersistentVolumes and Container Storage Interface (CSI) provisioners to test your storage.

I will describe two different approaches for running parallel fio tests in Kubernetes and how they provision storage differently: 1) a simple Deployment and PersistentVolumeClaim for RWX volumes, and 2) a Statefulset for RWO/RWX volumes. The fio config itself is kept simple in order to focus on Kubernetes storage concepts, and I use dynamic volume provisioning to avoid the unnecessary step of manually creating test volumes.

All files for running these tests can be found in this GitHub repository.

I assume there is already a CSI provisioner installed; see here for installation instructions for the Pure Service Orchestrator (PSO). Other CSI-compliant provisioners can also be used.

Running fio in Kubernetes is not new; there are other versions for non-CSI provisioned volumes or a benchmarking job based on fio for single-client benchmarking. My examples illustrate how to use fio for concurrent testing of CSI-provisioned volumes.

How It Works

My examples leverage four main Kubernetes concepts and I will first briefly introduce them:

  1. A configMap object is the equivalent of a configuration file; it can be injected into a running container so that it appears as a local file to the application.
  2. Deployments manage a set of near-identical application instances (replicas), creating each one using a template.
  3. A PersistentVolumeClaim is a request for storage. A CSI provisioner automatically creates a matching PersistentVolume to fulfill the request.
  4. Statefulsets manage a set of application instances, similar to Deployments but with more organization and logic. One specific difference is that each new instance will automatically create a new PersistentVolumeClaim based on a template, which results in a unique volume per replica.

It is also important to understand the difference between ReadWriteOnce (RWO) and ReadWriteMany (RWX) volumes. A RWO volume has a single, exclusive owner, whereas many applications can share access to a RWX volume, like a shared filesystem with NFS.

I run fio using either a Deployment or a Statefulset. The Kubernetes configMap stores the fio job configs in configs.yaml, which currently contains only one config but additional configs could easily be added to the same configmap. My public fio Dockerfile is a simple Alpine image based on fio version 3.19 and is only ~10MB in size.
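
As a sketch of what that configMap might look like (the data key name and the job parameters below are illustrative, not the exact contents of the repository’s configs.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: fio-job-config
data:
  fio.job: |
    ; Simple random-write job; parameters are illustrative.
    [global]
    directory=/scratch
    direct=1
    size=10g
    [randwrite]
    rw=randwrite
    bs=64k
    numjobs=4
    time_based=1
    runtime=300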

A single instance of fio does not represent modern workloads, so I will focus on approaches for running multiple concurrent fio instances.

A deployment scales by creating nearly identical pods based on a template (“spec”). For storage, the template specifies a PersistentVolumeClaim so that each pod mounts the same volume. The Deployment and PersistentVolumeClaim are two separate Kubernetes objects, meaning the volume could be shared by a variety of different applications.

A Statefulset scales with more storage-related intelligence: a volumeClaimTemplate means that each new pod creates a new PersistentVolumeClaim, and therefore a new volume. The PersistentVolumeClaim for each replica is created and managed automatically by the Statefulset, rather than defined as a separate, standalone object.

The diagram below illustrates the Deployment and Statefulset approaches and how they use storage differently.

Deployment and PVC

A Deployment makes scaling simple, with all pods connecting to a single dynamically provisioned persistentVolume. Each instance uses a different output path to avoid collisions. This approach requires a RWX volume, i.e., a single shared filesystem, and does not work with a RWO volume.
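
As a sketch of how each replica could get its own output path: fio job files support environment-variable expansion, and Kubernetes sets HOSTNAME to the pod name, so the job file can key output filenames off the pod name. The exact mechanism in my repository may differ.

; Illustrative fio job snippet, not the exact config from the repository.
; fio expands ${HOSTNAME}, which Kubernetes sets to the pod name, so each
; replica writes distinct files inside the shared /scratch volume.
[global]
directory=/scratch
filename_format=${HOSTNAME}.$jobname.$jobnum.$filenum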

The deployment uses a PersistentVolumeClaim and volumeMount to attach the storage to each fio pod. The following yaml configures a pod to use volumeMounts to access both the configmap and the PersistentVolume. Note the volumeMounts are used similarly, but the underlying volumes have two different sources, a configMap and a persistentVolumeClaim.

volumeMounts:
- name: fio-config-vol
  mountPath: /configs
- name: fio-data
  mountPath: /scratch

volumes:
- name: fio-config-vol
  configMap:
    name: fio-job-config
- name: fio-data
  persistentVolumeClaim:
    claimName: fio-claim
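
For context, the container section of the same pod template looks roughly like the following; the image name, job file path, and fio invocation are illustrative, so check the repository for the exact spec.

containers:
- name: fio
  # Image name is illustrative; the article's image is an Alpine-based
  # build of fio 3.19.
  image: <your-fio-image>
  # Run the job file injected from the configMap; output lands in /scratch.
  command: ["fio", "/configs/fio.job"]
  volumeMounts:
  - name: fio-config-vol
    mountPath: /configs
  - name: fio-data
    mountPath: /scratch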

The persistentVolumeClaim also needs to be defined, with the desired size and StorageClass. Here I chose “pure-file” for the StorageClass, which tells the CSI provisioner how to allocate the storage on the backend.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fio-claim
spec:
  storageClassName: pure-file
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Ti

In my example configuration, the deployment and persistentVolumeClaim are defined in the same yaml file, meaning that “kubectl delete -f” results in removal of the pods and the underlying data. For important data, splitting the objects into separate files helps avoid accidental deletions.
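
For example, if the Deployment and the PersistentVolumeClaim lived in separate files (file names here are illustrative), the pods could be torn down and recreated without touching the volume or its data:

kubectl delete -f fio_deployment.yaml   # removes only the fio pods
kubectl apply -f fio_deployment.yaml    # new pods reattach to the existing PVC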

Statefulset

With a Statefulset, each pod creates and attaches to a unique volume, making this approach suitable for RWO volumes as well as RWX. For example, creating 10 replicas results in 10 pods and 10 different volumes.

For some shared storage systems that scale-out via federation, using a Statefulset results in better performance because each node uses a different filesystem, i.e., no true sharing of data.

Volumes are mounted into the pods as before, except that instead of referencing a standalone PersistentVolumeClaim, the Statefulset uses a volumeClaimTemplate. The claim definition itself is the same; it is simply used as a template for creating multiple PersistentVolumeClaims, one for each replica.

volumeClaimTemplates:
- metadata:
    name: fio-data
  spec:
    storageClassName: pure-block
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 4Ti
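
To show how the pieces fit together, here is a rough sketch of the surrounding Statefulset; the replica count, serviceName, and labels are illustrative, and the claim template’s name (fio-data) is what the container’s volumeMount refers to.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: fio
spec:
  serviceName: fio            # illustrative; Statefulsets are normally paired with a headless Service
  replicas: 1
  selector:
    matchLabels:
      app: fio
  template:
    metadata:
      labels:
        app: fio
    spec:
      containers:
      - name: fio
        image: <your-fio-image>    # illustrative image name
        volumeMounts:
        - name: fio-config-vol
          mountPath: /configs
        - name: fio-data           # matches the volumeClaimTemplate name
          mountPath: /scratch
      volumes:
      - name: fio-config-vol
        configMap:
          name: fio-job-config
  volumeClaimTemplates:
  - metadata:
      name: fio-data
      labels:
        app: fio                   # label used later to delete the PVCs
    spec:
      storageClassName: pure-block
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 4Ti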

The Statefulset does not automatically delete the volumes when the Statefulset itself is deleted, so this must be done manually to reclaim space. The following command deletes all volumes with the label “app=fio”:

kubectl delete pvc -l app=fio
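
Before deleting, you can list the claims to confirm what will be removed; PVCs created from a volumeClaimTemplate follow the <template-name>-<statefulset-name>-<ordinal> naming convention.

# List the claims first; with the names above they appear as
# fio-data-fio-0, fio-data-fio-1, and so on.
kubectl get pvc -l app=fio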

Usage

To start these workloads, create the configmap and then either the Deployment or Statefulset.

kubectl apply -f configs.yaml
kubectl apply -f fio_deployment_pvc.yaml

You can then leverage Kubernetes to scale the Deployment or Statefulset and dynamically change the parallelism:

kubectl scale --replicas=X deployment.apps/fio

or

kubectl scale --replicas=X statefulset.apps/fio

The following screenshot demonstrates performance as the number of replicas is incremented every 3 minutes.

The behavior above was created with the following command line:

for i in `seq 20`; do
  kubectl scale --replicas=$i deployment.apps/fio
  sleep 180
done

To list the files being created by fio, the following command executes an ‘ls’ on one of the pods:

kubectl exec -it deployment.apps/fio -- ls /scratch/

Since all pods in the Deployment have the same view of the shared storage, I specified the deployment (i.e., any pod) instead of a specific pod.
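
With the Statefulset, by contrast, each pod sees a different volume, so you would target a specific pod by its ordinal name (pods of a Statefulset named fio are fio-0, fio-1, and so on):

kubectl exec -it fio-0 -- ls /scratch/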

If you change the job config by updating the configMap, restart the pods to pick up the new version with the “kubectl rollout” command:

kubectl rollout restart deployment.apps/fio

Summary

Integrating storage benchmarking in Kubernetes is a great way to validate infrastructure with burn-in tests. The two approaches I have taken also illustrate the difference in how a Deployment with a shared PersistentVolumeClaim and a Statefulset with volumeClaimTemplates consume storage. With CSI drivers, the provisioning and attachment of storage to pods can be entirely automated, and tools like fio provide an extensive set of testing capabilities for block or file storage backends.
