OpenEBS Local Volume Performance Overhead in Kubernetes

In my previous posts, I wrote about using OpenEBS to manage local storage in Kubernetes:

Deploying Percona Kubernetes Operators with OpenEBS Local Storage

OpenEBS for the Management of Kubernetes Storage Volumes

The primary reason to use local storage is performance, so the logical question is: how big is the performance overhead if we use OpenEBS to manage the local storage?

To look into this, I will use the sysbench fileio test on NVMe storage (Intel® SSD DC P4610).

Please note these results are for a MIXED read/write workload; for pure reads and pure writes the results will likely be different.

Direct Storage

The command to prepare data:
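A sysbench fileio prepare invocation along these lines does the job (the total file size and file count below are placeholders; pick values that make sense for your storage):

sysbench fileio --file-total-size=100G --file-num=64 prepare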

The command to run test:
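Something like the following (the thread count, run time, and reporting interval are illustrative; the IO-related options are the ones that matter for this test):

sysbench fileio --file-total-size=100G --file-num=64 \
    --file-test-mode=rndrw --file-io-mode=async --file-extra-flags=direct \
    --time=600 --threads=16 --report-interval=10 run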

This means I will use asynchronous IO to perform mixed read-write IO with O_DIRECT; by default, sysbench uses a 16KiB block size. The result on the direct storage (without Kubernetes):

OpenEBS local-pv

Now, let’s run this on the volume managed by OpenEBS local-pv (hostpath). For this, we need to do some prep work.

In the first step, I prepared a sysbench Docker image that we can use to start a pod in Kubernetes; it includes the sysbench and sysbench-tpcc benchmarks.
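A minimal sketch of such a Dockerfile (the base image and package choices here are assumptions; your image may differ):

FROM ubuntu:20.04
# sysbench for the fileio tests, git to fetch the sysbench-tpcc scripts
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y sysbench git && \
    rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/Percona-Lab/sysbench-tpcc.git /sysbench-tpcc
# Keep the container alive so we can shell into the pod and run benchmarks
CMD ["sleep", "infinity"]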

In the second step, I define a volume:
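For local-pv hostpath, a PersistentVolumeClaim against the openebs-hostpath StorageClass is enough; the claim name and requested size below are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sysbench-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi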

And define a pod to start an image with sysbench:
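A pod definition along these lines mounts the claim at /mnt/store (the pod name and image reference are placeholders for the image built above):

apiVersion: v1
kind: Pod
metadata:
  name: sysbench
spec:
  containers:
    - name: sysbench
      image: my-registry/sysbench:latest   # placeholder: the sysbench image built above
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: store
          mountPath: /mnt/store
  volumes:
    - name: store
      persistentVolumeClaim:
        claimName: sysbench-pvc   # the claim defined above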

When the pod is running, we can shell into it as:
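For example (assuming the pod name from the sketch above):

kubectl exec -it sysbench -- bash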

And execute the same commands as before, using the /mnt/store path. The results in this case:

As we can see, there is no measurable difference from the previous result. To be clear: there is no performance difference when using OpenEBS local-pv versus direct local storage.

OpenEBS zfs-localpv

Now let’s test OpenEBS zfs-localpv. ZFS adds extra features, but they come with a performance overhead, which I would also like to measure. The full documentation on how to install and enable ZFS in your Kubernetes cluster is here:

https://github.com/openebs/zfs-localpv

And I will just show the StorageClass I am using (with a 16k recordsize to match sysbench and InnoDB IO):
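A StorageClass along the lines of the example in the zfs-localpv documentation; the pool name and the compression/dedup settings below are assumptions, while recordsize=16k is the setting that matters here:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  recordsize: "16k"
  compression: "off"
  dedup: "off"
  fstype: "zfs"
  poolname: "zfspv-pool"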

And then running a similar experiment as previously described, I get the following results:

And to summarize all results in a single table (these results are only valid for the mixed read/write workload):

 

Workload              Throughput, MiB/sec    Ratio to direct
Direct read           1691.30
Localpv read          1700.06                1
Zfs-localpv read      1149.17                0.67
Direct write          1129.30
Localpv write         1134.10                1
Zfs-localpv write      767.14                0.66

 

Conclusions

I want to state that there is no IO performance penalty when using the OpenEBS localpv volume manager. ZFS-localpv is a different story: ZFS comes with additional features, but in my results they come at the cost of a performance hit of about 33%.