
OpenSDS CNCF Cluster Integration Testing

April 24, 2017

OpenSDS CNCF Cluster Integration Testing Summary

 

0. Introduction

The OpenSDS community was established around the end of 2016, and the team has been busy developing the code from the ground up ever since. We have applied a “discuss, develop, verify” approach: architects from member companies discuss the high-level architecture, developers build a prototype implementation, and the prototype is verified through testing. Near the end of March we finished stage 1 of our PoC development and applied for a CNCF Cluster environment to test the PoC code.

1. CNCF Cluster Application:

Proposal at https://github.com/cncf/cluster/issues/30

1.1 Key Propositions:

What existing problem or community challenge does this work address? (Please include any past experience or lessons learned.)

Storage is an important piece of functionality for Kubernetes. There are currently multiple in-tree and out-of-tree volume plugins developed for Kubernetes, but without testing it is hard to know whether the integration with a storage provider is functioning as it should. Moreover, for storage provided by a cloud platform (e.g., OpenStack), there should be functional testing to prove that the integration actually works.

OpenSDS is a fellow Linux Foundation collaborative project that aims to provide a unified storage service experience for Kubernetes. OpenSDS can provide various types of storage resources for Kubernetes (e.g., OpenStack block and file services, bare-metal storage devices), and we want to use the CNCF Cluster to:
a) Verify the integration of OpenSDS and Kubernetes
b) Verify the performance of the integration via metrics such as IOPS and latency
c) Utilize the Intel S3610 400GB SSDs and 2TB NLSAS HDDs for storage provisioning

1.2 Important specs for the granted CNCF Cluster resource:

Bonding: all hosts have 4x 10Gb ports connected with LACP bonding
Transit network (Internet): static assignment on interface bond0.7
Transit gateway: 10.2.0.1
Transit DNS servers: 10.1.8.3
VLAN range for internal usage: 300–319 (created on the bond0 interface)
Untagged VLAN (for custom PXE): 300
OS version: Ubuntu
Compute node spec: 2x Intel E5-2680v3 12-core, 256GB RAM, 2x Intel S3610 400GB SSD, 1x Intel P3700 800GB NVMe PCIe SSD, 1x QP Intel X710
Storage node spec: 2x Intel E5-2680v3 12-core, 128GB RAM, 2x Intel S3610 400GB SSD, 10x Intel 2TB NLSAS HDD, 1x QP Intel X710

 

2. OpenSDS Integration Testing:

2.1 OpenSDS Architecture Introduction


OpenSDS consists of three layers (a minimal sketch follows the list):
● API Layer: Provides a general RESTful API for storage resource management/orchestration operations.
● Controller Layer: Provides scheduling/management/orchestration capabilities such as discovery, pooling, lifecycle, data protection and so forth.
● Hub Layer: Provides mechanisms to connect with various storage backends.
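
To make the layering concrete, here is a minimal Go sketch (OpenSDS itself is written in Go) of an API call flowing through a controller down to a dock. The type and function names are illustrative assumptions for this post, not the actual OpenSDS source.

package main

import "fmt"

// Hub layer: abstracts one concrete storage backend ("dock").
type Dock interface {
    CreateVolume(name string, sizeGB int32) (id string, err error)
}

// Controller layer: discovery, pooling, lifecycle; here it simply
// forwards to a dock after (hypothetically) choosing one.
type Controller struct {
    dock Dock
}

func (c *Controller) CreateVolume(name string, sizeGB int32) (string, error) {
    return c.dock.CreateVolume(name, sizeGB)
}

// API layer: a RESTful handler would decode the request and call the
// controller; reduced here to a plain function for brevity.
func apiCreateVolume(c *Controller, name string, sizeGB int32) (string, error) {
    return c.CreateVolume(name, sizeGB)
}

// fakeDock stands in for a real backend adaptor so the sketch runs.
type fakeDock struct{}

func (fakeDock) CreateVolume(name string, sizeGB int32) (string, error) {
    return fmt.Sprintf("vol-%s-%dGB", name, sizeGB), nil
}

func main() {
    id, _ := apiCreateVolume(&Controller{dock: fakeDock{}}, "demo", 1)
    fmt.Println("created:", id)
}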

2.2 Basic Deployment Architecture:



For deployment, OpenSDS can be compiled into two separate modules: the Control module and the Hub module. OpenSDS Control contains the API and Controller functionalities, while OpenSDS Hub contains the southbound abstraction layer (which we call the “dock”) and all the necessary adaptors for the storage backends.

A total of five nodes were used in the testing, of which one serves as the master node and three others as worker nodes. The master node runs on a compute node provided by the cluster, whereas the worker nodes run on the storage nodes.
The OpenSDS Control module is deployed on the master node together with the Kubernetes control plane, and the OpenSDS Hub module is deployed on each storage node together with the Kubernetes kubelet.

OpenStack Cinder (with LVM and Ceph as backends), OpenStack Manila (NFS), and CoprHD (with a simulator of an EMC storage device) are deployed as the storage resources for OpenSDS.

The advantage of deploying the Hub module in a distributed way on each storage node is that it helps OpenSDS scale without needing multiple controller instances everywhere, which would inevitably introduce state-syncing and split-brain problems. Since the Hub communicates with Control via gRPC, the deployment can grow to a rather large scale. State is maintained in the Control module, whereas the Hub module is stateless. When the compute platform (Kubernetes, Mesos, OpenStack, …) queries OpenSDS for state, it gets the necessary information from the central controller, and the central controller syncs up with all the underlying hubs via a simple heartbeat implementation.
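
As a rough illustration of the heartbeat idea, the Go sketch below keeps a registry on the Control side that records the last heartbeat from each Hub and reports which hubs are alive. The gRPC transport is abstracted away, and the names, addresses, and port are illustrative assumptions rather than the OpenSDS code.

package main

import (
    "fmt"
    "sync"
    "time"
)

type hubRegistry struct {
    mu       sync.Mutex
    lastSeen map[string]time.Time // hub address -> last heartbeat time
}

// Heartbeat is what a Hub would invoke (over gRPC) on every interval.
func (r *hubRegistry) Heartbeat(hubAddr string) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.lastSeen[hubAddr] = time.Now()
}

// AliveHubs is what the Controller consults when the compute platform
// (Kubernetes, Mesos, OpenStack, ...) asks about backend state.
func (r *hubRegistry) AliveHubs(timeout time.Duration) []string {
    r.mu.Lock()
    defer r.mu.Unlock()
    var alive []string
    for addr, t := range r.lastSeen {
        if time.Since(t) < timeout {
            alive = append(alive, addr)
        }
    }
    return alive
}

func main() {
    reg := &hubRegistry{lastSeen: make(map[string]time.Time)}
    reg.Heartbeat("10.2.1.233:50051") // storage-node hubs reporting in
    reg.Heartbeat("10.2.1.234:50051")
    fmt.Println("alive hubs:", reg.AliveHubs(10*time.Second))
}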

2.3 Scenario 1: Multi-backend support and providing PVs for Kubernetes Pods


In this scenario, we successfully verified that the OpenSDS southbound adaptors for OpenStack Cinder, OpenStack Manila and CoprHD function properly. We demonstrated CRUD operations for volumes and shared volumes (files) on OpenStack, and CRUD operations for volumes on CoprHD. This verifies that, by integrating with various backends such as OpenStack and CoprHD, OpenSDS can indeed provide a unified management framework.
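
The unified-framework idea can be sketched as one CRUD interface with a stub adaptor per backend, selected by name. The interface and adaptor names below are hypothetical and only meant to illustrate the dispatch pattern, not the real OpenSDS adaptor API.

package main

import "fmt"

type VolumeDriver interface {
    Create(name string, sizeGB int32) (string, error)
    Delete(id string) error
}

// Stub adaptors; real ones would call the Cinder/Manila/CoprHD REST APIs.
type cinderAdaptor struct{}
func (cinderAdaptor) Create(n string, s int32) (string, error) { return "cinder-" + n, nil }
func (cinderAdaptor) Delete(id string) error                   { return nil }

type manilaAdaptor struct{}
func (manilaAdaptor) Create(n string, s int32) (string, error) { return "manila-" + n, nil }
func (manilaAdaptor) Delete(id string) error                   { return nil }

type coprhdAdaptor struct{}
func (coprhdAdaptor) Create(n string, s int32) (string, error) { return "coprhd-" + n, nil }
func (coprhdAdaptor) Delete(id string) error                   { return nil }

// One registry, one call path, regardless of backend.
var drivers = map[string]VolumeDriver{
    "cinder": cinderAdaptor{},
    "manila": manilaAdaptor{},
    "coprhd": coprhdAdaptor{},
}

func main() {
    for backend, d := range drivers {
        id, _ := d.Create("demo", 1)
        fmt.Printf("backend=%s created=%s\n", backend, id)
    }
}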

We also successfully tested that, via the OpenSDS FlexVolume plugin, OpenSDS can provide a Ceph volume managed by Cinder to Kubernetes Pods. Together with the multi-backend support, this verifies that by integrating OpenSDS with Kubernetes, storage resources from various backends can be provided to Kubernetes through a unified management plane.

2.4 Scenario 2: OpenSDS storage support for Kubernetes auto-scaling


In this scenario, we have the OpenSDS Hub deployed on three storage nodes with Cinder and Ceph as the backend. Volumes are created on the backends of all three storage nodes.

We first have Kubernetes create Pod1 on 10.2.1.233 and attach the volume provided by OpenSDS. Then, through the pre-configured replication controller, Kubernetes scales out Pod1 by creating Pod2 and Pod3 with the same application container (Nginx in this case) on 10.2.1.234 and 10.2.1.235. Via the YAML config file, Kubernetes notifies OpenSDS about the volume attachments for the newly created Pods. With the help of the distributed hubs on each storage node, OpenSDS is able to fulfill this task. The testing verifies that OpenSDS can provide storage support for auto-scaling in Kubernetes.

2.5 Scenario 3: OpenSDS storage support for Kubernetes Failover


In this scenario, we have the OpenSDS Hub deployed on two storage nodes with Cinder and Ceph as the backend. Volumes are created on the backends of all three storage nodes.

We first have Kubernetes create Pod1 on 10.2.1.233 and attach the volume provided by OpenSDS, and then we kill Pod1. Through the pre-configured replication controller, Kubernetes recreates Pod1 on 10.2.1.234. Via the YAML config file, Kubernetes notifies OpenSDS to attach the original volume to the migrated Pod. With the help of the distributed hubs on each storage node, as well as the replication support provided by Ceph, OpenSDS is able to fulfill this task. The testing verifies that OpenSDS can provide storage support for fast failover in Kubernetes.

 

3. OpenSDS Integration Testing Conclusions

3.1 Summary

With the help of the CNCF Cluster facilities, we successfully performed integration testing of OpenSDS with Kubernetes, as well as OpenSDS with the OpenStack storage modules and CoprHD. The testing verifies the OpenSDS PoC prototype’s capability to provide storage support for Kubernetes across multiple storage backends, including the auto-scaling and fast-failover scenarios.

3.2 Lessons Learned

3.2.1 OpenSDS – Kubernetes

1. The FlexVolume plugin’s output is read from stdout, so if you add logging to the plugin itself that prints to stdout, the kubelet will report an error (see the sketch after this list).
2. The integer type in our protobuf messages is int32, so extra care needs to be taken with int-to-int32 conversions when using gRPC.
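
A minimal sketch of the first lesson follows, assuming the standard FlexVolume driver convention of the time: the operation name arrives as the first argument and exactly one JSON result object is written to stdout, so logs must go to stderr to keep stdout parseable. The second lesson simply comes down to explicit conversions such as int32(size) when filling generated protobuf fields.

package main

import (
    "encoding/json"
    "log"
    "os"
)

// result mirrors the JSON shape a FlexVolume driver returns.
type result struct {
    Status  string `json:"status"` // "Success", "Failure", "Not supported"
    Message string `json:"message,omitempty"`
    Device  string `json:"device,omitempty"`
}

func main() {
    // Logs must NOT go to stdout: the kubelet parses stdout as JSON,
    // so stray log lines there make the plugin report an error.
    log.SetOutput(os.Stderr)

    if len(os.Args) < 2 {
        emit(result{Status: "Failure", Message: "no operation given"})
        return
    }
    op := os.Args[1]
    log.Printf("flexvolume op=%s args=%v", op, os.Args[2:])

    switch op {
    case "init":
        emit(result{Status: "Success"})
    case "attach", "detach", "mount", "unmount":
        // A real driver would call the OpenSDS hub here.
        emit(result{Status: "Success"})
    default:
        emit(result{Status: "Not supported"})
    }
}

// emit writes exactly one JSON object to stdout, as the kubelet expects.
func emit(r result) {
    json.NewEncoder(os.Stdout).Encode(r)
}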

3.2.2 OpenSDS – OpenStack

1. The mountpoint parameter is labeled as optional in the OpenStack API documentation; however, in testing we found that Cinder requires it (a hedged sketch follows this list).
2. The pool_name parameter for Ceph has to be set in the Cinder config file.
3. The OpenStack Go client (used for the OpenSDS Cinder and Manila adaptors) currently only supports Keystone v2, which has been deprecated.
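
For the first point, here is a hedged Go sketch of issuing Cinder’s os-attach volume action with the mountpoint field always populated. The endpoint, token handling, and placeholder values are illustrative only, not taken from the OpenSDS adaptor code.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

type attachBody struct {
    OSAttach struct {
        InstanceUUID string `json:"instance_uuid"`
        Mountpoint   string `json:"mountpoint"` // required in practice
    } `json:"os-attach"`
}

func attachVolume(endpoint, token, volumeID, instanceUUID, mountpoint string) error {
    var body attachBody
    body.OSAttach.InstanceUUID = instanceUUID
    body.OSAttach.Mountpoint = mountpoint

    buf, err := json.Marshal(body)
    if err != nil {
        return err
    }
    url := fmt.Sprintf("%s/volumes/%s/action", endpoint, volumeID)
    req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(buf))
    if err != nil {
        return err
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("X-Auth-Token", token)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("cinder attach failed: %s", resp.Status)
    }
    return nil
}

func main() {
    // Example invocation with placeholder values.
    _ = attachVolume("http://cinder.example:8776/v2/<project-id>", "<token>",
        "<volume-id>", "<instance-uuid>", "/dev/vdb")
}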

3.2.3 OpenSDS – CoprHD

1. CoprHD only supports openSUSE, but our cluster host OS is Ubuntu, so we had to deploy it in an openSUSE VM.
2. CoprHD’s CLI is not as friendly as its UI, which makes it harder to deploy.
3. CoprHD’s default backends are VMAX and VNX; although we had simulators, volumes could not be listed after creation.

3.2.4 CNCF Cluster:

Unfulfilled goals: We did not accomplish the second and third goals of our cluster application, mainly due to time constraints. We spent about one week setting up the environment and another week doing the integration testing. A longer testing period would be preferred in the future if possible.