Persistent Volumes Zoning Guide
When deploying on a cloud that spans multiple availability zones, additional steps are required to use persistent volumes. These steps vary depending on the version of Kubernetes that your cloud provides.
If your cloud provider is running Kubernetes 1.11.x, refer to the legacy zoning guide. There is no upgrade path from Kubernetes 1.11 to 1.12 configurations, since the underlying mechanism for binding volumes to zones changed between these versions.
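If you are unsure which version of Kubernetes your provider is running, you can check the server version with kubectl. This is a generic sanity check, not a step specific to this guide:

kubectl version --short
# The reported Server Version determines which zoning guide applies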
Create a Multi-Zoned Storage Class
When creating volumes across multiple availability zones, we need to ensure that the volumes are bound in the same zone as the pod.
This can be accomplished by creating a StorageClass with volumeBindingMode set to WaitForFirstConsumer.
Under this binding mode, a volume is not bound until a pod that uses it is scheduled; the pod, as the volume's first consumer, determines the node (and therefore the zone) that the volume binds to. This is also necessary because the default binding mode, Immediate, binds volumes as soon as they are created, which often results in scheduling conflicts when the pod is created in a different zone than its volumes.
IMPORTANT
Even if you are not using server groups, you will still need to set the binding mode to WaitForFirstConsumer when your cloud has multiple zones. Some cloud providers may not require this, but for performance it's recommended that pods and volumes always be placed in the same zone.
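Before creating a new storage class, it can be useful to check the binding mode of the storage classes already in the cluster. The following is a generic kubectl query, not part of the cluster configuration itself:

# List each storage class alongside its volume binding mode
kubectl get storageclass -o custom-columns=NAME:.metadata.name,BINDINGMODE:.volumeBindingMode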
The following snippet is an example of a storage class that uses the binding mode WaitForFirstConsumer to ensure volumes are bound in the proper zone:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: gp2-multi
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Notice here that volumeBindingMode is set to WaitForFirstConsumer. Everything else can be exactly the same as your other storage classes.
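Assuming the snippet above is saved to a file named gp2-multi.yaml (a hypothetical filename), the storage class can be created and verified with kubectl:

kubectl apply -f gp2-multi.yaml
# Confirm the storage class was created
kubectl get storageclass gp2-multi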
Add the Storage Class to the Configuration
Storage classes are used by the persistent volume claim templates to dynamically create volumes for Pods.
Therefore, the final step of configuration is to specify the storage class which should be used for creating volumes in the volumeClaimTemplates.spec.storageClassName section of the spec.
In the example below, the gp2-multi storage class we just created is specified as the storage class for provisioning volumes:
spec:
  servers:
  - name: data-nodes
    size: 3
    services:
    - data
    - index
    pod:
      volumeMounts:
        default: gp2-multi
        data: gp2-multi
  volumeClaimTemplates:
  - metadata:
      name: gp2-multi
    spec:
      storageClassName: gp2-multi
      resources:
        requests:
          storage: 1Gi
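Once the cluster is deployed and its pods are scheduled, you can confirm that each volume was provisioned in the same zone as its pod. The zone label shown below (failure-domain.beta.kubernetes.io/zone) is the one applied by Kubernetes releases of this era; newer releases use topology.kubernetes.io/zone instead:

# Check that the claims created from the template are bound
kubectl get pvc
# List the persistent volumes along with the zone each was provisioned in
kubectl get pv -L failure-domain.beta.kubernetes.io/zone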
Multi-Dimensional Scaling Across Zones
The most practical use case for deploying Kubernetes across multiple availability zones is to prevent data loss in the event of an outage. Therefore, Couchbase server groups are often used in these environments, and when combined with persistent volumes, an even higher level of resilience to disaster can be achieved.
The following example shows the use of server groups with persistent volumes on a cloud that’s configured with multiple availability zones:
spec:
  servers:
  - name: zone-a
    size: 3
    services:
    - data
    - index
    serverGroups:
    - us-east-1a
    pod:
      volumeMounts:
        default: gp2-multi
        data: gp2-multi
  - name: zone-b
    size: 3
    services:
    - data
    - index
    serverGroups:
    - us-east-1b
    pod:
      volumeMounts:
        default: gp2-multi
        data: gp2-multi
  - name: zone-c
    size: 3
    services:
    - data
    - index
    serverGroups:
    - us-east-1c
    pod:
      volumeMounts:
        default: gp2-multi
        data: gp2-multi
  volumeClaimTemplates:
  - metadata:
      name: gp2-multi
    spec:
      storageClassName: gp2-multi
      resources:
        requests:
          storage: 1Gi
Here we have three zones, each able to consume volumes provided by a single storage class named gp2-multi.
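To verify that the pods in each server group landed on nodes in the expected zone, you can cross-reference pod placement with node zone labels. As above, the label name matches Kubernetes releases of this era:

# Show which node each Couchbase pod was scheduled on
kubectl get pods -o wide
# Show the zone label of each node so pod placement can be matched to zones
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone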
See the CouchbaseCluster configuration documentation for more detailed information about defining a cluster with persistent volumes.