Setting up Persistent Volumes in Kubernetes
To use persistent volumes with the Couchbase Autonomous Operator, you first need to set them up in Kubernetes.
This topic outlines how to set up Kubernetes to support persistent volumes. Only dynamic volume provisioning using storage classes is currently supported.
Create a Storage Class
Dynamic Storage Classes
Storage classes define how physical volumes are created through the provided storage backend.
For instance, the AWS storage class creates volumes through the aws-ebs provisioner:
kind: StorageClass
...
provisioner: kubernetes.io/aws-ebs
Many cloud providers include default storage classes with their Kubernetes installations. To check if your environment already has a default storage class, run the following command:
kubectl get storageclass
You should also run the kubectl describe command on any of the returned storage classes to ensure that its provisioner matches the storage system that you want to use for creating volumes.
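For example, to inspect a class named standard (a placeholder; substitute one of the names returned by the previous command):
kubectl describe storageclass standard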
If you do not see your storage class listed in the above command, then you will need to add it to Kubernetes. Different providers offer different ways of configuring their provisioners. Refer to the following links for information about how to configure a storage class for one of the supported providers:
Provider | Information
---|---
AWS | See the kubernetes.io/aws-ebs provisioner in the Kubernetes storage classes documentation
Azure Disk | See the kubernetes.io/azure-disk provisioner in the Kubernetes storage classes documentation
Ceph RBD | See the kubernetes.io/rbd provisioner in the Kubernetes storage classes documentation
GCE | See the kubernetes.io/gce-pd provisioner in the Kubernetes storage classes documentation
GlusterFS | See the kubernetes.io/glusterfs provisioner in the Kubernetes storage classes documentation
Portworx | See the kubernetes.io/portworx-volume provisioner in the Kubernetes storage classes documentation
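As a concrete illustration, a minimal dynamic storage class for AWS might look like the following sketch (the class name gp2 and the type parameter are assumptions; adjust them for your environment):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2         # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2         # EBS volume type; assumption for this example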
Local Storage Class (Experimental)
Local storage volume provisioning is not supported in production. For production use-cases, use dynamic volume provisioning.
When using local storage or manually created persistent volumes, create a storage class with the local provisioner:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myssd
provisioner: local
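Save the storage class to a file and create it with kubectl (the file name storageclass.yaml is illustrative):
kubectl create -f storageclass.yaml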
This storage class is able to claim any persistent volume with the same storageClassName in its spec.
For example, if you want your storage class named myssd to claim 10Gi volumes, then you should create the following persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: myssd
  local:
    path: /dev/sc
    fsType: ext4
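As with the other specs in this topic, save the persistent volume to a file and create it with kubectl (the file name pv.yaml is illustrative); volumes that have not yet been claimed are listed with a status of Available:
kubectl create -f pv.yaml
kubectl get persistentvolumes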
When creating a Couchbase cluster, the storageClass will claim this volume and attach it to your pod.
Be aware that when using local volumes, you will need to create at least the minimum number of volumes that you expect your cluster to use. This is because auto-scaling and recovery have undefined behavior when the expected number of volumes is not available, or when pods are restarted across nodes. Because of this undefined behavior, local volumes should be reserved for development deployments only.
Manually-created volumes can also use cloud storage. Refer to the Kubernetes documentation for the full list of persistent volume sources that can be used in place of local sources.
Verify Storage Class
Once you’ve created a storage class, you can verify its ability to claim or request volumes by creating a Persistent Volume Claim (PVC).
The following example creates a claim that will use your storage class to provide persistent volumes, where <MY-STORAGE-CLASS> is the name of your storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-test-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: <MY-STORAGE-CLASS>
  resources:
    requests:
      storage: 1Gi
Save the spec for this PersistentVolumeClaim to a file and create the claim using kubectl:
kubectl create -f pvc.yaml
After creation, verify that a persistent volume is created for the claim:
kubectl get persistentvolumes
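You can also check the status of the claim itself; once a volume has been provisioned and bound to it, the STATUS column shows Bound:
kubectl get pvc my-test-claim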
If no persistent volume is created, it’s very likely that the storage class' provisioner was unable to create the requested volume. In that case, you will need to debug further by checking the status of the claim and referring to the documentation of the associated provider:
kubectl describe pvc my-test-claim
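Once you have verified that provisioning works, you can delete the test claim. Note that, depending on the reclaim policy of the storage class (dynamically provisioned volumes default to Delete), removing the claim may also remove the underlying volume:
kubectl delete pvc my-test-claim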
Add Storage Class to Couchbase Cluster Configuration
Once Kubernetes is set up to provide persistent volumes, the final step is to add the storage class to your CouchbaseCluster specification. This involves adding volumeMounts, which specify the paths within the pods that you want to persist, along with the storage class that should be used to create the volumes.
The following is a snippet of a three-node cluster that is configured to use persistent volumes for its data services:
---
spec:
  securityContext:
    fsGroup: 1000
  servers:
  - size: 3
    name: all_services
    services:
    - data
    pod:
      volumeMounts:
        default: couchbase
        data: couchbase
  volumeClaimTemplates:
  - metadata:
      name: couchbase
    spec:
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
Notice that the pod volume mounts specified within servers.pod.volumeMounts (couchbase) match the name of a volumeClaimTemplate (also named couchbase), and that this template refers to the storage class that was created previously in spec.volumeClaimTemplates.spec.storageClassName (my-storage-class).
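Assuming the snippet above is part of a complete CouchbaseCluster spec saved in a file such as couchbase-cluster.yaml (an illustrative name), create or update the cluster in the usual way:
kubectl create -f couchbase-cluster.yaml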
See the CouchbaseCluster documentation to further understand how to define a cluster with persistent volumes.
Setting the fsGroup
fsGroup is a Kubernetes security parameter that is used to ensure that the user that runs a particular container has access to the data on the persistent volume that is attached to it.
Depending on your setup, persistent volumes may only be attached if you set the fsGroup to a specific value or value range.
Accepted fsGroup values are set by the Kubernetes cluster administrator and are usually set on the namespace/project that the Operator is running in, or in the security context constraint attached to the service account that is running Couchbase pods.
If you are running a basic Kubernetes cluster, then you should be able to set the fsGroup to any value.
If you are running a basic OpenShift cluster, then you need to check the project that the Operator is running in to get the accepted fsGroup range:
# oc get project operator-example-namespace -o json
{
    "apiVersion": "project.openshift.io/v1",
    "kind": "Project",
    "metadata": {
        "annotations": {
            "openshift.io/description": "",
            "openshift.io/display-name": "",
            "openshift.io/requester": "developer",
            "openshift.io/sa.scc.mcs": "s0:c12,c9",
            "openshift.io/sa.scc.supplemental-groups": "1000150000/10000",
            "openshift.io/sa.scc.uid-range": "1000150000/10000"
        },
        "creationTimestamp": "2018-09-04T20:39:38Z",
        "name": "operator-example-namespace",
        "resourceVersion": "312376",
        "selfLink": "/apis/project.openshift.io/v1/projects/operator-example-namespace",
        "uid": "a42f48b0-b082-11e8-9a10-020859cce73e"
    },
    "spec": {
        "finalizers": [
            "openshift.io/origin",
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Active"
    }
}
In the output, note that the annotations contain an entry for openshift.io/sa.scc.supplemental-groups with a value of 1000150000/10000. The value is in the form <start>/<size>, which means that the fsGroup must be set to a valid value between 1000150000 and 1000160000:
---
spec:
  securityContext:
    fsGroup: 1000150000
  servers:
  - size: 3
    services:
    - data
    pod:
      volumeMounts:
        default: couchbase
        data: couchbase
  volumeClaimTemplates:
  - metadata:
      name: couchbase
    spec:
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
If you’re running a non-standard setup and having trouble with persistent volumes not being attached to pods for security reasons, then contact your cluster administrator for a valid fsGroup value or range.