Setting up Persistent Volumes
This topic outlines how to set up Kubernetes for the dynamic provisioning of Persistent Volumes. Only dynamic volume provisioning using storage classes is currently supported.
Create Storage Class
Storage classes define how physical volumes are created by the underlying storage backend. For instance, the AWS storage class creates volumes through the aws-ebs provisioner:
kind: StorageClass
...
provisioner: kubernetes.io/aws-ebs
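As a minimal sketch, a complete StorageClass manifest for this provisioner might look like the following (the class name and the gp2 volume type are illustrative; pick values appropriate for your environment):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: kubernetes.io/aws-ebs
parameters:
  # EBS volume type; gp2 is a common general-purpose default
  type: gp2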
Many cloud providers integrate storage classes with their installation. To check if your environment already has a default storage class, run the following command:
kubectl get storageclass
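Depending on your Kubernetes version, the output will look something like the following (the class name and provisioner here are illustrative). A class marked (default) is used for claims that do not specify a storage class:
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   2d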
You should also run the kubectl describe command on any of the returned storage classes to ensure that its provisioner matches the storage system that you want to use for creating volumes.
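For example, to inspect a class named standard (substitute one of the names returned by the previous command):
kubectl describe storageclass standard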
If you do not see your storage class listed in the above command, then you will need to add it to Kubernetes. Different providers offer different ways of configuring their provisioners. Refer to the following docs for information about how to configure a storage class for one of the supported providers:
- AWS
- GCE
- GlusterFS
- Azure Disk
- Ceph RBD
- Portworx
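For example, once you have saved a manifest like the one above to storageclass.yaml, you can create the class and, if desired, mark it as the cluster default using the standard storageclass.kubernetes.io/is-default-class annotation (note that your provider or distribution may manage the default class for you):
kubectl create -f storageclass.yaml
kubectl patch storageclass my-storage-class -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'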
Verify Storage Class
Once you've created a storage class, you can verify its ability to provision volumes by creating a Persistent Volume Claim (PVC). The following example creates a claim that will use your storage class to provision a persistent volume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <MY-STORAGE-CLASS>
  resources:
    requests:
      storage: 1Gi
Replace <MY-STORAGE-CLASS> with the name of your storage class. Save the spec for this PersistentVolumeClaim to a file and create the claim using kubectl:
kubectl create -f pvc.yaml
After creation, verify that a persistent volume is created for the claim:
kubectl get persistentvolumes
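You can also check the claim itself; when dynamic provisioning succeeds, the claim's status changes to Bound:
kubectl get pvc my-test-claim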
If no persistent volume is created, it's very likely that the storage class's provisioner was unable to create the requested volume. In this case, you will need to debug further by checking the status of the claim and referring to the documentation of the associated provider:
kubectl describe pvc my-test-claim
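Once you have verified that provisioning works, you can delete the test claim; the underlying volume is then reclaimed according to the storage class's reclaim policy:
kubectl delete pvc my-test-claim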
Add Storage Class to Couchbase Cluster Configuration
Once Kubernetes is set up to provide persistent volumes, the final step is to add the storage class to your CouchbaseCluster specification. This involves adding volumeMounts, which specify the paths within the pods that you want to persist, along with volumeClaimTemplates that reference the storage class to use for creating volumes. The following is a snippet of a three-node cluster that is configured to use persistent volumes for its data service:
---
spec:
  securityContext:
    fsGroup: 1000
  servers:
    - size: 3
      services:
        - data
      pod:
        volumeMounts:
          default: couchbase
          data: couchbase
  volumeClaimTemplates:
    - metadata:
        name: couchbase
      spec:
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
Notice that the volume mounts specified within servers.pod.volumeMounts (couchbase) match the name of a volumeClaimTemplate that is also named couchbase, and that this template refers to the storage class we created through spec.volumeClaimTemplates.spec.storageClassName (my-storage-class).
See the CouchbaseCluster Configuration Guide to further understand how to define a cluster with persistent volumes.
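After the cluster is created, you can confirm that a claim was provisioned for each pod by listing the claims in the namespace the cluster is running in:
kubectl get pvc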
Setting the fsGroup
fsGroup is a security parameter that is used to ensure that the user running a particular container has access to the data on the persistent volume attached to it. Depending on your setup, persistent volumes may only be attached if you set the fsGroup to a specific value or value range. Accepted fsGroup values are set by the Kubernetes cluster administrator, and are usually set on the namespace/project that the operator is running in, or in the security context constraint attached to the service account running Couchbase pods.
If you are running a basic Kubernetes cluster, then you should be able to set the fsGroup to any value. If you are running a basic OpenShift cluster, then you need to check the project that the operator is running in to get the accepted fsGroup range. This can be done with the command below:
# oc get project operator-example-namespace -o json
{
    "apiVersion": "project.openshift.io/v1",
    "kind": "Project",
    "metadata": {
        "annotations": {
            "openshift.io/description": "",
            "openshift.io/display-name": "",
            "openshift.io/requester": "developer",
            "openshift.io/sa.scc.mcs": "s0:c12,c9",
            "openshift.io/sa.scc.supplemental-groups": "1000150000/10000",
            "openshift.io/sa.scc.uid-range": "1000150000/10000"
        },
        "creationTimestamp": "2018-09-04T20:39:38Z",
        "name": "operator-example-namespace",
        "resourceVersion": "312376",
        "selfLink": "/apis/project.openshift.io/v1/projects/operator-example-namespace",
        "uid": "a42f48b0-b082-11e8-9a10-020859cce73e"
    },
    "spec": {
        "finalizers": [
            "openshift.io/origin",
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Active"
    }
}
In the output of this command, note that the annotations contain an entry for openshift.io/sa.scc.supplemental-groups with a value of 1000150000/10000. This value has the form <start>/<size>, which means that the fsGroup must be set to a valid value between 1000150000 and 1000159999.
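If you only need the range, you can filter the project JSON for the relevant annotation:
oc get project operator-example-namespace -o json | grep supplemental-groups
The following example shows the cluster specification from the previous section with spec.securityContext.fsGroup set to a value within this range: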
---
spec:
  securityContext:
    fsGroup: 1000150000
  servers:
    - size: 3
      services:
        - data
      pod:
        volumeMounts:
          default: couchbase
          data: couchbase
  volumeClaimTemplates:
    - metadata:
        name: couchbase
      spec:
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
If you are running a non-standard setup and are having trouble with persistent volumes not being attached to pods for security reasons, contact your cluster administrator for a valid fsGroup range for your target storage type.