About the CouchbaseCluster Configuration File
To use the Couchbase Autonomous Operator to deploy a Couchbase Server cluster, you need to create a CouchbaseCluster configuration file that describes how the cluster should be configured and maintained by the Operator. Like all Kubernetes configurations, Couchbase Server clusters are defined using either YAML or JSON (YAML is preferred by Kubernetes) and then pushed into Kubernetes.
The following CouchbaseCluster configuration shows all of the available parameters. (Note that this configuration is not valid and is meant for example purposes only.)
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
  namespace: default
spec:
  baseImage: couchbase/server
  version: enterprise-5.5.2
  paused: false
  antiAffinity: true
  tls:
    static:
      member:
        serverSecret: couchbase-server-tls
      operatorSecret: couchbase-operator-tls
  authSecret: my-secret
  exposeAdminConsole: true
  adminConsoleServices:
  - data
  exposedFeatures:
  - xdcr
  softwareUpdateNotifications: true
  serverGroups:
  - us-east-1a
  - us-east-1b
  - us-east-1c
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
  disableBucketManagement: false
  logRetentionTime: 604800s
  logRetentionCount: 20
  cluster:
    dataServiceMemoryQuota: 256
    indexServiceMemoryQuota: 256
    searchServiceMemoryQuota: 256
    eventingServiceMemoryQuota: 256
    analyticsServiceMemoryQuota: 1024
    indexStorageSetting: default
    autoFailoverTimeout: 120
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120
    autoFailoverServerGroup: false
  buckets:
  - name: default
    type: couchbase
    memoryQuota: 1024
    replicas: 1
    ioPriority: high
    evictionPolicy: value-eviction
    conflictResolution: seqno
    enableFlush: true
    enableIndexReplica: false
  servers:
  - size: 1
    name: all_services
    services:
    - data
    - index
    - query
    - search
    - eventing
    - analytics
    serverGroups:
    - us-east-1a
    pod:
      couchbaseEnv:
      - name: ENV1
        value: value
      resources:
        limits:
          cpu: 4
          memory: 8Gi
          storage: 100Gi
        requests:
          cpu: 2
          memory: 8Gi
          storage: 50Gi
      labels:
        couchbase_services: all
      nodeSelector:
        instanceType: large
      tolerations:
      - key: app
        operator: Equal
        value: cbapp
        effect: NoSchedule
      automountServiceAccountToken: false
      volumeMounts:
        default: couchbase
        data: couchbase
        index: couchbase
        analytics:
        - couchbase
        - couchbase
        logs: couchbase
  volumeClaimTemplates:
  - metadata:
      name: couchbase
    spec:
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
Top-Level Definitions
The following are the top-level parameters for a CouchbaseCluster configuration:
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
  namespace: default
apiVersion
This field specifies the API version for the Couchbase integration. As the integration changes over time, new API versions are introduced to add features. For any given release, the API versions that are supported by that Operator are specified in the documentation. It is recommended that you upgrade to the latest supported API version whenever possible.
Field rules: This field is required and must be set to a valid API Version. The value of this field cannot be changed after the cluster is created.
kind
This field specifies that this Kubernetes configuration will use the custom Couchbase controller to manage the cluster.
Field rules: This field is required and must always be set to CouchbaseCluster. The value of this field cannot be changed after the cluster is created.
metadata
Allows setting a name for the Couchbase cluster and a namespace that the cluster should be deployed in.
Field rules: A name is highly recommended, but not required. If values are not set in the metadata field, defaults will be used: the value of name will be autogenerated and the value of namespace will be set to default. The values of these fields cannot be changed after the cluster is created.
spec
This section describes the top-level parameters related to a Couchbase cluster deployment.
spec:
  baseImage: couchbase/server
  version: enterprise-5.5.2
  paused: false
  antiAffinity: true
  tls:
    static:
      member:
        serverSecret: couchbase-server-tls
      operatorSecret: couchbase-operator-tls
  authSecret: my-auth-secret
  exposeAdminConsole: true
  adminConsoleServices:
  - data
  exposedFeatures:
  - xdcr
  softwareUpdateNotifications: true
  serverGroups:
  - us-east-1a
  - us-east-1b
  - us-east-1c
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
  disableBucketManagement: false
  logRetentionTime: 604800s
  logRetentionCount: 20
baseImage
This field specifies the container image that should be used.
Field rules: The baseImage field is required and should be set to couchbase/server unless you want Kubernetes to pull the Couchbase Server Docker container from a different location. The value of this field cannot be changed after the cluster is created.
version
This field specifies the version number of Couchbase Server to install. This should match an available version number in the Couchbase Docker repository. It may never be changed to a lower version number than the version that is currently running.
Field rules: The version field is required and should be set to the version of Couchbase Server that should be used to build the cluster. The value of this field may be raised to perform an upgrade, but can never be lowered below the version that is currently running.
paused
This field specifies whether or not the Operator is currently managing this cluster. This parameter should generally be set to false, but may be set to true if you decide to make manual changes to the cluster. By pausing the Operator you can change the cluster configuration without having to worry about the Operator reverting the changes. However, before resuming the Operator, ensure that the Kubernetes configuration matches the cluster configuration.
Field rules: The paused field is not required and defaults to false if not specified.
antiAffinity
This field specifies whether or not two pods in this cluster can be deployed on the same Kubernetes node. In a production setting, this parameter should always be set to true in order to reduce the chance of data loss if a Kubernetes node crashes.
Field rules: The antiAffinity field is not required and defaults to false if not specified. The value of this field cannot be changed after the cluster is created.
tls
This field is optional and controls whether the Operator uses TLS for communication with the cluster. It also sets the TLS certificates that are used by Couchbase clients and XDCR. Refer to the TLS documentation for more information.
authSecret
This field specifies the name of a Kubernetes Secret that holds the user name and password of the Couchbase super-user.
Field rules: The authSecret field is required and should reference the name of a Kubernetes Secret that already exists. The value of this field cannot be changed after the cluster is created.
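A Secret suitable for this field can be defined with a standard Kubernetes manifest. As a sketch (the secret name and credentials are placeholders, and the assumption that the Operator reads the username and password keys should be checked against the Operator release notes):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-auth-secret      # referenced by spec.authSecret
type: Opaque
stringData:                 # stringData accepts plain, unencoded values
  username: Administrator   # assumed key name for the super-user name
  password: example-password  # assumed key name for the password
```

The Secret must exist in the same namespace as the CouchbaseCluster before the cluster is created.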
exposeAdminConsole
This field specifies whether or not the Couchbase Server Web Console should be exposed externally. The Web Console is exposed using a NodePort service, and the port for the service can be found in the output when describing this cluster. This parameter may be changed while the cluster is running, and the Operator will create or destroy the NodePort service as appropriate.
Field rules: The exposeAdminConsole field is not required and defaults to false if not specified. If set to true, the specification must also have the adminConsoleServices property defined.
adminConsoleServices
When the Couchbase Server Web Console is exposed with the exposeAdminConsole property, opening a browser session to the Web Console is, by default, load balanced across all pods in the cluster to provide high availability. However, the Web Console displays different features based on the services that are running on the particular pod that it's connected to.
This property allows the UI service to be constrained to pods running one or more specific services. The services that you specify act as constraints: only pods running all of those services are selected, so care must be used. For example, suppose the cluster is deployed with multi-dimensional scaling, with the data service on one set of pods and the analytics service on another, mutually exclusive set. Specifying data and analytics as your Web Console services list would then make the Web Console inaccessible, because no pods match all the constraints (no pods are running both the data and analytics services).
If you require access to a specific pod running a specific service, this can also be achieved by using the admin option of the exposedFeatures property. This allows access via a node port; you must connect directly to the IP address of the node that the pod is running on. The assigned node port is available via the cluster status structure returned by kubectl describe, or via the output of kubectl get services. Refer to the services documentation for more information.
Field rules: The adminConsoleServices list is not required and defaults to an empty list. Valid item names are data, index, query, search, eventing, and analytics, and each must be unique. An empty list means that any node in the cluster may be chosen when connecting to the Couchbase Server Web Console.
exposedFeatures
This field specifies a list of per-pod features to expose on the cluster network (as opposed to the pod network). These define sets of ports which are required to support the specified features. The supported values are as follows:
admin: Exposes the admin API and UI.
xdcr: Exposes the ports necessary to support XDCR via L3 connectivity at the cluster network layer.
client: Exposes all client services, including data, views, query, full-text search, and analytics.
Field rules: The exposedFeatures list is optional; no feature sets are exposed to the cluster network if unset.
softwareUpdateNotifications
This field specifies whether or not software update notifications are displayed in the Couchbase UI. This provides a visual indication as to whether a software update is available and should be applied in order to increase functionality or fix defects.
Field rules: The softwareUpdateNotifications field is optional and defaults to false if not specified. This setting can be modified at any point in the cluster life cycle.
serverGroups
Setting the server groups field enables automatic management of Couchbase server groups. The end user is responsible for adding labels to their Kubernetes nodes; these labels are used to evenly distribute nodes across server groups so that the cluster is tolerant to the loss of an entire data center (or any other desired failure domain). Nodes are labelled with a key of failure-domain.beta.kubernetes.io/zone and an arbitrary name string. Multiple nodes may have the same server group, which allows multiple pods to be scheduled there regardless of anti-affinity settings. An example of applying the label is as follows:
kubectl label nodes ip-172-16-0-10 failure-domain.beta.kubernetes.io/zone=us-east-1a
Because the list of server groups to use is explicit, the end user has flexibility in controlling exactly where pods will be scheduled; for example, one cluster may reside in one set of server groups, and another cluster in another set.
At present the scheduling simply stripes pods across the server groups: each new pod is run in the server group with the fewest existing cluster members. This is performed on a per-server-configuration basis to ensure that individual classes of servers are equally distributed for high availability. For each class of server configuration you may choose to override the set of server groups to schedule across; see the documentation under the spec.servers.serverGroups configuration key.
The server group feature does not support service redistribution at this time, so scaling the set of server groups will not cause any pods to be moved to make best use of the new topology, or to be evacuated from a removed server group.
Field rules: The serverGroups field is optional. If set, pods will be scheduled across the specified set of server groups. The server groups must be set at cluster creation time, and currently should be assumed to be immutable.
securityContext
This field is a Kubernetes PodSecurityContext object which is attached to all pods that are created by the Operator. If unspecified, this will default to the couchbase user, mount attached volumes as that user, and ensure that the containers are running as non-root. You may override the default behavior if using a custom container image or for testing purposes.
Field rules: The securityContext field is optional. If set, this will be attached to all new pods that are created by the Operator. This field should not be modified during the cluster lifecycle.
disableBucketManagement
This field specifies whether to ignore the bucket configuration. When disableBucketManagement is set to false (the default), the Operator has sole control over creating the buckets that are specified in the configuration (and deleting the buckets that are not).
If set to true, the creation and deletion of buckets must be done manually using the Couchbase Server Web Console, CLI, REST API, or SDK. Even if buckets are specified in the configuration, as long as disableBucketManagement is set to true, the Operator will not create or delete any buckets.
Field rules: The disableBucketManagement field is optional and defaults to false.
logRetentionTime
This field is used in conjunction with the logs persistent volume mount. The value can be anything that can be parsed by the Go ParseDuration function. If specified, this controls the retention period that log volumes are kept for after their associated pods have been deleted. If not specified (the default), or if set to zero (e.g. 0s), log volumes are retained indefinitely.
It is recommended to specify this field when using ephemeral server classes so that potentially sensitive information is not retained indefinitely. This allows compliance with data privacy legislation.
Field rules: The logRetentionTime field is optional and defaults to "". If specified, it must contain a decimal number and a unit suffix such as s, m, or h. The decimal number cannot be negative.
logRetentionCount
This field is used in conjunction with the logs persistent volume mount. If specified, this controls the maximum number of log volumes that can be kept after their associated pods have been deleted. If this threshold is passed, log volumes are deleted starting with the oldest first. If not specified or zero, log volumes are retained indefinitely.
It is recommended to specify this field when using ephemeral server classes so that an unlimited number of persistent volumes cannot be created. This ensures that storage quotas are not exhausted and that significant costs are not incurred.
Field rules: The logRetentionCount field is optional and defaults to 0. The value must be a non-negative integer.
spec.cluster
This section describes the various Couchbase cluster settings. This section is not meant to encompass every setting that is configurable on Couchbase. Cluster settings not mentioned here can be managed manually.
spec:
  cluster:
    dataServiceMemoryQuota: 4096
    indexServiceMemoryQuota: 1024
    searchServiceMemoryQuota: 2048
    eventingServiceMemoryQuota: 256
    analyticsServiceMemoryQuota: 1024
    indexStorageSetting: plasma
    autoFailoverTimeout: 120
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120
    autoFailoverServerGroup: false
dataServiceMemoryQuota
The amount of memory to assign to the data service if it is present on a specific Couchbase node. The amount of memory is defined in Megabytes (MB).
Field rules: The dataServiceMemoryQuota field is required and must be set to a value greater than or equal to 256. Keep in mind that the sum of all memory quotas must be no more than 80% of a pod's available memory.
indexServiceMemoryQuota
The amount of memory to assign to the index service if it is present on a specific Couchbase node. The amount of memory is defined in Megabytes (MB).
Field rules: The indexServiceMemoryQuota field is required and must be set to a value greater than or equal to 256. Keep in mind that the sum of all memory quotas must be no more than 80% of a pod's available memory.
searchServiceMemoryQuota
The amount of memory to assign to the search service if it is present on a specific Couchbase node. The amount of memory is defined in Megabytes (MB). This parameter defaults to 256 MB if it is not set.
Field rules: The searchServiceMemoryQuota field is required and must be set to a value greater than or equal to 256. Keep in mind that the sum of all memory quotas must be no more than 80% of a pod's available memory.
eventingServiceMemoryQuota
The amount of memory to assign to the eventing service if it is present on a specific Couchbase node. The amount of memory is defined in Megabytes (MB).
Field rules: The eventingServiceMemoryQuota field is required and must be set to a value greater than or equal to 256. Keep in mind that the sum of all memory quotas must be no more than 80% of a pod's available memory.
analyticsServiceMemoryQuota
The amount of memory to assign to the analytics service if it is present on a specific Couchbase node. The amount of memory is defined in Megabytes (MB).
Field rules: The analyticsServiceMemoryQuota field is required and must be set to a value greater than or equal to 1024. Keep in mind that the sum of all memory quotas must be no more than 80% of a pod's available memory.
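To make the 80% rule concrete, here is a sketch assuming a pod memory request of 8Gi (8192 MB), which gives a quota budget of roughly 6553 MB; all figures are illustrative:

```yaml
# Assumed pod memory request: 8Gi = 8192 MB; 80% budget ≈ 6553 MB.
cluster:
  dataServiceMemoryQuota: 3072       # MB
  indexServiceMemoryQuota: 1024
  searchServiceMemoryQuota: 256
  eventingServiceMemoryQuota: 256
  analyticsServiceMemoryQuota: 1024
  # Sum = 5632 MB, safely within the ~6553 MB budget.
```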
indexStorageSetting
Specifies the backend storage type to use for the index service. If the cluster already contains a Couchbase Server instance running the index service, then this parameter cannot be changed until all Couchbase instances running the index service are removed.
Field rules: The indexStorageSetting field is required and must be set to either plasma or memory-optimized. The value of this field can only be changed when there are no index nodes in the cluster.
autoFailoverTimeout
Specifies the auto-failover timeout in seconds. The Operator relies on the cluster to automatically fail over nodes before removing them, so setting this field to an appropriate value is important.
Field rules: The autoFailoverTimeout field is required and must be in the range 5-3600.
autoFailoverMaxCount
Specifies the maximum number of failover events that can be tolerated before manual intervention is required. For example, if a bucket has 2 replicas, we can tolerate 2 pods failing over. This count also applies to entire server groups.
Field rules: The autoFailoverMaxCount field is required and must be in the range 1-3.
autoFailoverOnDataDiskIssues
Specifies whether a node will automatically fail over on data disk issues.
Field rules: The autoFailoverOnDataDiskIssues field is required and must be true or false.
autoFailoverOnDataDiskIssuesTimePeriod
Specifies the time period to wait before automatically failing over a node experiencing data disk issues. This field’s units are in seconds.
Field rules: The autoFailoverOnDataDiskIssuesTimePeriod field is only required if autoFailoverOnDataDiskIssues is set to true. The value must be in the range 5-3600.
spec.buckets
This section describes one or more buckets that must be created in the cluster.
If you remove an already specified bucket from the configuration, the Operator will delete that bucket from the cluster.
spec:
  buckets:
  - name: default
    type: couchbase
    memoryQuota: 1024
    replicas: 1
    ioPriority: high
    evictionPolicy: value-eviction
    conflictResolution: seqno
    enableFlush: true
    enableIndexReplica: false
name
This field specifies the name of the bucket. This parameter is required when specifying a bucket.
Field rules: The name field is required and must be a string consisting of characters and numbers. The value of this field cannot be changed after the bucket is created.
type
This field specifies the type of bucket to create. This parameter can be set to couchbase, ephemeral, or memcached. If the type is not specified, it defaults to couchbase.
Field rules: The type field is required and must be set to either couchbase, ephemeral, or memcached. The value of this field cannot be changed after the bucket is created.
memoryQuota
This field specifies the amount of memory to allocate to this bucket in Megabytes (MB). If the quota is not specified, it defaults to 100.
Field rules: The memoryQuota field is required and must be set to a value greater than or equal to 100.
replicas
This field specifies the number of replicas that should be created for this bucket. This value may be set between 0 and 3 inclusive. If the number is not set, it defaults to 1. Note that this parameter has no effect for the memcached bucket type.
Field rules: The replicas field is required for buckets of type couchbase and ephemeral, and must be set between 0 and 3. Memcached buckets ignore values in this field. Changing the value of this field will cause a rebalance to occur.
ioPriority
This field sets the IO priority of background threads for this bucket. This option is valid for Couchbase and Ephemeral buckets only. Memcached buckets will ignore values in this field.
Field rules: The ioPriority field is required for buckets of type couchbase and ephemeral, and must be set to either high or low. Memcached buckets ignore values in this field. Changing the value of this field will cause downtime while the bucket is restarted.
evictionPolicy
This field sets the memory-cache eviction policy for this bucket. This option is valid for Couchbase and Ephemeral buckets only.
Couchbase buckets support either valueOnly or fullEviction. Specifying the valueOnly policy means that each key stored in this bucket must be kept in memory. This is the default policy; using this policy improves the performance of key-value operations, but limits the maximum size of the bucket. Specifying the fullEviction policy means that the performance of key-value operations is impacted, but the maximum size of the bucket is unbounded.
Ephemeral buckets support either noEviction or nruEviction. Specifying noEviction means that the bucket will not evict items from the cache if the cache is full; this type of eviction policy should be used for in-memory database use cases. Specifying nruEviction means that items not recently used will be evicted from memory when all the memory in the bucket is used; this type of eviction policy should be used for caching use cases.
Field rules: The evictionPolicy field is required for buckets of type couchbase and ephemeral. Memcached buckets ignore values in this field. Changing the value of this field will cause downtime while the bucket is restarted.
conflictResolution
This field specifies the bucket’s conflict resolution mechanism which is to be used if a conflict occurs during cross-datacenter replication (XDCR). There are two supported mechanisms: sequence-based and timestamp-based.
The sequence-based conflict resolution mechanism selects the document that has been updated the greatest number of times since the last sync. For example, if one cluster has updated a document twice since the last sync, and the other cluster has updated the document three times, the document updated three times is selected regardless of the specific times at which these updates took place.
The timestamp-based conflict resolution mechanism selects the document with the most recent timestamp. This is only supported when all of the clocks on all of the nodes are fully synchronized.
Field rules: The conflictResolution field is required for buckets of type couchbase and ephemeral, and can be set to either seqno or lww. Memcached buckets ignore values in this field. The value of this field cannot be changed after the bucket has been created.
enableFlush
This field specifies whether or not to enable the flush command on this bucket. This parameter defaults to false if it is not specified. This parameter only affects Couchbase and Ephemeral buckets.
Field rules: The enableFlush field can be set to either true or false. If this parameter is not specified, it defaults to false.
enableIndexReplica
This field specifies whether or not to enable view index replicas for this bucket. This parameter defaults to false if it is not specified. This parameter only affects Couchbase buckets.
Field rules: The enableIndexReplica field is required for buckets of type couchbase and can be set to either true or false. Memcached and Ephemeral buckets ignore values in this field.
spec.servers
You must specify at least one node specification, and possibly several. A node specification is used to allow multi-dimensional scaling of a Couchbase cluster with Kubernetes.
spec:
  servers:
  - size: 1
    name: all_services
    services:
    - data
    - index
    - query
    - search
    - eventing
    - analytics
    serverGroups:
    - us-east-1a
    pod:
      couchbaseEnv:
      - name: CB_ENV_VAR
        value: value
      resources:
        limits:
          cpu: 4
          memory: 8Gi
          storage: 100Gi
        requests:
          cpu: 2
          memory: 8Gi
          storage: 50Gi
      labels:
        couchbase_services: all
      nodeSelector:
        instanceType: large
      tolerations:
      - key: app
        operator: Equal
        value: cbapp
        effect: NoSchedule
Specification rules: The servers portion of the specification is required and must always contain at least one server definition. There must also be at least one definition that contains the data service.
size
This field specifies the number of nodes of this type that should be in the cluster. This allows the user to scale up different parts of the cluster as necessary. If this parameter is changed at runtime, the Operator will automatically scale the cluster.
Field rules: The size field is required and must be set to a value greater than or equal to 1.
name
This field specifies a name for this group of servers.
Field rules: The name field is required and must be unique in comparison to the name field of other server definitions. The value of this field cannot be changed after a server has been defined.
services
This field specifies a list of services that should be run on nodes of this type. Users can specify data, index, query, search, eventing, and analytics in the list. At least one service must be specified, and all clusters must contain at least one node specification that includes the data service.
Field rules: The services list is required and must contain at least one service. Valid values for services are data, index, query, search, eventing, and analytics. The values of this list cannot be changed after a server has been defined.
serverGroups
This controls the set of server groups to schedule pods in. The functionality is identical to that of the top-level serverGroups field, but overrides it, allowing the end user to specify exactly where pods of an individual server/service configuration are scheduled. See the main documentation for details.
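A sketch of a multi-dimensional layout using this override; the zone names, server-class names, and sizes are illustrative:

```yaml
spec:
  serverGroups:            # cluster-wide default set of server groups
  - us-east-1a
  - us-east-1b
  servers:
  - size: 2
    name: data_nodes
    services:
    - data                 # data pods are striped across both groups (the default)
  - size: 1
    name: analytics_nodes
    services:
    - analytics
    serverGroups:          # override: pin analytics pods to a single group
    - us-east-1a
```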
spec.servers.pod
The pod policy defines settings that apply to all pods deployed with this node configuration. A pod always contains a single running instance of Couchbase Server.
spec:
  servers:
  - pod:
      couchbaseEnv:
      - name: CB_ENV_VAR
        value: value
      resources:
        limits:
          cpu: 4
          memory: 8Gi
          storage: 100Gi
        requests:
          cpu: 2
          memory: 8Gi
          storage: 50Gi
      labels:
        couchbase_services: all
      nodeSelector:
        instanceType: large
      tolerations:
      - key: app
        operator: Equal
        value: cbapp
        effect: NoSchedule
      volumeMounts:
        default: couchbase
        data: couchbase
        index: couchbase
        analytics:
        - couchbase
        - couchbase
        logs: couchbase
couchbaseEnv
This field specifies the environment variables (as key-value pairs) that should be set when the pod is started. This section is optional.
Field rules: The value of this field cannot be changed after a server has been defined.
resources
spec:
  servers:
  - pod:
      resources:
        limits:
          cpu: 2
          memory: 8Gi
          storage: 100Gi
        requests:
          cpu: 1.5
          memory: 8Gi
          storage: 50Gi
limits: This field defines the maximum amount of CPU, memory, and storage that the pods created from this node specification may consume.
requests: This field defines the minimum amount of CPU, memory, and storage that the pods created from this node specification will reserve on a node.
In general, CPU limits are specified as the number of threads required on physical hardware (vCPUs if in the cloud), and memory is specified in bytes. Refer to the Kubernetes documentation for more information about managing compute resources.
labels
Labels are key-value pairs that are attached to objects in Kubernetes. They are intended to specify identifying attributes of objects that are meaningful to the user and do not directly imply semantics to the core system. Labels can be used to organize and select subsets of objects. They do not need to be unique across multiple objects. This section is optional.
Labels added in this section will apply to all pods in this cluster. Note that by default the following labels are added to each pod, and should not be overridden:
app: couchbase
couchbase_cluster: <metadata:name>
In the sample configuration file referenced in this topic, the label would be couchbase_cluster: cb-example.
In addition, each pod carries a label of the format couchbase_node: <metadata:name>-<gen node id>. In the sample configuration file referenced in this topic, the label for the first pod would be couchbase_node: cb-example-0000.
For more information, see the Kubernetes documentation about labels.
Field rules: The value of this field cannot be changed after a server has been defined.
nodeSelector
This field specifies a key-value map of the constraints on node placement for pods. For a pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). If this section is not specified, then Kubernetes will place the pod on any available node. For more information, see the Kubernetes documentation about label selectors.
Field rules: The value of this field cannot be changed after a server has been defined.
tolerations
This field specifies the taints that pods in this node specification will tolerate. A taint marks a node so that it repels any pods that do not explicitly tolerate it. In the sample configuration file referenced in this topic, the pods tolerate the taint app=cbapp:NoSchedule, and so may be scheduled onto nodes carrying that taint; pods without this toleration cannot be. Taints and tolerations are useful when you have nodes dedicated to particular workloads, for example to keep the database and an application that uses Couchbase from running on the same node. For more information, see the Kubernetes documentation on taints and tolerations. The tolerations section is optional.
Field rules: The value of this field cannot be changed after a server has been defined.
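For reference, the taint that the sample toleration matches can be applied with kubectl taint nodes <node-name> app=cbapp:NoSchedule, or declared in a Node manifest as sketched below; the node name is hypothetical:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: ip-172-16-0-11      # hypothetical node
spec:
  taints:
  - key: app
    value: cbapp
    effect: NoSchedule      # repels all pods lacking a matching toleration
```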
spec.servers.pod.volumeMounts
spec:
  servers:
  - pod:
      volumeMounts:
        default: couchbase
        data: couchbase
        index: couchbase
        analytics:
        - couchbase
        - couchbase
        logs: couchbase
The volumeMounts configuration specifies the claims to use for the storage that is used by the Couchbase Server cluster.
default
This field is required when using persistent volumes. The value specifies the name of the volumeClaimTemplate that is used to create a persistent volume for the default path, which is always /opt/couchbase/var/lib/couchbase. The claim must match the name of a volumeClaimTemplate within the spec.
Field rules: The default volume mount may be specified for production clusters or those that require support; consult the best practices guide for more information. It must be specified for server classes running any of the data, index, or analytics services, but may be used with any service. It cannot be specified at the same time as the logs volume mount for a given server configuration.
data
This field is the name of the volumeClaimTemplate that is used to create a persistent volume for the data path. When specified, the data path will be /mnt/data. The claim must match the name of a volumeClaimTemplate within the spec. If this field is not specified, then a separate volume will not be created and the data directory will be part of the default volume claim.
Field rules: The data volume mount can only be specified when the default volume claim is also specified. The spec.servers.services field must also contain the data service.
index
This field is the name of the volumeClaimTemplate that is used to create a persistent volume for the index path. When specified, the index path will be /mnt/index. The claim must match the name of a volumeClaimTemplate within the spec. If this field is not specified, then a separate volume will not be created and the index directory will be part of the default volume claim.
Field rules: The index volume mount can only be specified when the default volume claim is also specified. The spec.servers.services field must also contain the index service.
analytics
This field is the name of the volumeClaimTemplate that is used to create persistent volumes for the analytics paths. When specified, the analytics paths will be /mnt/analytics-00 (where 00 denotes the first path), with all subsequent paths having incrementing values. The claims must match the name of a volumeClaimTemplate within the spec. If this field is not specified, then separate volumes will not be created and the analytics directories will be part of the default volume claim.
Field rules: The analytics volume mounts can only be specified when the default volume claim is also specified. The spec.servers.services field must also contain the analytics service.
logs
This field is the name of the volumeClaimTemplate that is used to create a persistent volume for the logs path. When specified, the logs path will be /opt/couchbase/var/lib/couchbase/logs. The claim must match the name of a volumeClaimTemplate within the spec. If this field is not specified, then a volume will not be created.
Field rules: The logs volume mount may be specified for production clusters that require support; consult the best practices guide for more information. It may only be specified for server classes not running any of the data, index, or analytics services. It cannot be specified at the same time as the default volume mount for a given server configuration.
spec.volumeClaimTemplates
spec:
  volumeClaimTemplates:
  - metadata:
      name: couchbase
    spec:
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
Defines a template of a persistent volume claim. At runtime, the Operator will create a persistent volume from this template for each pod. Claims can request volumes from various types of storage systems as identified by the storage class name.
spec.volumeClaimTemplates.spec
storageClassName
The storageClassName is required by the claim. A storage class provides a way for administrators to describe the classes of storage they offer. If no storageClassName is specified, then the default storage class is used.
Refer to the Kubernetes documentation for more information about storage classes.
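Tying the claim templates back to the volume mounts, a persistent server class might pair them as sketched below. The template names, class name, and sizes are illustrative; the key constraint is that each claim name referenced under volumeMounts must match the metadata.name of a template:

```yaml
spec:
  servers:
  - size: 3
    name: data_nodes         # hypothetical server class
    services:
    - data
    pod:
      volumeMounts:
        default: couchbase       # mounted at /opt/couchbase/var/lib/couchbase
        data: couchbase-data     # mounted at /mnt/data
  volumeClaimTemplates:
  - metadata:
      name: couchbase          # matches volumeMounts.default above
    spec:
      storageClassName: "standard"
      resources:
        requests:
          storage: 10Gi
  - metadata:
      name: couchbase-data     # matches volumeMounts.data above
    spec:
      storageClassName: "standard"
      resources:
        requests:
          storage: 50Gi
```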