CouchbaseCluster Resource
To use the Couchbase Autonomous Operator to deploy a Couchbase Server cluster, you need to create a CouchbaseCluster configuration file that describes how the cluster should be configured and maintained by the Operator. Like all Kubernetes configurations, Couchbase Server clusters are defined using either YAML or JSON (YAML is preferred by Kubernetes) and then pushed into Kubernetes.
The following CouchbaseCluster configuration shows all of the available parameters. (Note that this configuration is not valid and is meant for example purposes only.)
CouchbaseCluster configuration parameters
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  image: couchbase/server:6.6.0
  paused: false
  antiAffinity: true
  softwareUpdateNotifications: true
  upgradeStrategy: RollingUpgrade
  hibernate: false
  hibernationStrategy: Immediate
  recoveryPolicy: PrioritizeDataIntegrity
  serverGroups:
  - us-east-1a
  - us-east-1b
  - us-east-1c
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
  platform: aws
  cluster:
    clusterName: cb-example
    dataServiceMemoryQuota: 256Mi
    indexServiceMemoryQuota: 256Mi
    searchServiceMemoryQuota: 256Mi
    eventingServiceMemoryQuota: 256Mi
    analyticsServiceMemoryQuota: 1Gi
    indexStorageSetting: memory_optimized # DEPRECATED
    indexer:
      threads: 1
      logLevel: info
      maxRollbackPoints: 2
      memorySnapshotInterval: 200ms
      stableSnapshotInterval: 5s
      storageMode: memory_optimized
    autoFailoverTimeout: 120s
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120s
    autoFailoverServerGroup: false
    autoCompaction:
      databaseFragmentationThreshold:
        percent: 30
        size: 1Gi
      viewFragmentationThreshold:
        percent: 30
        size: 1Gi
      parallelCompaction: false
      timeWindow:
        start: 02:00
        end: 06:00
        abortCompactionOutsideWindow: true
      tombstonePurgeInterval: 72h
  security:
    adminSecret: my-secret
    rbac:
      managed: true
      selector:
        matchLabels:
          cluster: cb-example
    ldap:
      hosts:
      - ldap.example.com
      bindDN: "cn=admin,dc=example,dc=com"
      bindSecret: cb-example-auth
      userDNMapping:
        template: "uid=%s,ou=People,dc=example,dc=com"
  networking:
    exposeAdminConsole: true
    adminConsoleServiceTemplate:
      metadata:
        labels:
          key: value
      spec:
        type: LoadBalancer
    adminConsoleServices: # DEPRECATED
    - data
    adminConsoleServiceType: LoadBalancer # DEPRECATED
    exposedFeatureServiceTemplate:
      metadata:
        labels:
          key: value
      spec:
        type: LoadBalancer
    exposedFeatureServiceType: LoadBalancer # DEPRECATED
    exposedFeatureTrafficPolicy: Local # DEPRECATED
    serviceAnnotations: # DEPRECATED
      my-annotation: my-value
    loadBalancerSourceRanges: # DEPRECATED
    - "10.0.0.0/8"
    networkPlatform: Istio
    tls:
      static:
        serverSecret: couchbase-server-tls
        operatorSecret: couchbase-operator-tls
      clientCertificatePolicy: mandatory
      clientCertificatePaths:
      - path: san.email
        prefix: ""
        delimiter: "@"
      nodeToNodeEncryption: All
    dns:
      domain: couchbase-0.us-east-1.example.com
  logging:
    logRetentionTime: 604800s
    logRetentionCount: 20
  enablePreviewScaling: false
  servers:
  - size: 1
    name: all_services
    services:
    - data
    - index
    - query
    - search
    - eventing
    - analytics
    serverGroups:
    - us-east-1a
    autoscaleEnabled: false
    env:
    - name: ENV1
      value: value
    envFrom:
    - secretRef:
        name: environment-secret
    resources:
      limits:
        cpu: 4
        memory: 8Gi
      requests:
        cpu: 2
        memory: 8Gi
    volumeMounts:
      default: couchbase
      data: couchbase
      index: couchbase
      analytics:
      - couchbase
      - couchbase
    pod:
      metadata:
        labels:
          couchbase_services: all
        annotations:
          couchbase.acme.com: production
      spec:
        nodeSelector:
          instanceType: large
        tolerations:
        - key: app
          operator: Equal
          value: cbapp
          effect: NoSchedule
        priorityClassName: high-priority
        automountServiceAccountToken: false
        serviceAccountName: couchbase-pods
        imagePullSecrets:
        - name: my-pull-secret
        dnsPolicy: None
        dnsConfig:
          nameservers:
          - "10.0.0.2"
          searches:
          - cluster.local
  buckets:
    managed: true
    selector:
      matchLabels:
        cluster: cb-example
  xdcr:
    managed: true
    remoteClusters:
    - name: remote
      uuid: 611e50b21e333a56e3d6d3570309d7e3
      hostname: 10.1.0.2:31851
      authenticationSecret: my-xdcr-secret
      tls:
        secret: my-xdcr-tls-secret
      replications:
        selector:
          matchLabels:
            cluster: cb-example
  backup:
    managed: true
    image: couchbase/operator-backup:6.6.0
    serviceAccountName: couchbase-backup
    s3Secret: s3-secret
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    selector:
      matchLabels:
        cluster: cb-example
    nodeSelector:
      class: couchbase-backup
    tolerations:
    - key: couchbase-backup
      operator: Exists
      effect: NoSchedule
  monitoring:
    prometheus:
      enabled: true
      image: couchbase/exporter:1.0.3
      authorizationSecret: cb-metrics-token
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
  volumeClaimTemplates:
  - metadata:
      name: couchbase
      labels:
        my-label: my-value
      annotations:
        my-annotation: my-value
    spec:
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
Top-Level Definitions
The following are the top-level parameters for a CouchbaseCluster configuration:
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
  namespace: default
apiVersion
This field specifies the API version for the Couchbase integration. The API version changes over time as new features are added. The API versions supported by any given Operator release are specified in its documentation. It is recommended that you upgrade to the latest API version whenever possible.
Field rules: This field is required and must be couchbase.com/v2.
kind
This field specifies that this Kubernetes configuration will use the custom Couchbase controller to manage the cluster.
Field rules: This field is required and must always be set to CouchbaseCluster. The value of this field cannot be changed after the cluster is created.
metadata.name
This field sets the name of the Couchbase cluster.
It must be unique within the Kubernetes namespace.
This name controls how Operator-managed resources are named.
The logical cluster name — as displayed in the Couchbase UI — may be changed with the spec.cluster.clusterName parameter.
Field rules: This field must be a valid DNS hostname string.
spec
This section describes the top-level parameters related to a Couchbase cluster deployment.
spec:
  image: couchbase/server:6.6.0
  paused: false
  antiAffinity: true
  softwareUpdateNotifications: true
  upgradeStrategy: RollingUpgrade
  hibernate: false
  hibernationStrategy: Immediate
  recoveryPolicy: PrioritizeDataIntegrity
  serverGroups:
  - us-east-1a
  - us-east-1b
  - us-east-1c
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
  platform: aws
  enablePreviewScaling: false
spec.image
This field specifies the container image that should be used.
For Docker Hub images for use on Kubernetes, this should be similar to couchbase/server:6.6.0.
For Red Hat images for use with OCP, this should be similar to registry.connect.redhat.com/couchbase/server:6.6.0-1.
Please check the relevant registry for the most up-to-date image tags.
You may use a different location if, for example, you have a local container registry.
Image tags must contain a valid semantic version in order to dynamically enable/disable Operator features and control upgrade paths. For further information on upgrade constraints, please consult the upgrade documentation.
Field rules: The image field is required and must be a valid container image.
spec.paused
This field specifies whether or not the Operator is currently managing this cluster.
This parameter may be set to true if you decide to make manual changes to the cluster.
By disabling Operator management, you can change the cluster configuration without having to worry about the Operator reverting the changes.
However, before re-enabling the Operator, ensure that the Kubernetes configuration matches the cluster configuration.
Field rules: The paused field is optional and defaults to false if not specified.
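As an illustration, Operator management can be paused on a running cluster by patching the field in place; this sketch assumes the example cluster name cb-example used throughout this page:
kubectl patch couchbasecluster cb-example --type merge -p '{"spec":{"paused":true}}'
Setting the value back to false resumes management, at which point the Operator will reconcile any drift between the Kubernetes configuration and the cluster.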
spec.antiAffinity
This field specifies whether or not two pods in this cluster can be deployed on the same Kubernetes node. In a production setting this parameter should always be set to true in order to reduce the chance of data loss in case a Kubernetes node crashes.
Field rules: The antiAffinity field is optional and defaults to false if not specified.
spec.softwareUpdateNotifications
This field specifies whether or not software update notifications are displayed in the Couchbase UI. This provides a visual indication as to whether a software update is available and should be applied in order to increase functionality or fix defects.
Field rules: The softwareUpdateNotifications field is optional and defaults to false if not specified. This setting can be modified at any point in the cluster life-cycle.
spec.upgradeStrategy
This field controls how the Operator performs upgrades/modifications to pods when required.
The default value if not specified, RollingUpgrade, upgrades one pod at a time.
Rolling upgrades are safer in that they limit the scope of an upgrade and can be rolled back; however, they take longer to complete.
The value ImmediateUpgrade can be used to upgrade all pods at the same time.
Immediate upgrades are much faster than rolling upgrades.
Field rules: The upgradeStrategy field is optional and must be one of RollingUpgrade or ImmediateUpgrade.
spec.hibernate
This field defines whether the cluster is hibernating or not.
When set to false, the default, the cluster operates as usual or is woken up from hibernation.
When set to true, the cluster is deprovisioned as defined by spec.hibernationStrategy.
For more information, read the hibernation concept page.
Field rules: This field is optional and must be a boolean.
spec.hibernationStrategy
This field defines how the Operator should hibernate the cluster when requested.
When Immediate, the default, the Operator will terminate all pods associated with the Couchbase cluster immediately.
For more information, read the hibernation concept page.
Field rules: This field is optional and must be Immediate.
spec.recoveryPolicy
This field defines how the Operator should attempt to recover the cluster in the event of any pods failing, or when waking-up a hibernating cluster.
When PrioritizeDataIntegrity, the default, is set, the Operator will take a cautious approach to recovery.
When a pod is reported as down with this policy, the Operator will fully delegate responsibility for failing over Couchbase Server instances to Couchbase Server.
If Couchbase Server deems auto-failover dangerous, or likely to result in data loss, then the Operator will loop continually until the affected pods become either active again or failed.
This may require manual intervention.
When PrioritizeUptime is set, the Operator will wait for a period beyond the requested auto-failover timeout before taking action.
If Couchbase Server does not automatically fail over pods within this time limit, the Operator will forcefully fail over the pods, at which point the Operator is unblocked and becomes able to recover the cluster.
This policy may cause data loss; however, it is useful for recovery of ephemeral clusters — where no persistent volumes exist and data is lost anyway — and for hibernation, where Couchbase Server refuses to fail over ephemeral pods.
Field rules: This field is optional and must be one of PrioritizeDataIntegrity or PrioritizeUptime.
spec.serverGroups
Setting the server groups field enables automatic management of Couchbase server groups. The end user is responsible for adding labels to their Kubernetes nodes which will be used to evenly distribute nodes across server groups so the cluster is tolerant to the loss of an entire data center (or any other desired failure domain). Nodes are labeled with a key of failure-domain.beta.kubernetes.io/zone and an arbitrary name string. Multiple nodes may have the same server group to allow multiple pods to be scheduled there regardless of anti-affinity settings. An example of applying the label is as follows:
kubectl label nodes ip-172-16-0-10 failure-domain.beta.kubernetes.io/zone=us-east-1a
As the list of server groups to use is explicit, the end user has flexibility in controlling exactly where pods will be scheduled, e.g. one cluster may reside in one set of server groups, and another cluster in another set of server groups.
At present the scheduling simply stripes pods across the server groups; each new pod is run in a server group with the fewest existing cluster members. This is performed on a per-server-configuration basis to ensure individual classes of servers are equally distributed for high availability. For each class of server configuration you may choose to override the set of server groups to schedule across; see the documentation under the spec.servers.serverGroups configuration key.
The server group feature also does not support service redistribution at this time, so scaling the set of server groups will not result in any pods being 'moved' to make best use of the new topology or evacuated from a removed server group.
Field rules: The serverGroups field is optional. If set, pods will be scheduled across the specified set of server groups. The server groups must be set at cluster creation time, and currently should be assumed to be immutable.
spec.securityContext
This field is a Kubernetes PodSecurityContext object which is attached to all pods that are created by the Operator. If unspecified, this will default to the couchbase user, mount attached volumes as that user, and ensure that the containers are running as non-root. You may override the default behavior if using a custom container image or for testing purposes.
For additional information, see the non-root installation how-to.
Field rules: The securityContext field is optional. If set, this will be attached to all new pods that are created by the Operator.
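As a sketch, the defaults can be overridden for a custom container image that runs under a different user; the UID and GID values below are purely illustrative:
spec:
  securityContext:
    runAsUser: 2000     # illustrative UID for a custom image
    runAsNonRoot: true
    fsGroup: 2000       # illustrative GID for volume ownership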
spec.platform
This field indicates which underlying cloud platform the Kubernetes cluster is running on and is used as a hint to correctly configure managed resources.
Field rules: The platform field is optional. The value must be one of aws, gce or azure.
spec.enablePreviewScaling
This field enables autoscaling for stateful server configurations and buckets.
A server configuration is considered stateful if it contains one of the data, index, search, eventing, or analytics services.
A bucket is considered stateful if it is a CouchbaseBucket or a CouchbaseMemcachedBucket.
Server configurations with stateful services or stateful buckets are only permitted to autoscale when enablePreviewScaling is true.
A server configuration is considered stateless if it contains only the query service.
A bucket is considered stateless if it is a CouchbaseEphemeralBucket.
When enablePreviewScaling is false, only stateless server configurations associated with stateless buckets are permitted to autoscale.
Enabling preview autoscaling is unsupported and for experimental purposes only.
Field rules: The enablePreviewScaling field is an optional boolean. The default value is false.
spec.cluster
This object allows configuration of global Couchbase cluster settings.
spec:
  cluster:
    clusterName: cb-example
    dataServiceMemoryQuota: 256Mi
    indexServiceMemoryQuota: 256Mi
    searchServiceMemoryQuota: 256Mi
    eventingServiceMemoryQuota: 256Mi
    analyticsServiceMemoryQuota: 1Gi
    indexStorageSetting: memory_optimized # DEPRECATED
    indexer:
      threads: 1
      logLevel: info
      maxRollbackPoints: 2
      memorySnapshotInterval: 200ms
      stableSnapshotInterval: 5s
      storageMode: memory_optimized
    autoFailoverTimeout: 120s
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120s
    autoFailoverServerGroup: false
    autoCompaction:
      databaseFragmentationThreshold:
        percent: 30
        size: 1Gi
      viewFragmentationThreshold:
        percent: 30
        size: 1Gi
      parallelCompaction: false
      timeWindow:
        start: 02:00
        end: 06:00
        abortCompactionOutsideWindow: true
      tombstonePurgeInterval: 72h
spec.cluster.clusterName
This field allows the cluster name — as displayed in the Couchbase UI — to be overridden.
This field will be set to metadata.name if not specified.
As metadata.name is required to be a valid DNS hostname, this field allows those limitations to be bypassed.
Field rules: This field is optional and must be a string.
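For example, a display name that would not be a valid DNS hostname (the name below is illustrative only) can still be shown in the UI:
spec:
  cluster:
    clusterName: "Production Cluster (US East)"  # not a valid DNS name, but fine here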
spec.cluster.dataServiceMemoryQuota
The amount of memory to assign — per-node — to the data service if it is present on a specific Couchbase node.
This parameter defaults to 256Mi if it is not set.
Field rules: This field is optional and must be a resource quantity greater than or equal to 256Mi. The sum of all memory quotas must be no more than 80% of a pod’s available memory.
spec.cluster.indexServiceMemoryQuota
The amount of memory to assign — per-node — to the index service if it is present on a specific Couchbase node.
This parameter defaults to 256Mi if it is not set.
Field rules: This field is optional and must be a resource quantity greater than or equal to 256Mi. The sum of all memory quotas must be no more than 80% of a pod’s available memory.
spec.cluster.searchServiceMemoryQuota
The amount of memory to assign — per-node — to the search service if it is present on a specific Couchbase node.
This parameter defaults to 256Mi if it is not set.
Field rules: This field is optional and must be a resource quantity greater than or equal to 256Mi. The sum of all memory quotas must be no more than 80% of a pod’s available memory.
spec.cluster.eventingServiceMemoryQuota
The amount of memory to assign — per-node — to the eventing service if it is present on a specific Couchbase node.
This parameter defaults to 256Mi if it is not set.
Field rules: This field is optional and must be a resource quantity greater than or equal to 256Mi. The sum of all memory quotas must be no more than 80% of a pod’s available memory.
spec.cluster.analyticsServiceMemoryQuota
The amount of memory to assign — per-node — to the analytics service if it is present on a specific Couchbase node.
This parameter defaults to 1Gi if it is not set.
Field rules: This field is optional and must be a resource quantity greater than or equal to 1Gi. The sum of all memory quotas must be no more than 80% of a pod’s available memory.
spec.cluster.indexStorageSetting
This field is deprecated and will be removed in a later release. This functionality is now provided by the spec.cluster.indexer.storageMode field.
Specifies the backend storage type to use for the index service. If the cluster already contains a Couchbase Server instance running the index service, then this parameter cannot be changed until all Couchbase instances running the index service are removed.
Field rules: The indexStorageSetting field is optional and defaults to memory_optimized. Allowed values are plasma or memory_optimized. After the Couchbase cluster is deployed, the value of this field can only be changed if there are no Index nodes in the cluster.
spec.cluster.indexer.threads
This field controls the number of threads allocated to the indexer.
When 0, a thread will be created for each CPU.
This defaults to 0.
Field rules: This field is optional and must be an integer greater than or equal to 0.
spec.cluster.indexer.logLevel
This field controls indexer logging.
Defaults to info.
Field rules: This field is optional and must be one of silent, fatal, error, warn, info, verbose, timing, debug or trace.
spec.cluster.indexer.maxRollbackPoints
This field controls how many index rollback points to maintain.
This defaults to 2.
Field rules: This field is optional and must be an integer greater than or equal to 1.
spec.cluster.indexer.memorySnapshotInterval
This field controls how frequently in-memory indexes should be snapshotted.
This defaults to 200ms.
Field rules: This field is optional and must be a valid duration greater than or equal to 1ms.
spec.cluster.indexer.stableSnapshotInterval
This field controls how frequently on-disk indexes should be snapshotted.
This defaults to 5s.
Field rules: This field is optional and must be a valid duration greater than or equal to 1ms.
spec.cluster.indexer.storageMode
This field controls the storage mode the indexer uses.
The storage mode can only be modified when no server classes exist that contain the index service.
Field rules: This field is optional and must be one of memory_optimized or plasma.
spec.cluster.autoFailoverTimeout
Specifies the auto-failover timeout. The Operator relies on the Couchbase cluster to auto-failover nodes before removing them, so setting this field to an appropriate value is important.
Field rules: The autoFailoverTimeout field is optional and must be a duration in the range 5s-3600s.
spec.cluster.autoFailoverMaxCount
Specifies the maximum number of failover events that can be tolerated before manual intervention is required. If a bucket has 2 replicas, we can tolerate 2 pods failing over. This also applies to entire server groups.
Field rules: The autoFailoverMaxCount field is required and must be in the range 1-3.
spec.cluster.autoFailoverOnDataDiskIssues
Specifies whether a node will automatically fail over on data disk issues.
Field rules: The autoFailoverOnDataDiskIssues field is required and must be true or false.
spec.cluster.autoFailoverOnDataDiskIssuesTimePeriod
Specifies the time period to wait before automatically failing over a node experiencing data disk issues.
Field rules: The autoFailoverOnDataDiskIssuesTimePeriod field is only required if autoFailoverOnDataDiskIssues is set to true. It must be a duration in the range 5s-3600s.
spec.cluster.autoCompaction
This object allows configuration of global Couchbase auto-compaction settings.
It is far more efficient to append database updates to files rather than rewriting them. This gives rise to stale data within the database file. This is known as fragmentation.
Auto compaction is required periodically to reclaim disk space caused by fragmentation. Auto compaction occurs automatically by default in Couchbase Server.
spec:
  cluster:
    autoCompaction:
      databaseFragmentationThreshold:
        percent: 30
        size: 1Gi
      viewFragmentationThreshold:
        percent: 30
        size: 1Gi
      parallelCompaction: false
      timeWindow:
        start: 02:00
        end: 06:00
        abortCompactionOutsideWindow: true
      tombstonePurgeInterval: 72h
spec.cluster.autoCompaction.databaseFragmentationThreshold.percent
This field defines the relative amount of fragmentation allowed in persistent database files before triggering compaction.
This field defaults to 30.
Field rules: This field is optional and must be an integer in the range 2-100.
spec.cluster.autoCompaction.databaseFragmentationThreshold.size
This field defines the absolute amount of fragmentation allowed in persistent database files before triggering compaction. This field is not set by default.
Field rules: This field is optional and must be a Kubernetes resource quantity.
spec.cluster.autoCompaction.viewFragmentationThreshold.percent
This field defines the relative amount of fragmentation allowed in persistent view files before triggering compaction.
This field defaults to 30.
Field rules: This field is optional and must be an integer in the range 2-100.
spec.cluster.autoCompaction.viewFragmentationThreshold.size
This field defines the absolute amount of fragmentation allowed in persistent view files before triggering compaction. This field is not set by default.
Field rules: This field is optional and must be a Kubernetes resource quantity.
spec.cluster.autoCompaction.parallelCompaction
This field defines whether auto-compaction should be performed in parallel.
Setting this field to true will potentially yield shorter compaction times.
This comes at the expense of far greater disk IO that may compete with normal database operation.
This field should only be used on storage capable of handling the greater IO requirements e.g. solid state.
Field rules: This field is optional and must be a boolean.
spec.cluster.autoCompaction.timeWindow.start
This field defines when an auto-compaction may start to run.
Auto compaction comes with a cost associated with disk IO. It may be undesirable for it to run during core business hours as it competes with normal database operation. This field allows auto-compaction to be limited to a specific time window when fewer resources are required by normal database operation. This field is not set by default.
Field rules: This field is optional and must be a string in the form HH:MM, where HH is in the range 0-23 and MM is in the range 0-59.
spec.cluster.autoCompaction.timeWindow.end
This field defines the time until which an auto-compaction may run.
Auto compaction comes with a cost associated with disk IO. It may be undesirable for it to run during core business hours as it competes with normal database operation. This field allows auto-compaction to be limited to a specific time window when fewer resources are required by normal database operation. This field is not set by default.
Field rules: This field is optional and must be a string in the form HH:MM, where HH is in the range 0-23 and MM is in the range 0-59.
spec.cluster.autoCompaction.timeWindow.abortCompactionOutsideWindow
This field defines whether to stop auto-compaction after the time window ends.
By default an auto-compaction will run until completion.
In some situations it may be desirable to stop auto-compaction to free up resources for normal database operation.
This field defaults to false if not set.
Field rules: This field is optional and must be a boolean.
spec.cluster.autoCompaction.tombstonePurgeInterval
This field defines how frequently tombstones may be purged.
Couchbase Server maintains tombstones.
These are records of deleted documents.
Tombstones are retained so that eventually consistent consumers can observe document deletion in the event of a disconnection.
This field defaults to 72h if not set.
Field rules: This field is optional and must be a valid duration in the range 1h-60d.
spec.security
This object allows configuration of global Couchbase security settings and RBAC.
spec:
  security:
    adminSecret: my-secret
    rbac:
      managed: true
      selector:
        matchLabels:
          cluster: cb-example
spec.security.adminSecret
This field specifies the name of a Kubernetes Secret that should be used as the user name and password of the Couchbase super-user.
Field rules: The adminSecret field is required and should reference the name of a Kubernetes Secret that already exists. The value of this field cannot be changed after the cluster is created.
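As an illustration, such a secret can be created with kubectl; the username and password keys are the expected layout, while the credential values here are placeholders to replace with your own:
kubectl create secret generic my-secret \
  --from-literal=username=Administrator \
  --from-literal=password=password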
spec.security.rbac
spec:
  security:
    rbac:
      managed: true
      selector:
        matchLabels:
          cluster: cb-example
spec.security.rbac.managed
This field specifies whether the Operator should manage Couchbase RBAC.
This field defaults to false, allowing the user to manually manage RBAC settings with the Couchbase web console or the Couchbase API.
If set to true, the Operator will search for CouchbaseRoleBinding resources in the namespace under the control of the spec.security.rbac.selector field.
Field rules: This field is optional and must be a boolean.
spec.security.rbac.selector
This field specifies which CouchbaseRoleBinding resources are considered when the spec.security.rbac.managed field is set to true.
If not specified, all CouchbaseRoleBinding resources are selected.
If specified, then only CouchbaseRoleBinding resources containing the same labels as specified in the label selector are selected.
For additional information, see the Couchbase resource label selection documentation.
Field rules: This field is optional and must be a Kubernetes LabelSelector object.
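For example, a CouchbaseRoleBinding that the selector above would match need only carry the corresponding label; the resource name below is illustrative and its remaining fields are omitted for brevity:
apiVersion: couchbase.com/v2
kind: CouchbaseRoleBinding
metadata:
  name: my-role-binding   # illustrative name
  labels:
    cluster: cb-example   # matches spec.security.rbac.selector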
spec.security.ldap
spec:
  security:
    ldap:
      hosts:
      - ldap.example.com
      port: 389
      bindDN: "cn=admin,dc=example,dc=com"
      bindSecret: cb-example-auth
      authenticationEnabled: true
      userDNMapping:
        template: "uid=%s,ou=People,dc=example,dc=com"
      authorizationEnabled: true
      groupsQuery: "cn=users,ou=Groups,dc=example,dc=com??base?"
      nestedGroupsEnabled: true
      encryption: StartTLSExtension
      tlsSecret: ldap-secret
spec.security.ldap.hosts
This field specifies the list of LDAP hosts the Operator should connect to for authentication.
Field rules: This field is required and must be a list of at least one hostname.
spec.security.ldap.port
This field specifies the port the Operator should use when connecting to the LDAP hosts.
Field rules: This field is optional and defaults to 389 if not specified.
spec.security.ldap.bindDN
This field specifies the Distinguished Name (DN) to bind to on the LDAP server for performing administrative tasks such as query and authentication. If not specified, the operator will attempt to connect anonymously to the specified LDAP host(s).
Field rules: This field is optional and must be a string.
spec.security.ldap.bindSecret
This field specifies the secret containing the password that the Operator should use to perform LDAP queries and authentication.
If specified, it must refer to a Kubernetes Secret containing a key named password.
If not specified, the Operator will attempt to connect anonymously to the specified LDAP host(s).
Field rules: This field is optional and must be a string.
spec.security.ldap.authenticationEnabled
This field specifies whether or not Couchbase Server allows LDAP user authentication.
This defaults to true, and the template specified in the spec.security.ldap.userDNMapping field will be used to search for the LDAP user to authenticate as an associated Couchbase Server user.
Field rules: This field is optional and must be a boolean. The default value is true.
spec.security.ldap.userDNMapping.template
This field specifies the template to use for mapping a username to a DN.
The template may contain a placeholder specified as %u to represent the Couchbase user who is attempting to gain access.
Field rules: This field is optional and must be a string. The spec.security.ldap.authenticationEnabled field must also be true.
spec.security.ldap.authorizationEnabled
This field specifies whether or not Couchbase Server is allowed to authorize authenticated LDAP users.
If set to true, the value of spec.security.ldap.groupsQuery will be used to check the LDAP group membership of a user in order to authorize them.
Field rules: This field is optional and must be a boolean. The default value is true.
spec.security.ldap.groupsQuery
This field specifies the Distinguished Name (DN) of the LDAP group containing users which can be authorized.
Field rules: This field is optional and must be a string. The spec.security.ldap.authorizationEnabled field must also be true.
spec.security.ldap.encryption
This field specifies the type of encryption to use for the connection with the LDAP server. If this value is set to TLS or StartTLSExtension, and a certificate is not specified in spec.security.ldap.tlsSecret, then Couchbase Server will attempt to use its own certificate for encryption.
Field rules: This field is optional and must be one of None, TLS or StartTLSExtension. The default value is None (to connect without encryption — this is insecure, and therefore is not recommended).
spec.networking
This object allows configuration of network related options. These options allow you to connect to Couchbase from outside the Kubernetes cluster, and secure network traffic with transport layer security (TLS).
spec:
  networking:
    exposeAdminConsole: true
    adminConsoleServiceTemplate:
      metadata:
        labels:
          key: value
      spec:
        type: LoadBalancer
    adminConsoleServices: # DEPRECATED
    - data
    adminConsoleServiceType: LoadBalancer # DEPRECATED
    exposedFeatureServiceTemplate:
      metadata:
        labels:
          key: value
      spec:
        type: LoadBalancer
    exposedFeatureServiceType: LoadBalancer # DEPRECATED
    exposedFeatureTrafficPolicy: Local # DEPRECATED
    serviceAnnotations: # DEPRECATED
      my-annotation: my-value
    loadBalancerSourceRanges: # DEPRECATED
    - "10.0.0.0/8"
    networkPlatform: Istio
    dns:
      domain: couchbase-0.us-east-1.example.com
    tls:
      static:
        serverSecret: couchbase-server-tls
        operatorSecret: couchbase-operator-tls
      clientCertificatePolicy: mandatory
      clientCertificatePaths:
      - path: san.email
        prefix: ""
        delimiter: "@"
      nodeToNodeEncryption: All
spec.networking.exposeAdminConsole
This field specifies whether or not the Couchbase Server Web Console should be exposed externally. Exposing the web console is done using a NodePort service, and the port for the service can be found in the describe output when describing this cluster. This parameter may be changed while the cluster is running, and the Operator will create/destroy the NodePort service as appropriate.
Field rules: The exposeAdminConsole field is optional and defaults to false if not specified. If set to true, the specification must also have the spec.networking.adminConsoleServices property defined.
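For example, the generated console service and its assigned node port can be inspected as follows; the cb-example-ui service name assumes a cluster named cb-example, so check kubectl get services for the actual name in your deployment:
kubectl get service cb-example-ui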
spec.networking.adminConsoleServiceTemplate
When the spec.networking.exposeAdminConsole property is set, the Operator will generate a service that exposes the administrator console.
This field allows the definition of a base Service resource.
The Operator will merge any configuration it controls on top of this template.
This field allows the definition of, for example, annotations and labels in the template metadata, and the service type in the specification.
Field rules: The adminConsoleServiceTemplate field is optional and must be a Kubernetes Service resource.
spec.networking.adminConsoleServices
This field is deprecated and will be removed in a later release. The functionality is now redundant with Couchbase Server 6.5.0 or greater.
When the Couchbase Server Web Console is exposed with the spec.networking.exposeAdminConsole property, by default, opening a browser session to the Web Console will be automatically load balanced across all pods in the cluster to provide high availability. However, the Web Console will display different features based on the services that are running on the particular pod that it’s connected to.
This property allows the UI service to be constrained to pods running one or more specific services. The services that you specify are subtractive in nature — they will only identify pods running all of those services — so care must be used. For example, if the cluster is deployed with multi-dimensional scaling, the data service on one set of pods, and the analytics service on another mutually exclusive set, then specifying data and analytics as your Web Console services list would result in the Web Console not being accessible — no pods match all the constraints, i.e. no pods are running both the data and analytics services.
If you require access to a specific pod running a specific service, this can also be achieved by using the admin option of the spec.networking.exposedFeatures property. This will allow access via a node port. You must connect directly to the IP address of the node that the pod is running on. The assigned node port is available via the cluster status structure returned by kubectl describe, or via the output of kubectl get services. Refer to the services documentation for more information.
Field rules: The adminConsoleServices list is optional and defaults to an empty list. Valid item names are data, index, query, search, eventing, and analytics, and must be unique. An empty list means that any node in the cluster may be chosen when connecting to the Couchbase Server Web Console.
spec.networking.adminConsoleServiceType
This field is deprecated and will be removed in a later release. This functionality is now provided by the spec.networking.adminConsoleServiceTemplate field.
This field defines how the admin console service is exposed. By default it will use a NodePort, which exposes the console on a random port on all nodes in the cluster. It is accessible only within the cluster or via an IP tunnel into the Kubernetes underlay network. If set to LoadBalancer, it will expose the default admin console port to the public internet via a load balancer on platforms which support this service type. Only the TLS enabled port (18091) will be exposed to public networks.
Field rules: The adminConsoleServiceType field is optional and defaults to NodePort. Allowed values are NodePort and LoadBalancer. If this field is LoadBalancer then you must also define a spec.networking.dns.domain.
spec.networking.exposedFeatures
This field specifies a list of per-pod features to expose on the cluster network (as opposed to the pod network). These define sets of ports which are required to support the specified features. The supported values are as follows:
- admin: Exposes the admin API and UI.
- xdcr: Exposes the ports necessary to support XDCR via L3 connectivity at the cluster network layer.
- client: Exposes all client services. These include data, views, query, full text search, and analytics.
Field rules: The exposedFeatures list is optional; no feature sets are exposed to the cluster network if unset.
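For instance, to expose only the XDCR and client port sets, combining values listed above:
spec:
  networking:
    exposedFeatures:
    - xdcr
    - client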
spec.networking.exposedFeatureServiceTemplate
When the spec.networking.exposedFeatures property is set, the Operator will generate a service that exposes Couchbase services per pod.
This field allows the definition of a base Service resource.
The Operator will merge any configuration it controls on top of this template.
This field allows the definition of, for example, annotations and labels in the template metadata, and the service type in the specification.
Field rules: The exposedFeatureServiceTemplate field is optional and must be a Kubernetes Service resource.
spec.networking.exposedFeatureServiceType
This field is deprecated and will be removed in a later release. This functionality is now provided by the spec.networking.exposedFeatureServiceTemplate field.
This field defines how the per Couchbase node ports are exposed. By default it will use a NodePort, which exposes the exposedFeatures on random ports per service on the Kubernetes nodes running the cluster. They are accessible only within the cluster or via an IP tunnel into the Kubernetes underlay network. If set to LoadBalancer, it will expose the default exposed service ports on the public internet via a load balancer on platforms which support this service type. Only the TLS enabled ports will be exposed to public networks.
Field rules: The exposedFeatureServiceType field is optional and defaults to NodePort. Allowed values are NodePort and LoadBalancer. If this field is LoadBalancer then you must also define a spec.networking.dns.domain.
spec.networking.exposedFeatureTrafficPolicy
This field is deprecated and will be removed in a later release. This functionality is now provided by the spec.networking.exposedFeatureServiceTemplate field.
This field defines how traffic ingressing into Kubernetes is handled.
By default this is set to Local, meaning that traffic from a public load balancer must be routed directly to the node upon which the destination Pod is resident.
This enforces consistent network performance by keeping the number of hops from the edge to the pod consistent.
Some environments may have external firewall rules in place that prevent the Operator from connecting to an externally advertised address to verify client connectivity before allowing data to reside on that node.
This field may be set to Cluster to work around this issue, masquerading connectivity checks as normal node-to-node Kubernetes traffic, but at the expense of inconsistent network paths.
Field rules: The exposedFeatureTrafficPolicy field is optional and defaults to Local. Allowed values are Local and Cluster.
spec.networking.serviceAnnotations
This field is deprecated and will be removed in a later release. This functionality is now provided by the spec.networking.adminConsoleServiceTemplate and spec.networking.exposedFeatureServiceTemplate fields.
This field defines any custom annotations to be added to console and per-pod (exposed feature) services.
Field rules: The serviceAnnotations field is optional. If specified, it must be a map of key/value pairs conforming to Kubernetes annotation rules.
spec.networking.loadBalancerSourceRanges
This field is deprecated and will be removed in a later release. This functionality is now provided by the spec.networking.adminConsoleServiceTemplate and spec.networking.exposedFeatureServiceTemplate fields.
This field defines any IPv4 prefixes that should be added to any load balancer services managed by the Operator. This is a list of address ranges that are allowed to connect to the service.
Field rules: The loadBalancerSourceRanges field is optional and must be a list of IPv4 address prefixes, e.g. 10.0.0.0/8.
spec.networking.networkPlatform
This field defines the underlying network platform in use. Typically, this field does not need to be defined.
When set to Istio, this enables DNS host aliases in Couchbase Server pods.
Without this support, Couchbase Server may connect to local services over the pod IP address, not localhost, and will be dropped by Istio when running in mTLS mode.
Field rules: The networkPlatform field is optional and must be Istio.
spec.networking.dns
This section describes the configuration of the dynamic domain name system (DDNS). When exposing services to the public internet we mandate that TLS is used to protect against plain text user names, passwords and data. DDNS is used to dynamically populate the DNS with Couchbase nodes during the cluster lifecycle so they are addressable with stable names.
Given the example domain couchbase-0.us-east-1.example.com, the Operator will annotate the admin service as requiring the name console.couchbase-0.us-east-1.example.com to be added to the DNS when using the LoadBalancer service type. When a pod is created with the name couchbase-0000, it will annotate the per-pod exposed features service as requiring the name couchbase-0000.couchbase-0.us-east-1.example.com to be added to the DNS when using the LoadBalancer service type.
The user is required to use these annotations to actually create the DNS entries as the services are created, updated and modified. The recommended solution is the Kubernetes External DNS integration. As such, services will be annotated with external-dns.alpha.kubernetes.io/hostname.
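For illustration, the annotation placed on a generated service would look similar to the following, with the hostname derived from the example domain above:
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: console.couchbase-0.us-east-1.example.com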
The DNS configuration can also be used with the NodePort exposed feature type.
This allows the cluster to be addressed with DNS over private networks, e.g. through a VPN.
In this mode of operation the Operator does not enforce the use of TLS.
To create A records for your individual Couchbase server nodes, map the service DNS name annotation to the associated pod’s node IP address.
The individual DNS per-pod exposed service A records should be dynamically collated into an SRV record. This provides a stable name for the cluster as nodes are added and removed during the cluster lifecycle, and service discovery. The resource name for the given examples should be _couchbases._tcp.couchbase-0.us-east-1.example.com, with equal priority and weight and a port value of 11207.
When creating your cluster TLS certificates you will need to add a wildcard entry in the subject alternative names list for your specified domain. For the given examples this would be DNS:*.couchbase-0.us-east-1.example.com.
Exposing database services to public networks is inherently dangerous. XDCR and client connectivity can also be achieved with secure IPSEC tunnels.
spec:
  networking:
    dns:
      domain: couchbase-0.us-east-1.example.com
spec.networking.dns.domain
This field defines the DNS A records that should be dynamically added to the public DNS for console and per-pod services. It must be a valid domain name which you have ownership of and the ability to dynamically update as services are created, updated and deleted by the Operator.
Field rules: The domain field is required when either the exposedFeatureServiceType or adminConsoleServiceType fields are configured as LoadBalancer.
spec.networking.tls
This field is optional and controls whether the Operator uses TLS for communication with the cluster. It also sets the TLS certificates that are used by Couchbase clients and XDCR. Refer to the TLS documentation for more information.
spec:
  networking:
    tls:
      static:
        serverSecret: couchbase-server-tls
        operatorSecret: couchbase-operator-tls
      clientCertificatePolicy: mandatory
      clientCertificatePaths:
      - path: san.email
        prefix: ""
        delimiter: "@"
      nodeToNodeEncryption: All
spec.networking.tls.static.serverSecret
This field defines the certificate and private key used by Couchbase Server services.
This field is required if any static TLS configuration is specified under spec.networking.tls.static.
This field references a secret containing a key chain.pem, whose value is a wildcard certificate chain signed by the cluster CA defined in spec.networking.tls.static.operatorSecret, and pkey.key, whose value is a private key corresponding to the leaf certificate.
The TLS documentation contains detailed information on generating valid certificates for your cluster.
Field rules: This field is required when spec.networking.tls.static is defined and must be a string referencing a Kubernetes Secret resource.
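For example, given chain.pem and pkey.key files generated as described in the TLS documentation, the secret can be created directly from them:
kubectl create secret generic couchbase-server-tls \
  --from-file=chain.pem \
  --from-file=pkey.key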
spec.networking.tls.static.operatorSecret
This field defines the CA certificate, and optionally a client certificate and private key, used to authenticate the Operator.
This field is required if any static TLS configuration is specified under spec.networking.tls.static.
This field references a secret containing a key ca.crt whose value is a CA certificate.
The CA certificate must be the root authority for the server certificate defined in spec.networking.tls.static.serverSecret.
The TLS documentation contains detailed information on generating valid certificates for your cluster.
If spec.networking.tls.clientCertificatePolicy is set to enable, then operatorSecret may contain a client certificate and private key.
If spec.networking.tls.clientCertificatePolicy is set to mandatory, then operatorSecret must contain a client certificate and private key.
When using client certificates, the secret must contain a key couchbase-operator.crt, whose value is a client certificate chain signed by the cluster CA defined above, and couchbase-operator.key, whose value is a private key corresponding to the leaf client certificate.
The client certificate must resolve — with spec.networking.tls.clientCertificatePaths[] — to the administrative user name defined in spec.security.adminSecret.
Field rules: This field is required when spec.networking.tls.static is defined and must be a string referencing a Kubernetes Secret resource.
spec.networking.tls.clientCertificatePolicy
This field defines the client certificate policy that Couchbase Server should use.
If not defined, Couchbase Server will use basic authentication — username and password — over server side TLS.
If set to enable, then Couchbase Server will request a client certificate, and clients may return one in order to authenticate.
If a client certificate is not returned by the client, Couchbase Server will fall back to basic authentication.
If you require certificate authentication by the Operator then you may define it in spec.networking.tls.static.operatorSecret.
If set to mandatory, then Couchbase Server will request a client certificate and the client must return one in order to authenticate.
You must define the client certificate in spec.networking.tls.static.operatorSecret.
Field rules: This field is optional and must be either enable or mandatory.
spec.networking.tls.clientCertificatePaths[]
This list defines how Couchbase Server should parse client certificates in order to extract a valid user.
Field rules: If using client certificates this list should contain at least one item.
spec.networking.tls.clientCertificatePaths[].path
This field defines where to extract the username from. Couchbase Server currently allows usernames to be defined in the certificate subject common name (CN), or subject alternative names (SAN). Valid SANs that can be parsed are URI, DNS or EMAIL records.
Field rules: This field is required and must be either subject.cn, san.uri, san.dns or san.email.
spec.networking.tls.clientCertificatePaths[].prefix
This field defines whether the username is prefixed.
This allows, for example, a CN of www.example.com to be mapped to a username of example.com by specifying a prefix of "www.".
Field rules: This field is optional and must be a string.
spec.networking.tls.clientCertificatePaths[].delimiter
This field defines whether the username is delimited.
This allows, for example, an EMAIL SAN of jane.doe@example.com to be mapped to the username jane.doe by specifying a delimiter of "@".
Field rules: This field is optional and must be a string.
spec.networking.tls.nodeToNodeEncryption
This field defines whether node-to-node encryption is enabled.
When set to All, all data between Couchbase Server nodes is encrypted.
When set to ControlPlaneOnly, only internal Couchbase Server messages are encrypted; user data is not.
If not specified, data between Couchbase Server nodes is not encrypted.
As with all encryption protocols, the increased data protection may negatively affect performance.
Field rules: This field is optional and must be either All or ControlPlaneOnly.
spec.logging
This object allows configuration of Operator logging functionality. This ensures any persistent logs do not consume excessive amounts of resource or persist for too long.
spec:
  logging:
    logRetentionTime: 604800s
    logRetentionCount: 20
spec.logging.logRetentionTime
This field is used in conjunction with the logs persistent volume mount. The value can be any valid golang duration. If specified, this controls the retention period that log volumes are kept for after their associated pods have been deleted. If not specified (the default) or zero, e.g. 0s, log volumes are retained indefinitely.
It is recommended to specify this field when using ephemeral server classes so that potentially sensitive information is not retained indefinitely. This allows compliance with data privacy legislation.
Field rules: The logRetentionTime field is optional and defaults to "". If specified, it must contain a decimal number and a unit suffix such as s, m or h. The decimal number cannot be negative.
spec.logging.logRetentionCount
This field is used in conjunction with the logs persistent volume mount. If specified, this controls the maximum number of log volumes that can be kept after their associated pods have been deleted. If this threshold is passed, log volumes are deleted starting with the oldest first. If not specified or zero, log volumes are retained indefinitely.
It is recommended to specify this field when using ephemeral server classes so an unlimited number of persistent volumes cannot be created. This ensures storage quotas are not exhausted or significant costs are incurred.
Field rules: The logRetentionCount field is optional and defaults to 0. The value must be a non-negative integer.
spec.servers[]
This object allows the configuration of the Couchbase cluster topology.
Servers are grouped into server classes, also known as multi-dimensional scaling (MDS) groups.
You must specify at least one server class.
At least one server class must contain the data service.
Modifying any field in the spec.servers object that affects the underlying pod will trigger an upgrade, as pods are immutable.
Fields that do not trigger an upgrade include:
- size — attribute modification triggers a scaling operation.
- name — attribute modification replaces the entire server class with a new one.
- services — attribute is immutable.
spec:
  servers:
  - size: 1
    name: all_services
    services:
    - data
    - index
    - query
    - search
    - eventing
    - analytics
    serverGroups:
    - us-east-1a
    autoscaleEnabled: false
    env:
    - name: ENV1
      value: value
    envFrom:
    - secretRef:
        name: environment-secret
    resources:
      limits:
        cpu: 4
        memory: 8Gi
      requests:
        cpu: 2
        memory: 8Gi
    volumeMounts:
      default: couchbase
      data: couchbase
      index: couchbase
      analytics:
      - couchbase
      - couchbase
    pod:
      metadata:
        labels:
          couchbase_services: all
        annotations:
          couchbase.acme.com: production
      spec:
        nodeSelector:
          instanceType: large
        tolerations:
        - key: app
          operator: Equal
          value: cbapp
          effect: NoSchedule
        priorityClassName: high-priority
        automountServiceAccountToken: false
        serviceAccountName: couchbase-pods
        imagePullSecrets:
        - name: my-pull-secret
        dnsPolicy: None
        dnsConfig:
          nameservers:
          - "10.0.0.2"
          searches:
          - cluster.local
spec.servers[].size
This field specifies the number of nodes of this type that should be in the cluster. This allows the user to scale up different parts of the cluster as necessary. If this parameter is changed at run time, the Operator will automatically scale the cluster.
Field rules: The size field is required and must be greater than or equal to 1.
spec.servers[].name
This field specifies a name for this group of servers.
Field rules: The name field is required and must be unique in comparison to the name field of other server definitions. The value of this field should not be changed after a server class has been defined. See spec.servers[] for more details.
spec.servers[].services
This field specifies a list of services that should be run on nodes of this type. Users can specify data, index, query, search, eventing and analytics in the list. At least one service must be specified, and all clusters must contain at least one node specification that includes the data service.
Field rules: The services list is required and must contain at least one service. Valid values for services are data, index, query, search, eventing and analytics. The values of this list cannot be changed after a server has been defined.
spec.servers[].serverGroups
This controls the set of server groups to schedule pods in. Functionality is identical to that defined in the top level specification, but overrides it and allows the end user to specify exactly where pods of individual server/service configuration are scheduled. See the main documentation for details.
spec.servers[].autoscaleEnabled
This field defines whether autoscaling is permitted for this specific server configuration.
When set to true, a CouchbaseAutoscale resource is created by the Operator, named after the enabled server configuration along with the name of the cluster.
For example, a server configuration named default will result in a CouchbaseAutoscale resource created named default.cb-example.
This resource conforms to the Kubernetes HorizontalPodAutoscaler API and can be specified as a scaleTargetRef according to the HorizontalPodAutoscalerSpec resource specification.
Field rules: This field is optional and must be a boolean. The default value is false.
spec.servers[].env
This field specifies the environment variables (as key-value pairs) that should be set when the pod is started. This section is optional.
Specified environment variables will be available in Couchbase Server containers only.
The env field is defined by the Kubernetes EnvVar type.
Field rules: This field must be a map of key-value pairs where both are strings.
spec.servers[].envFrom
This field specifies a reference to either a Secret or ConfigMap which is used to create environment variables when the pod is started.
Specified environment variables will be available in Couchbase Server containers only.
The envFrom field is defined by the Kubernetes EnvFromSource type.
Field rules: This field is optional and its value must be an array of Kubernetes EnvFromSource types.
spec.servers[].resources
Pod resources are specified using a Kubernetes ResourceRequirements type:
spec:
  servers:
  - resources:
      limits:
        cpu: 2
        memory: 8Gi
      requests:
        cpu: 1.5
        memory: 8Gi
- limits: This field defines the maximum amount of CPU and memory that the pods created in this node specification can allocate. Exceeding these limits will cause the offending pod to be terminated.
- requests: This field lets you reserve resources on a specific node. The requests section defines the minimum amount of CPU and memory that the pods created in this node specification will reserve.
In general, CPU limits are specified as the number of threads required on physical hardware (vCPUs if in the cloud), and memory is specified in bytes. Refer to the Kubernetes documentation for more information about managing compute resources.
spec.servers[].volumeMounts
spec:
  servers:
  - volumeMounts:
      default: couchbase
      data: couchbase
      index: couchbase
      analytics:
      - couchbase
      - couchbase
The volumeMounts configuration specifies the claims to use for the storage that is used by the Couchbase Server cluster.
spec.servers[].volumeMounts.default
This field is required when using persistent volumes. The value specifies the name of the volumeClaimTemplate that is used to create a persistent volume for the default path, which is always /opt/couchbase/var/lib/couchbase. The claim must match the name of a volumeClaimTemplate within the spec.
Field rules: The default volume mount may be specified for production clusters or those that require support. Please consult the best practices guide for more information. It must be specified with server classes running any of the data, index or analytics services. It may, however, be used with any service. It cannot be specified at the same time as the logs volume mount for a specific server configuration.
spec.servers[].volumeMounts.data
This field is the name of the volumeClaimTemplate that is used to create a persistent volume for the data path. When specified, the data path will be /mnt/data. The claim must match the name of a volumeClaimTemplate within the spec. If this field is not specified, then a volume will not be created and the data directory will be part of the "default" volume claim.
Field rules: The data volume mount can only be specified when the default volume claim is also specified. The spec.servers.services field must also contain the data service.
spec.servers[].volumeMounts.index
This field is the name of the volumeClaimTemplate
that is used to create a persisted volume for the index
path. When specified, the index path will be /mnt/index
. The claim must match the name of a volumeClaimTemplate
within the spec. If this field is not specified, then a volume will not be created and the index directory will be part of the "default" volume claim.
Field rules: The index volume mount can only be specified when the default volume claim is also specified. The spec.servers.services field must also contain the index service.
spec.servers[].volumeMounts.analytics[]
This field is a list of volumeClaimTemplate names used to create persisted volumes for the analytics paths. When specified, the analytics paths will be /mnt/analytics-00 (where 00 denotes the first path), with all subsequent paths having incrementing values. Each claim must match the name of a volumeClaimTemplate within the spec. If this field is not specified, then no volumes will be created and the analytics directories will be part of the "default" volume claim.
Field rules: The analytics
volume mount can only be specified when the default
volume claim is also specified. The spec.servers.services
field must also contain the analytics
service.
spec.servers[].volumeMounts.logs
This field is the name of the volumeClaimTemplate
that is used to create a persisted volume for the logs
path. When specified, the logs path will be /opt/couchbase/var/lib/couchbase/logs
. The claim must match the name of a volumeClaimTemplate
within the spec. If this field is not specified, then a volume will not be created.
Field rules: The logs volume mount may be specified for production clusters that require support. Please consult the best practices guide for more information. It may only be specified for server classes not running any of the data, index or analytics services. It cannot be specified at the same time as the default volume mount for a specific server configuration.
spec.servers[].pod
The pod
attribute is based on the Kubernetes PodTemplateSpec
structure and allows all supported configuration parameters to be modified.
The Operator will override the following fields, which therefore must not be explicitly set by the user:
- spec.command
- spec.args
- spec.containers
- spec.initContainers
- spec.restartPolicy
- spec.hostname
- spec.subdomain
- spec.securityContext (see spec.securityContext)
- spec.affinity (if spec.antiAffinity is set to true)
- spec.volumes (if spec.servers[].volumeMounts is set)
The following parameters are modified by the Operator to merge configuration required to function, and caution should be used when setting these parameters as they may be overwritten:
- metadata.labels
- metadata.annotations
- spec.nodeSelector (if spec.serverGroups or spec.servers[].serverGroups is set)
spec:
  servers:
  - pod:
      metadata:
        labels:
          couchbase_services: all
        annotations:
          couchbase.acme.com: production
      spec:
        nodeSelector:
          instanceType: large
        tolerations:
        - key: app
          operator: Equal
          value: cbapp
          effect: NoSchedule
        priorityClassName: high-priority
        automountServiceAccountToken: false
        serviceAccountName: couchbase-pods
        imagePullSecrets:
        - name: my-pull-secret
        dnsPolicy: None
        dnsConfig:
          nameservers:
          - "10.0.0.2"
          searches:
          - cluster.local
spec.servers[].pod.metadata.labels
Labels are key-value pairs that are attached to objects in Kubernetes. They are intended to specify identifying attributes of objects that are meaningful to the user and do not directly imply semantics to the core system. Labels can be used to organize and select subsets of objects. They do not need to be unique across multiple objects. This section is optional.
Labels added in this section will apply to all Couchbase server containers created in this cluster.
Note that by default the Operator will add labels to each pod that may override those stated by this field.
User-supplied labels should not use the couchbase.com key namespace.
For more information, see the Kubernetes documentation about labels.
Field rules: The value of this field cannot be changed after a server has been defined.
spec.servers[].pod.metadata.annotations
Annotations are similar to labels but cannot be used for resource selection.
User supplied annotations should not use the couchbase.com
key namespace.
For more information, see the Kubernetes documentation about annotations.
Field rules: The value of this field must be a map of key value pairs. This field is optional and immutable.
spec.servers[].pod.spec.nodeSelector
This field specifies a key-value map of the constraints on node placement for pods. For a pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). If this section is not specified, then Kubernetes will place the pod on any available node.
For more information on isolating your Couchbase clusters from other workloads, see the pod scheduling concepts document. For more information on node selectors, see the Kubernetes documentation about label selectors.
Field rules: This field is optional and must be a Kubernetes node selector object.
spec.servers[].pod.spec.tolerations
This field specifies the node taints (conditions that normally prevent pod scheduling or execution) that should be tolerated when scheduling a pod.
From the sample configuration file referenced in this topic, any node with a taint app=cbapp:NoSchedule
would not usually allow the scheduler to start running any new pods upon it.
You may wish to taint nodes, marking them as dedicated to running Couchbase Server, so that other application pods do not run on the same node and cause interference.
In order for Couchbase pods to be scheduled on a tainted node, the pod must tolerate all of that node's taints.
For more information on isolating your Couchbase clusters from other workloads, see the pod scheduling concepts document. For more information about tolerations, see the Kubernetes documentation on taints and tolerations.
Field rules: This field is optional and must be a list of Kubernetes toleration objects.
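As a sketch, a node carrying the taint from the example above would have the following in its specification; the node name dedicated-couchbase-node is hypothetical, and in practice taints are usually applied with the kubectl taint command rather than by editing the Node resource directly:
apiVersion: v1
kind: Node
metadata:
  name: dedicated-couchbase-node
spec:
  taints:
  - key: app
    value: cbapp
    effect: NoSchedule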
spec.servers[].pod.spec.priorityClassName
This field allows Couchbase server pods to run with an increased priority over other pods. When a higher-priority pod cannot be scheduled on any node, the scheduler may evict lower-priority pods from a node to accommodate its resource requirements.
For more information, see the Kubernetes documentation on priority classes and preemption.
Field rules: This field is optional and must be the name of an existing
PriorityClass
resource.
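For reference, a minimal sketch of a PriorityClass resource that this field could name; high-priority matches the pod example above, while the value and description are illustrative assumptions:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Priority class for Couchbase Server pods."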
spec.servers[].pod.spec.imagePullSecrets
Pods created by the Operator need to have permission to access the container images they run. Normally Kubernetes will use public images hosted on Docker Hub that do not require authentication and authorization. When using private repositories, especially on Red Hat OpenShift, you will need to specify credentials to allow container image access.
This can be achieved in a number of different ways:
- Associating an image pull secret with the default service account. This may be a security risk as it will automatically apply to any containers created in the namespace.
- Using the spec.servers[].pod.spec.imagePullSecrets attribute to explicitly specify secrets that are used for credentials.
- Using the spec.servers[].pod.spec.serviceAccountName attribute to run the pods as a specified service account. This service account may implicitly be associated with secrets that are used for credentials.
For additional information, see the Kubernetes documentation.
Field rules: The value of this field is optional and must be a Kubernetes image pull secrets list.
spec.servers[].pod.spec.automountServiceAccountToken
By default, Kubernetes mounts a secret into all containers in a pod.
This allows a process running in a container to authenticate against the Kubernetes API using the provided token.
This behavior may be disabled, for security reasons, by setting this field to false.
If not specified, this field defaults to true.
Field rules: The
automountServiceAccountToken
field is optional and must be a boolean.
spec.servers[].pod.spec.serviceAccountName
Pods created by the Operator need to have permission to access the container images they run. Normally Kubernetes will use public images hosted on Docker Hub that do not require authentication and authorization. When using private repositories, especially on Red Hat OpenShift, you will need to specify credentials to allow container image access.
This can be achieved in a number of different ways:
- Associating an image pull secret with the default service account. This may be a security risk as it will automatically apply to any containers created in the namespace.
- Using the spec.servers[].pod.spec.imagePullSecrets attribute to explicitly specify secrets that are used for credentials.
- Using the spec.servers[].pod.spec.serviceAccountName attribute to run the pods as a specified service account. This service account may implicitly be associated with secrets that are used for credentials.
For additional information, see the Kubernetes documentation.
Field rules: The
serviceAccountName
field is optional and must be a string.
spec.servers[].pod.spec.dnsPolicy
This field defines how Kubernetes populates the DNS resolver configuration of Couchbase Server pods.
By default, when not specified, pods will use the built-in cluster DNS server.
This field allows you to inhibit the population of DNS configuration by Kubernetes.
The only valid value is None.
For additional information on using custom DNS servers, see the Couchbase networking concepts documentation. For additional information see the Kubernetes documentation.
Field rules: The
dnsPolicy
field is optional and must be a string.
spec.servers[].pod.spec.dnsConfig
This object defines DNS configuration in addition to that defined by Kubernetes under the control of the DNS policy.
The nameservers
sub-attribute is a list of DNS name server IP addresses.
The searches
sub-attribute is a list of DNS search domains.
For additional information on using custom DNS servers, see the Couchbase networking concepts documentation. For additional information see the Kubernetes documentation.
Field rules: The
dnsConfig
field is optional and must be a valid Kubernetes DNS configuration object.
spec.buckets
This object defines whether to manage buckets and how to select which bucket resources to use.
spec:
  buckets:
    managed: true
    selector:
      matchLabels:
        cluster: cb-example
spec.buckets.managed
This field specifies whether the Operator should manage Couchbase buckets.
This field defaults to false
allowing the user to manually manage bucket settings with the Couchbase web console or the Couchbase API.
If set to true
, the Operator will search for CouchbaseBucket
, CouchbaseEphemeralBucket
and CouchbaseMemcachedBucket
resources in the namespace under the control of the spec.buckets.selector
field.
Field rules: This field is optional and must be a boolean.
spec.buckets.selector
This field specifies which CouchbaseBucket
, CouchbaseEphemeralBucket
and CouchbaseMemcachedBucket
resources are considered when the spec.buckets.managed
field is set to true
.
If not specified all CouchbaseBucket
, CouchbaseEphemeralBucket
and CouchbaseMemcachedBucket
resources are selected.
If specified, then only CouchbaseBucket
, CouchbaseEphemeralBucket
and CouchbaseMemcachedBucket
resources containing the same labels as specified in the label selector are selected.
For additional information, see the Couchbase resource label selection documentation.
Field rules: This field is optional and must be a Kubernetes
LabelSelector
object.
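As a sketch, a CouchbaseBucket resource that would be matched by the selector in the example above; the bucket name and memory quota are illustrative assumptions:
apiVersion: couchbase.com/v2
kind: CouchbaseBucket
metadata:
  name: default
  labels:
    cluster: cb-example
spec:
  memoryQuota: 128Mi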
spec.xdcr
This object defines whether to manage XDCR, any remote clusters and replications to them.
As the Couchbase Server API does not expose XDCR passwords or TLS certificates, these cannot be automatically rotated by the Operator. To rotate them, either manually update the remote cluster, or delete and recreate it.
spec:
  xdcr:
    managed: true
    remoteClusters:
    - name: remote
      uuid: 611e50b21e333a56e3d6d3570309d7e3
      hostname: 10.1.0.2:31851
      authenticationSecret: my-xdcr-secret
      tls:
        secret: my-xdcr-tls-secret
      replications:
        selector:
          matchLabels:
            cluster: cb-example
spec.xdcr.managed
This field defines whether the Operator should manage XDCR remote clusters and replications.
If set to true
the Operator will assume full control of all XDCR primitives.
Any existing XDCR remote clusters and replications will be deleted by the Operator unless matching configuration is provided.
This field defaults to false
if not set.
Field rules: This field is optional and must be a boolean.
spec.xdcr.remoteClusters[].name
This field specifies a textual name for the remote cluster. The name must be unique within the Couchbase cluster.
Field rules: This field is required and must be a string.
spec.xdcr.remoteClusters[].uuid
This field specifies the remote cluster UUID.
Every Couchbase cluster generates a unique cluster ID.
The UUID of a remote cluster managed by the Operator is available in the CouchbaseCluster
resource in the status.clusterId
field.
Field rules: This field is required and must be a Couchbase UUID string.
spec.xdcr.remoteClusters[].hostname
This field specifies the hostname and port of the remote cluster.
Field rules: This field is required and must be a valid DNS domain name and admin port (either plain text or TLS).
spec.xdcr.remoteClusters[].authenticationSecret
This field specifies a secret used for basic authentication against the remote cluster.
This field is only relevant if not using client certificate authentication.
If specified the authentication secret must refer to a Kubernetes secret containing the keys username
and password
.
The user on the remote cluster must have the correct roles to allow XDCR to write to the target bucket(s).
Field rules: This field is optional and must be a string.
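As a sketch, the referenced secret could be created as follows; the secret name matches the example above, while the username and password values are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: my-xdcr-secret
type: Opaque
stringData:
  username: remote-admin
  password: changeme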
spec.xdcr.remoteClusters[].tls.secret
This field defines any TLS configuration required to establish a connection with the remote cluster.
This field references a Kubernetes secret containing remote cluster TLS information.
If replications to the remote cluster are to be encrypted then at least the CA certificate must be provided.
The CA certificate is specified by the key ca
.
If the remote cluster requires client certificate authentication then the certificate
and key
keys are used to specify the client certificate chain and client certificate’s private key respectively.
Client certificate authentication cannot be used with basic authentication.
Field rules: This field is optional and must be a string.
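As a sketch, assuming encrypted replications with client certificate authentication, the referenced secret could look as follows; the angle-bracket values are placeholders for base64-encoded PEM data, and the certificate and key entries may be omitted when only the CA is required:
apiVersion: v1
kind: Secret
metadata:
  name: my-xdcr-tls-secret
type: Opaque
data:
  ca: <base64-encoded CA certificate>
  certificate: <base64-encoded client certificate chain>
  key: <base64-encoded client private key>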
spec.xdcr.remoteClusters[].replications[]
This field is a list of replications from the local cluster to the remote cluster.
spec.xdcr.remoteClusters[].replications[].selector
This field defines a label selector to choose which CouchbaseReplication
resources should be included for this remote cluster.
Field rules: This field is optional and must be a Kubernetes label selector.
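As a sketch, a CouchbaseReplication resource that would be matched by the selector from the example above; the resource name and bucket names are illustrative assumptions:
apiVersion: couchbase.com/v2
kind: CouchbaseReplication
metadata:
  name: replicate-default
  labels:
    cluster: cb-example
spec:
  bucket: default
  remoteBucket: default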
spec.backup
This object defines parameters and variables for automated backup.
spec:
  backup:
    managed: true
    image: couchbase/operator-backup:6.6.0
    serviceAccountName: couchbase-backup
    s3Secret: s3-secret
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    selector:
      matchLabels:
        cluster: cb-example
    nodeSelector:
      class: couchbase-backup
    tolerations:
    - key: couchbase-backup
      operator: Exists
      effect: NoSchedule
spec.backup.managed
This field defines whether the Automated Backup feature is enabled for the cluster.
Field rules: This field is optional and must be a boolean value.
spec.backup.image
This field defines the path to the backup utility image that the Autonomous Operator will run to perform a backup or restore.
If left unspecified, the dynamic admission controller will automatically populate it with the most recent container image that was available when the installed version of the Autonomous Operator was released. The default image for open source Kubernetes comes from Docker Hub, and the default image for OpenShift comes from the Red Hat Container Catalog.
If pulling directly from the Red Hat Container Catalog, then the path will be something similar to registry.connect.redhat.com/couchbase/operator-backup:6.6.0-1
(you can refer to the catalog for the most recent images).
If image pull secrets are required to access the image, they are inherited from the Couchbase Server Pod and can be set explicitly with the spec.servers[].pod.spec.imagePullSecrets
field or implicitly with a service account specified with the spec.servers[].pod.spec.serviceAccountName
field.
Field rules: This field is optional and must be a valid path to a valid container image.
spec.backup.serviceAccountName
This field specifies the name of a Kubernetes ServiceAccount
that gives the backup and restore Pods the necessary permissions to read and create the required backup objects (running pods, jobs, cron jobs, events, etc.).
If left unspecified, the dynamic admission controller will automatically default to the value of couchbase-backup
.
(This is the default name given to the ServiceAccount when you run cbopcfg as part of the automated backup setup procedure.)
Field rules: This field is optional and must be the name of a valid Kubernetes Service Account.
spec.backup.s3Secret
This field specifies the name of a Kubernetes Secret
that gives the backup and restore Pods the necessary credentials to query buckets in AWS S3.
If left unspecified, CouchbaseBackup
and CouchbaseBackupRestore
objects that reference a S3 bucket in their specification will fail due to missing credentials.
Field rules: This field is optional and must be the name of a valid Kubernetes
Secret
.
spec.backup.resources
This field defines resource requests and limits for the pods that perform backup operations.
Some environments may require these fields to be explicitly set when using Kubernetes LimitRange
resources without namespace defaults.
Field rules: This field is optional and must be a valid Kubernetes resource descriptor.
spec.backup.selector
This field defines a label selector that chooses which CouchbaseBackup and CouchbaseBackupRestore resources are considered when the spec.backup.managed field is set to true.
If not specified all CouchbaseBackup
and CouchbaseBackupRestore
resources are selected.
If specified, then only CouchbaseBackup
and CouchbaseBackupRestore
resources containing the same labels as specified in the label selector are selected.
For additional information, see the Couchbase resource label selection documentation.
Field rules: This field is optional and must be a Kubernetes label selector.
spec.backup.nodeSelector
This field defines affinity rules for the scheduling of CouchbaseBackup
and CouchbaseBackupRestore
resources.
This field is formatted as per the Kubernetes node selectors documentation.
A node selector is a set of key/value pairs that a Kubernetes Node must be labeled with in order for backup and restore jobs to be scheduled onto it.
Field rules: This field is optional and must be a Kubernetes node selector.
spec.backup.tolerations
This field defines toleration of anti-affinity for the scheduling of CouchbaseBackup
and CouchbaseBackupRestore
resources.
This field is formatted as per the Kubernetes taints and tolerations documentation. Taints provide anti-affinity rules, making pods unschedulable on nodes carrying the taint. Tolerations allow backup and restore jobs to tolerate those taints and be scheduled onto such nodes, which are then guaranteed not to be running other pods.
Field rules: This field is optional and must be a list of Kubernetes tolerations.
spec.monitoring
This object defines any integration with third party monitoring software.
spec:
  monitoring:
    prometheus:
      enabled: true
      image: couchbase/exporter:1.0.3
      authorizationSecret: cb-metrics-token
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
spec.monitoring.prometheus.enabled
This field defines whether Prometheus metric collection is enabled for the cluster.
Field rules: This field is optional and must be a boolean value.
spec.monitoring.prometheus.image
This field defines the path to the Prometheus Exporter image that the Autonomous Operator will inject as a "sidecar" container in each Couchbase Server pod.
If left unspecified, the dynamic admission controller will automatically default to the most recent container image from Docker Hub (open source Kubernetes) or Red Hat Container Catalog (OpenShift) that was available when the installed version of the Autonomous Operator was released. Refer to Configure Prometheus Metrics Collection for more detailed instructions.
Field rules: This field is optional and must be a valid path to a valid container image.
spec.monitoring.prometheus.authorizationSecret
This field specifies the name of a Kubernetes Secret
that should contain a bearer token value that clients use to gain access to the Prometheus metrics.
Field rules: This field is optional and must be the name of a valid Kubernetes
Secret
, with a key of "token".
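As a sketch, the referenced secret could be created as follows; the secret name matches the example above, and the token value is a placeholder:
apiVersion: v1
kind: Secret
metadata:
  name: cb-metrics-token
type: Opaque
stringData:
  token: <bearer token value>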
spec.monitoring.prometheus.resources
This field defines resource requests and limits for the Prometheus Exporter "sidecar" container.
Some environments may require these fields to be explicitly set when using Kubernetes LimitRange
resources without namespace defaults.
Field rules: This field is optional and must be a valid Kubernetes resource descriptor.
spec.volumeClaimTemplates[]
This list defines templates of persistent volume claims.
At run time, the Operator will create a persistent volume from these templates for each pod referencing them.
Claims can request volumes from various types of storage systems as identified by the storage class name.
Volume claim templates are based on the PersistentVolumeClaim
Kubernetes type.
spec:
  volumeClaimTemplates:
  - metadata:
      name: couchbase
      labels:
        my-label: my-value
      annotations:
        my-annotation: my-value
    spec:
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
spec.volumeClaimTemplates[].metadata.name
The metadata name identifies the claim template. Entries in volumeMounts reference this name to select which template fulfills the mount request.
spec.volumeClaimTemplates[].metadata.labels
If set, this attribute defines a set of labels to be merged into persistent volume claims generated by this template. The Operator reserves the right to overwrite any label as necessary for operation.
spec.volumeClaimTemplates[].metadata.annotations
If set, this attribute defines a set of annotations to be merged into persistent volume claims generated by this template. The Operator reserves the right to overwrite any annotation as necessary for operation.
spec.volumeClaimTemplates[].spec.storageClassName
The storageClassName
is optional for a claim. A storageClassName
provides a way for administrators to describe the classes of storage they offer. If no storageClassName
is specified, then the default storage class is used.
Refer to the Kubernetes documentation for more information about storage classes.