Guidelines and Best Practices
The Couchbase Autonomous Operator makes deploying Couchbase Server incredibly simple. However, there are some external influences and configurations that can cause issues. This topic outlines some of the deployment best practices that can help you avoid some of the most common pitfalls.
Pod scheduling details how to deploy your Couchbase Server clusters to consistently maintain high performance, and also minimize disruption in the case of failure.
The noisy neighbor problem occurs when multiple VMs are running on the same hypervisor, and one virtual machine (VM) adversely affects the other. This effect may result in any of the following conditions:
- CPU Starvation
A highly CPU-intensive neighbor consumes more than its fair share of CPU resources, which results in fewer cycles allocated to the affected pod, higher latencies when processing data and network traffic, and increased cache latencies as the noisy neighbor flushes your entries to slower storage.
- Memory Starvation
There are two common scenarios when considering memory starvation. In the cloud, a hypervisor VM will typically run without swap space to page out less-used memory pages. The result is that when memory is starved, the operating system has no other choice but to choose a process that is using too much memory and terminate it. When the hypervisor is on physical hardware with a swap file, performance will suffer as memory pages are transferred to and from a storage device, resulting in slower computation as memory accesses page-fault and the memory is swapped back in, higher latencies, and degraded I/O performance.
- Disk I/O Starvation
A storage device has a finite number of input or output operations it can perform per second. Typically for a spinning disk this is in the order of 100 IOPs. For solid state storage, this is far higher — in the order of 10,000s of IOPs. If a noisy neighbor is utilizing a high proportion of that capacity, then an application will see slower reads and writes to and from the file system. In a cloud architecture, where a storage volume may be attached via a network protocol (e.g. iSCSI), you also generate increased network load.
- Network Starvation
Like disk I/O, the network only has a limited capacity. Modern data center network interface cards will typically handle 10 Gbps of bandwidth. The problem is however not limited to just the local machine, as top-of-rack leaf switches are often over-subscribed by a ratio of 6:1 (or higher), so a network-intensive process may be affected by a process on a completely separate hypervisor.
To mitigate as many of these factors as possible, it is recommended that you deploy a Couchbase Server cluster on a set of Kubernetes nodes that are separate from other processes. The Kubernetes cluster administrator should be responsible for labeling nodes as belonging to a specific application type. Developers may then use the spec.servers.pod.spec.nodeSelector configuration property to control which nodes Couchbase pods are scheduled onto. Take care to ensure that other pods (e.g. stateless web applications) are not scheduled on the nodes allocated for use by Couchbase.
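As an illustrative sketch (the cluster name, label key, and values here are assumptions for this example, not Operator defaults), a CouchbaseCluster might pin its pods to labeled nodes like this:

```yaml
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  servers:
  - name: data
    size: 3
    services:
    - data
    pod:
      spec:
        # Only schedule onto nodes the administrator has labeled for Couchbase
        nodeSelector:
          instance-type: couchbase
```

The corresponding label could be applied by the cluster administrator with, for example, kubectl label node <node-name> instance-type=couchbase.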
Likewise, it is recommended that you deploy Couchbase Server clusters with anti-affinity enabled so that no two pods from the same cluster are scheduled on the same node. This can be achieved with the spec.antiAffinity configuration property.
|Anti-affinity is only scoped to a specific cluster. In order to prevent pods from one cluster interfering with another, you should use a separate node label per Couchbase cluster.
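A minimal sketch of enabling this property (the cluster name is illustrative):

```yaml
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  # Prevent two pods from this cluster being co-located on one Kubernetes node
  antiAffinity: true
```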
The server group feature of the Couchbase data platform allows you to group Couchbase Server nodes in such a way that data or index replicas are distributed over failure domains larger than a single server (e.g. rack, data center, or availability zone). When server groups are defined, the Server Group Awareness feature distributes vBuckets on a best-effort basis while ensuring each instance is under equal load. As such, under certain circumstances it's possible for multiple replicas to be co-located in the same server group.
It is recommended that server groups have an equal number of pods per server class, which makes it simpler to maintain server group constraints. For the same reason, it is also recommended that you have more server groups than replicas.
For more information, refer to About Using Couchbase Server Groups With the Operator.
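For illustration, assuming Kubernetes nodes are spread across availability zones, server groups might be declared as follows (the zone names and sizes are assumptions for this sketch):

```yaml
spec:
  # Failure domains to spread pods (and their replicas) across
  serverGroups:
  - us-east-1a
  - us-east-1b
  - us-east-1c
  servers:
  - name: data
    # An equal number of pods per server class: one per server group here
    size: 3
    services:
    - data
```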
Storage options may have both beneficial and detrimental impacts on your clusters.
Persistent volumes are recommended for production deployments. This allows for the quick recovery of a failed pod by warming up from existing data and allowing quicker back-filling via delta-recovery, rather than a full rebalance.
By having some of the pod’s file system persisted, logging data can be extracted from failed pods where this would normally be lost with a fully ephemeral file system. It also aids in data recovery situations which cannot be rectified by the Operator.
|Required Volume Mount(s)
- Production Stateful
These deployments require the default volume mount to be defined. This volume encompasses all data generated by Couchbase Server at run time, as well as its logs. Pods using this deployment method can recover from failure by reusing the persistent data on the default volume.
The analytics service must be deployed with this method to preserve data in the event of a total power failure.
The Operator technically allows the search service to be deployed in a stateless configuration as defined below. However, it is strongly encouraged that you deploy the search service in a stateful configuration; otherwise, the failure or eviction of enough search nodes will cause the service to become unresponsive for considerably longer, because without a volume mount all of the Full Text Indexes will need to be completely rebuilt.
- Production Stateless
These deployments require the logs volume mount to be defined. This covers the Couchbase Server logs only, so pods that fail cannot be recovered; however, their logs can still be accessed and downloaded with the support tool. The eventing service may be deployed with this method if it is not running alongside a stateful service. Logs are only retained in the event of a pod failure; they are deleted automatically when a pod is removed by a user-initiated action, e.g. scaling the cluster down.
Production clusters can also be deployed as a combination of Production Stateful and Production Stateless node deployments, depending on the services running on the individual server class configurations.
The default and logs volume mounts in the CouchbaseCluster configuration are mutually exclusive within a server class. If any server class is detected by the cluster validation to be supportable (e.g. logs is specified under volumeMounts), then all other server classes must also have supportable configurations in order for the validation check to pass.
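A combined stateful/stateless cluster sketch might look like the following (the server class names, sizes, storage class, and volume size are assumptions for this example):

```yaml
spec:
  servers:
  - name: data
    size: 3
    services:
    - data
    volumeMounts:
      default: couchbase   # Production Stateful: persists all runtime data and logs
  - name: eventing
    size: 2
    services:
    - eventing
    volumeMounts:
      logs: couchbase      # Production Stateless: persists logs only
  volumeClaimTemplates:
  - metadata:
      name: couchbase
    spec:
      storageClassName: standard
      resources:
        requests:
          storage: 10Gi
```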
It is also highly recommended that, in cloud deployments, storage be located in the same availability zone as the pods running in that server group. If storage were randomly distributed across a virtual private cloud and one availability zone suffered an outage, you would lose not only the pods resident in that zone, but also any pods whose backing storage is provided by it.
When using a network-based storage backend, expect a slight reduction in performance and a slight increase in latency.
In some use cases, such as an in-memory cache, it is undesirable to have to provision persistent volumes in order to deploy a cluster. In other situations it may be impossible, for example if the platform does not support dynamic persistent volume provisioning.
The Operator provides the option to run clusters with ephemeral storage only.
In this mode of operation, the Operator can recover a partially down cluster (where Couchbase Server is unable to automatically fail over the pods) by forcing the down pods out of the cluster with a forced failover.
The Operator will also fully rebuild a cluster that has had all pods deleted.
Due to pods being ephemeral in nature, it is highly likely that data will be unrecoverable when a Couchbase server node goes down, so there is little risk in using a forced failover. Ephemeral clusters favor caching use-cases where the data can be repopulated by clients and does not need to be persisted.
Since fully-ephemeral Couchbase clusters only use ephemeral storage, Couchbase Server logs are highly likely to be unavailable in the event of a crash. This can make supporting an ephemeral cluster particularly difficult, and it is recommended that you exercise caution when using this type of deployment.
Starting with version 2.2, the Autonomous Operator supports forwarding Couchbase Server logs. However, the current implementation requires the use of a (persistent) volume.
Keeping your data safe is of paramount importance. This section contains some best practices that you should follow to keep your deployment secure.
When you deploy a cluster, you do so in a namespace. The Operator needs to be given a fairly broad set of permissions in order to dynamically create pods, services, etc. It is recommended that the Operator, and the clusters that it manages, be segregated in their own namespace. This limits the set of resources that the Operator can affect, and that support tools can collect data about, which aids security compliance in situations where confidential information might otherwise be leaked via Kubernetes API operations.
How your Kubernetes nodes are configured may also have a detrimental impact on your ability to deploy a Couchbase Server cluster. The following settings are not officially supported by the Operator, but are recommended for cluster administrators.
It is recommended that the following services be running on all Kubernetes nodes.
Like all distributed systems, Couchbase Server needs synchronized time in order to resolve conflicts. Newer systems should have NTP enabled by default via systemd-timesyncd. Older systems should run ntpd or similar, which can be automated via an orchestration tool such as Puppet or Ansible.
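If you automate this with Ansible, tasks along these lines could ensure time synchronization is in place (the chrony package and service names are an example; they vary by distribution):

```yaml
# Illustrative Ansible tasks; adjust package/service names for your distribution
- name: Ensure chrony is installed
  ansible.builtin.package:
    name: chrony
    state: present

- name: Ensure the time synchronization daemon is running
  ansible.builtin.service:
    name: chronyd
    state: started
    enabled: true
```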
Couchbase Server has some recommended kernel parameters which should be set. This can be done via an orchestration solution or a Kubernetes DaemonSet.
Transparent huge pages (THP) attempt to allocate large areas of contiguous memory so that applications which process data sequentially, or that have high data locality, do so without causing a high number of kernel page faults. For a database such as Couchbase Server, whose access patterns are typically highly random, THP actually has a detrimental effect, so it is highly recommended that both the enabled and defrag settings be set to never. Note that THP is controlled via /sys/kernel/mm/transparent_hugepage rather than a sysctl parameter. Couchbase Server also recommends that vm.swappiness be set to 0 so that the kernel avoids swapping memory pages to disk.
For example, you could run something similar to the following:
# Apply to nodes marked as belonging to a couchbase cluster only
- name: couchbase-sysctls
# Run as root on the host system
# Constantly reconcile the state to revert any changes
set -o xtrace
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
sysctl -w vm.swappiness=0
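The snippet above appears to be a fragment of a Kubernetes DaemonSet. A fuller sketch, with the container image, node label, and reconciliation interval as assumptions, might look like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: couchbase-sysctls
spec:
  selector:
    matchLabels:
      app: couchbase-sysctls
  template:
    metadata:
      labels:
        app: couchbase-sysctls
    spec:
      # Apply to nodes marked as belonging to a Couchbase cluster only
      nodeSelector:
        instance-type: couchbase
      containers:
      - name: couchbase-sysctls
        image: busybox
        securityContext:
          privileged: true   # required to modify host kernel settings
        command:
        - sh
        - -c
        - |
          set -o xtrace
          # Constantly reconcile the state to revert any changes
          while true; do
            echo never > /sys/kernel/mm/transparent_hugepage/enabled
            echo never > /sys/kernel/mm/transparent_hugepage/defrag
            sysctl -w vm.swappiness=0
            sleep 60
          done
```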
These ulimit settings are necessary when running under heavy load. If you are just doing light testing and development, you can omit these settings, and everything will still work.
Red Hat OpenShift 4.0 and above sets a low default limit on the maximum number of process IDs in a cgroup; under heavy load this limit may need to be raised for Couchbase Server pods.
The Couchbase Pod ulimits are inherited from the default ulimits set on the container runtime daemon on each Kubernetes node. Configuration of this daemon varies according to the container runtime daemon used by the distribution and its service management system.
Systemd-compatible hosts can configure ulimits for the daemon by editing its service configuration file.
Run the following command to detect the location of the configuration file for the Docker container runtime:
$ sudo systemctl cat docker
Add the following values to the service file:

[Service]
LimitNOFILE=40960
LimitCORE=infinity
LimitMEMLOCK=infinity
Reload the new configuration and restart the docker daemon:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
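Alternatively, rather than editing the packaged unit file in place, a systemd drop-in override keeps the change across package upgrades. A sketch of such an override (the file path follows the standard systemd drop-in convention):

```ini
# /etc/systemd/system/docker.service.d/ulimits.conf
[Service]
LimitNOFILE=40960
LimitCORE=infinity
LimitMEMLOCK=infinity
```

After creating the file, reload systemd and restart the Docker daemon as shown above.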
Upstart-compatible hosts can configure ulimits for the Docker container runtime by editing the /etc/init/docker.conf service configuration file:
$ vi /etc/init/docker.conf
Add the following values to the service configuration:
limit nofile 40960
limit core unlimited unlimited
limit memlock unlimited unlimited
Restart the docker service to apply ulimits to the daemon:
$ sudo /etc/init.d/docker restart