Guidelines and Best Practices


    The Couchbase Autonomous Operator makes deploying Couchbase Server incredibly simple. However, some external influences and configurations can cause issues. This topic outlines deployment best practices that help you avoid the most common pitfalls.

    Pod Scheduling

    Pod scheduling details how to deploy your Couchbase Server clusters so that they consistently maintain high performance and suffer minimal disruption in the case of failure.

    Noisy Neighbors

    The noisy neighbor problem occurs when multiple virtual machines (VMs) run on the same hypervisor and one VM adversely affects the others. This effect may result in any of the following conditions:

    CPU Starvation

    A highly CPU-intensive neighbor consumes more than its fair share of CPU resources. This results in fewer cycles allocated to the affected pod, higher latencies when processing data and network traffic, and increased cache latencies as the noisy neighbor evicts your entries from shared CPU caches.

    Memory Starvation

    There are two common memory-starvation scenarios. In the cloud, a hypervisor will typically run without swap space to page out less-used memory pages; when memory is exhausted, the operating system has no choice but to select a process that is using too much memory and terminate it. When the hypervisor runs on physical hardware with a swap file, performance suffers as memory pages are transferred to and from a storage device: memory accesses page-fault while pages are swapped back in, raising latencies and degrading I/O performance.

    Disk I/O Starvation

    A storage device has a finite number of input/output operations it can perform per second (IOPS). For a spinning disk this is typically on the order of 100 IOPS; for solid state storage it is far higher, on the order of tens of thousands of IOPS. If a noisy neighbor is using a high proportion of that capacity, then an application will see slower reads and writes to and from the file system. In a cloud architecture, where a storage volume may be attached via a network protocol (e.g. iSCSI), this also generates increased network load.

    Network Starvation

    Like disk I/O, the network has only a limited capacity. Modern data center network interface cards typically handle 10 Gbps of bandwidth. The problem, however, is not limited to the local machine: top-of-rack leaf switches are often over-subscribed by a ratio of 6:1 (or higher), so a network-intensive process may be affected by a process on a completely separate hypervisor.

    To mitigate as many of these factors as possible, it is recommended that you deploy a Couchbase Server cluster on a set of Kubernetes nodes that are kept separate from other workloads. The Kubernetes cluster administrator should be responsible for labeling nodes as belonging to a specific application type. Developers can then use the spec.servers.pod.spec.nodeSelector configuration property to control which nodes Couchbase pods are scheduled onto. Take care to ensure that other pods (e.g. stateless web applications) are not scheduled on the nodes allocated for use by Couchbase.

    Likewise, it is recommended that you deploy Couchbase Server clusters with anti-affinity enabled so that cluster pods are not scheduled on the same nodes. This can be achieved with the spec.antiAffinity configuration property.

    Anti-affinity is only scoped to a specific cluster. In order to prevent pods from one cluster interfering with another, you should use a separate node label per Couchbase cluster.
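    For illustration, the following is a minimal sketch of both settings on a CouchbaseCluster resource. The node label app: couchbase-cluster-1 is an assumed per-cluster convention that the cluster administrator must apply to the dedicated nodes, and the apiVersion may differ between Operator releases:

    apiVersion: couchbase.com/v2
    kind: CouchbaseCluster
    metadata:
      name: cb-example
    spec:
      # Never schedule two pods from this cluster onto the same node
      antiAffinity: true
      servers:
      - name: data
        size: 3
        services:
        - data
        pod:
          spec:
            # Only schedule onto nodes labeled for this specific cluster,
            # e.g. kubectl label nodes <node> app=couchbase-cluster-1
            nodeSelector:
              app: couchbase-cluster-1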

    Server Group Configuration

    The server group feature of the Couchbase data platform allows you to group Couchbase Server nodes in such a way that data or index replicas are distributed over failure domains larger than a single server (e.g. rack, data center, availability zone). When server groups are defined, the Server Group Awareness feature distributes vBuckets on a best-effort basis, while ensuring each instance is under an equal load. As such, under certain circumstances, it is possible for multiple replicas to be co-located in the same server group.

    It is recommended that server groups have an equal number of pods per server class, as this makes it simpler to maintain server group constraints. For the same reason, it is also recommended that you have more server groups than replicas.
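    As a sketch, assuming a deployment across three availability zones (the zone names below are placeholders, and the exact mapping of spec.serverGroups onto node zone labels depends on your Operator release), a balanced configuration with more server groups than replicas might look like this:

    spec:
      serverGroups:      # one Couchbase server group per availability zone
      - us-east-1a
      - us-east-1b
      - us-east-1c
      servers:
      - name: data
        size: 3          # equal pod count per server group
        services:
        - data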

    Storage

    Storage options may have both beneficial and detrimental impacts on your clusters.

    Persistent Volumes

    Persistent volumes are mandatory for production deployments. They allow quick recovery of a failed pod by warming up from existing data, and quicker back-filling via delta recovery rather than a full rebalance. With some of the pod's file system persisted, logging data can be extracted from failed pods, where it would normally be lost with a fully ephemeral file system. Persistent volumes also aid in data recovery situations that cannot be rectified by the Operator.

    Table 1. Supportability Matrix
    Deployment Type       | Services                 | Required Volume Mounts | Recoverable? | Supportable?
    ----------------------+--------------------------+------------------------+--------------+-------------
    Production Stateful   | any                      | default                | Yes          | Yes
    Production Stateless  | query, search, eventing  | logs                   | No           | Yes
    Evaluation            | any                      | none                   | No           | No

    Production Stateful

    These deployments require the default volume mount to be defined. This mount encompasses all data generated by Couchbase Server at run time, including logs. Clusters using this deployment method can recover failed pods by reusing the persistent data on the default volume. The data, index, and analytics services must be deployed with this method to preserve data in the event of a total power failure.

    Production Stateless

    These deployments require the logs volume mount to be defined. This mount covers the Couchbase Server logs only, so pods that fail cannot be recovered; however, the logs can still be accessed and downloaded with the support tool. The query, search, and eventing services may be deployed with this method, provided they are not running alongside a stateful service. Logs are only retained in the event of a pod failure; they will be automatically deleted after a user-initiated action, e.g. scaling the cluster down.

    Evaluation

    These deployments do not use any volume mounts, and will not survive a total power failure. As Couchbase Server logs cannot be recovered in the event of a pod failure, we cannot provide support for the Couchbase Server product in this configuration.

    Production clusters may now be deployed as a combination of Production Stateful and Production Stateless node deployments, depending on the services running on the individual server class configurations. The default and logs mounts in the CouchbaseCluster configuration are mutually exclusive for any given server class. If any server class is detected by the cluster validation to be supportable (e.g. default or logs is specified under volumeMounts), then all other server classes must also have supportable configurations in order for the validation check to pass.
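    For illustration, a sketch of such a mixed deployment might look like the following, with a stateful data server class using the default mount and a stateless query class using the logs mount. The claim template name, storage class, and sizes are assumptions:

    spec:
      volumeClaimTemplates:
      - metadata:
          name: couchbase
        spec:
          storageClassName: standard   # assumed storage class name
          resources:
            requests:
              storage: 100Gi
      servers:
      - name: data                      # Production Stateful
        size: 3
        services:
        - data
        volumeMounts:
          default: couchbase            # persist all runtime data and logs
      - name: query                     # Production Stateless
        size: 2
        services:
        - query
        volumeMounts:
          logs: couchbase               # persist logs only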

    It is also highly recommended that, in cloud deployments, storage be located in the same availability zone as the pods in that server group. If storage were randomly distributed across a virtual private cloud and one availability zone were to suffer an outage, you would lose not only the pods resident in that zone, but also any pods whose backing storage was provided by it.

    Note that there will be a slight reduction in performance, and an increase in latency, when using a network-attached storage backend.

    Security

    Keeping your data safe is of paramount importance. This section contains some best practices that you should follow to keep your deployment secure.

    Cluster Scope

    When you deploy a cluster, you do so in a namespace. The Operator needs to be given a fairly broad set of permissions in order to dynamically create pods, services, and other resources. It is recommended that the Operator, and the clusters it manages, be segregated in their own namespace. This limits the set of resources that the Operator can affect, and that support tools can collect data about, which aids in security compliance situations where confidential information might otherwise be leaked via Kubernetes API operations.
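    For example, you might create a dedicated namespace and deploy the Operator into it. The operator.yaml filename here is hypothetical; use the manifest supplied with your Operator release:

    $ kubectl create namespace couchbase
    $ kubectl apply --namespace couchbase -f operator.yaml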

    Node Configuration

    How your Kubernetes nodes are configured may also have a detrimental impact on your ability to deploy a Couchbase Server cluster. The following settings are not officially supported, but are recommended for cluster administrators.

    Node Services

    It is recommended that the following services be running on all Kubernetes nodes.

    NTP

    Like all distributed systems, Couchbase Server needs to have synchronized time in order to resolve conflicts. Newer systems should have NTP enabled by default if running systemd-timesyncd. Older systems should run ntpd or similar and can be automated via an orchestration tool such as Puppet or Ansible.
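    On systemd-based hosts, for example, time synchronization can be enabled and verified with timedatectl:

    $ sudo timedatectl set-ntp true
    $ timedatectl status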

    Kernel Parameters

    Couchbase Server has some recommended kernel parameters that should be set. This can be done via an orchestration solution or a Kubernetes DaemonSet resource.

    /sys/kernel/mm/transparent_hugepage/enabled

    Transparent huge pages (THP) attempt to allocate large areas of contiguous memory so that applications which process data sequentially, or that have high data locality, do so without incurring a high number of kernel page faults. For a database such as Couchbase, this actually has a detrimental effect, since its access patterns are typically highly random. It is highly recommended that this setting be set to never. Note that THP is controlled through sysfs rather than sysctl, hence the file paths used here.

    /sys/kernel/mm/transparent_hugepage/defrag

    Related to the enabled setting above, it is highly recommended that this setting also be set to never.
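    You can verify the current values on a node by reading the sysfs files directly; the active setting is shown in brackets:

    $ cat /sys/kernel/mm/transparent_hugepage/enabled
    always madvise [never]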

    Couchbase Server also recommends that vm.swappiness be set to zero. Kubernetes does not allow swap in containers, so this parameter can be safely ignored.

    For example, you could run something similar to the following:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: couchbase-sysctls
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          name: couchbase-sysctls
      template:
        metadata:
          labels:
            name: couchbase-sysctls
        spec:
          # Apply to nodes marked as belonging to a couchbase cluster only
          nodeSelector:
            app: couchbase-cluster
          containers:
          - name: couchbase-sysctls
            image: busybox:latest
            # Privileged access is required to write to the host's sysfs
            securityContext:
              privileged: true
            # Constantly reconcile the state to revert any changes
            command:
            - /bin/sh
            - -c
            - |
              set -o xtrace
              while true
              do
                echo never > /sys/kernel/mm/transparent_hugepage/enabled
                echo never > /sys/kernel/mm/transparent_hugepage/defrag
                sleep 60
              done
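
    Save the manifest to a file (the name here is arbitrary) and apply it:

    $ kubectl apply -f couchbase-sysctls.yaml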

    Ulimits

    These ulimit settings are necessary when running under heavy load. If you are just doing light testing and development, you can omit these settings, and everything will still work.

    The Couchbase pod ulimits are inherited from the default ulimits set on the Docker daemon running on each Kubernetes node. Configuration of this daemon varies according to the distribution's service management system.

    Systemd

    Systemd-compatible hosts can configure ulimits for the daemon by editing its service configuration file. Run the following command to display the location of the configuration file:

    $ sudo systemctl cat docker
    # /lib/systemd/system/docker.service

    Add the following values to the [Service] section of the service file:

    LimitNOFILE=40960
    LimitCORE=infinity
    LimitMEMLOCK=infinity

    Reload the new configuration and restart the Docker daemon:

    $ sudo systemctl daemon-reload
    $ sudo systemctl restart docker
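
    To check that new containers inherit the updated defaults, you can inspect the limits from inside a throwaway container. The output shown assumes the values above were applied:

    $ docker run --rm busybox:latest sh -c "ulimit -n"
    40960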

    Upstart

    Upstart-compatible hosts can configure ulimits for the Docker daemon by editing the docker.conf file:

    $ vi /etc/init/docker.conf

    Add the following values to the service configuration:

    limit nofile 40960
    limit core unlimited unlimited
    limit memlock unlimited unlimited

    Restart the Docker service to apply the new ulimits to the daemon:

    $ sudo /etc/init.d/docker restart