Couchbase Sizing

      When planning your cluster deployments, you need to be aware of the resources that the Operator requires to ensure stable operation. You also need to be aware of how memory allocation will affect database performance.

      Kubernetes Sizing

      Automated Cluster Recovery

      One of the key features of the Operator is its ability to automatically recover from a pod failure or Couchbase Server crash.

      When using persistent volume backing storage with the couchbaseclusters.spec.servers.volumeMounts.default mount, the Operator will recreate a pod in the event of a failure. This involves deleting the old pod and replacing it with a new one that reuses the existing storage volumes. Not only is this form of recovery faster, because it can use delta-node recovery, but it also does not require additional Kubernetes resources to re-provision the pod.
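      As an illustrative sketch only, a server class backed by a persistent default volume might look like the following CouchbaseCluster fragment. The resource name, storage class, and sizes are assumptions for the example, not recommendations, and the apiVersion may differ between Operator releases.

          apiVersion: couchbase.com/v2
          kind: CouchbaseCluster
          metadata:
            name: cb-example
          spec:
            servers:
            - name: data
              size: 3
              services:
              - data
              # Reusing this volume on pod recreation is what enables delta-node recovery
              volumeMounts:
                default: couchbase
            volumeClaimTemplates:
            - metadata:
                name: couchbase
              spec:
                storageClassName: standard
                resources:
                  requests:
                    storage: 100Gi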

      When using ephemeral storage, however, you need to be aware that additional resources may be required. For example, a cluster configured with three replicas can tolerate three sequential Couchbase Server crashes and fail-overs before data would be lost. To recover in this situation, the Operator creates three new pods, rebalances data onto them, and then deletes the old ones. You would therefore potentially need an additional three Kubernetes nodes to accommodate the new pods.
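      For Operator-managed buckets, the replica count referred to above is configured on the bucket resource. A minimal sketch, assuming the couchbasebuckets.spec.replicas field and a bucket named default:

          apiVersion: couchbase.com/v2
          kind: CouchbaseBucket
          metadata:
            name: default
          spec:
            # Three replica copies: tolerates three sequential fail-overs before data loss
            replicas: 3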

      You will also need extra Kubernetes node capacity if you use the couchbaseclusters.spec.antiAffinity attribute, or if your couchbaseclusters.spec.servers.resources scheduling hints are large enough to necessitate a new node for each pod.
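      A minimal sketch of both settings is shown below; the resource figures are assumptions for the example. With anti-affinity enabled, each pod must land on its own Kubernetes node, and the resource requests determine whether an existing node has room for a replacement pod.

          spec:
            # Schedule at most one Couchbase Server pod per Kubernetes node
            antiAffinity: true
            servers:
            - name: data
              size: 3
              services:
              - data
              resources:
                requests:
                  cpu: "4"
                  memory: 16Gi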

      Rolling Upgrade

      Upgrade covers not only changing the Couchbase Server version, but also any modification of the cluster that affects Couchbase Server pods. For example, this includes enabling TLS, adding scheduling constraints, and so on.
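      For example, editing the cluster's image field (the version numbers here are purely illustrative) causes the Operator to replace pods one at a time:

          spec:
            # Changing the server version triggers a rolling upgrade of every pod,
            # as would enabling TLS or changing scheduling constraints
            image: couchbase/server:7.2.4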

      Upgrade uses the standard rolling-upgrade strategy, so each iteration requires at most one extra node. This is potentially additive with the number of pods that can be failed over, as discussed previously.

      Server Class Replacement

      It is possible to replace one named server class with another in the couchbaseclusters.spec.servers list. This allows wholesale replacement of servers outside of the rolling-upgrade mechanism. For example, this can be used to migrate from a homogeneous topology to one that takes advantage of the multi-dimensional scaling feature of Couchbase Server.

      During this process you will need to ensure that there are sufficient resources for the new pods to be scheduled before the cluster is rebalanced, the old pods are deleted, and their resources reclaimed.
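      As a sketch of such a replacement, assuming an original homogeneous class named all_services, you might remove it and add separate data and query classes in a single update; the Operator then creates the new pods, rebalances, and removes the old ones. The class names and sizes are placeholders.

          spec:
            servers:
            # The previous homogeneous class is removed from the list:
            # - name: all_services
            #   size: 3
            #   services: [data, index, query]
            - name: data
              size: 3
              services:
              - data
            - name: query
              size: 2
              services:
              - index
              - query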

      Kubernetes Rolling Upgrade

      Managed cloud platforms may feature one-click Kubernetes upgrade in order to make new features available or to patch security vulnerabilities. To prevent unconstrained deletion of Couchbase Server pods, the Operator controls this process with pod readiness checks and pod disruption budgets. Like rolling upgrade, the pod disruption budget only allows a single pod to be deleted at a time, so only a single additional Kubernetes node needs to be available to schedule the replacement pod onto.
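      The pod disruption budget is created and managed by the Operator itself, so you never write one by hand. Purely for illustration, its effect is equivalent to a standard Kubernetes resource along the following lines; the name and label selector shown here are assumptions.

          apiVersion: policy/v1
          kind: PodDisruptionBudget
          metadata:
            name: cb-example
          spec:
            # Only one Couchbase Server pod may be evicted at any time
            maxUnavailable: 1
            selector:
              matchLabels:
                couchbase_cluster: cb-example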

      Server Groups

      When using the couchbaseclusters.spec.serverGroups feature, the Operator schedules pods equally across each of the defined availability zones. A number of Couchbase Server crashes in one zone will require sufficient capacity in that zone to replace the pods and restore balance. Therefore, when using server groups, any additional required capacity needs to be replicated across all availability zones.
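      A sketch, assuming three availability zones (the zone names are placeholders), might look like the following. With six pods spread two per zone, spare node capacity has to exist in every zone, not just in the cluster as a whole.

          spec:
            antiAffinity: true
            # Pods are distributed equally across the named availability zones
            serverGroups:
            - us-east-1a
            - us-east-1b
            - us-east-1c
            servers:
            - name: data
              size: 6
              services:
              - data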

      Couchbase Sizing

      Couchbase Server is intended to be a high-performance in-memory database. It is therefore critical to determine your memory requirements before deployment as they will have a bearing on both performance and consistency.

      For example, if your data set size is 1 GiB and you have 2 replicas, then you will need approximately 3 GiB of memory in order to contain the entire working set in memory and yield the highest possible performance. This in turn affects how the cluster is configured. If the couchbaseclusters.spec.cluster.dataServiceMemoryQuota parameter were set to 256Mi for your cluster, you would require 12 instances to host your data set in memory.
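      That worked example corresponds to a configuration fragment along these lines; the arithmetic is shown in the comments, and the server class name is an assumption.

          spec:
            cluster:
              # Per-pod Data Service memory quota
              dataServiceMemoryQuota: 256Mi
            servers:
            - name: data
              # 1 GiB active data x (1 active + 2 replicas) = 3 GiB = 3072 MiB
              # 3072 MiB / 256 MiB per pod = 12 pods
              size: 12
              services:
              - data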

      Persistent storage is orders of magnitude less expensive than memory, so you may wish to adjust the amount of memory required. Your business workload may be able to function adequately with 20% memory residency, that is, for every 5 documents only 1 is resident in memory. It may be the case that those 20% of documents are used substantially more than the others and therefore gain the most from having low latencies. Other data may be used so infrequently that a disk fetch is inconsequential as part of the whole business process. Referring back to our previous example, a 1 GiB data set with 2 replicas and 20% residency requires only about 614 MiB of memory, and therefore three 256 MiB data nodes.
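      The reduced-residency variant of the same sketch only changes the pod count, again with the arithmetic in the comments.

          spec:
            cluster:
              dataServiceMemoryQuota: 256Mi
            servers:
            - name: data
              # 3072 MiB x 20% residency ≈ 614 MiB
              # 614 MiB / 256 MiB per pod ≈ 2.4, rounded up to 3 pods
              size: 3
              services:
              - data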