Deployment Considerations for Virtual Machines and Containers

      Virtualized platforms such as VMware, AWS/Azure/GCP, and Docker (containers) are popular ways of achieving hardware scalability to complement Couchbase Server’s software scalability.

      When deploying Couchbase Server on a virtualized platform, there are some additional factors to take into account.

      Avoid a Single Point of Failure

      Couchbase Server’s resilience and high availability are achieved by creating a cluster of independent nodes and replicating data between them, so that the failure of any individual node does not cause loss of access to your data. In a virtualized environment, if you run multiple nodes on the same host or physical hardware, you inadvertently re-introduce a single point of failure. In environments where you control virtual machine (VM) placement, it is recommended that each Couchbase Server node runs on a different piece of physical hardware.
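
      For example, if each physical host runs its own Docker engine with the remote API enabled, you can point the Docker CLI at a different host for each node with the -H flag. This is an illustrative sketch only; the host names and port are placeholders:

      # Start one Couchbase Server node per physical Docker host so that no two
      # cluster nodes share the same hardware.
      docker -H tcp://host1.example.com:2376 run -d --name cb-node1 couchbase
      docker -H tcp://host2.example.com:2376 run -d --name cb-node2 couchbase
      docker -H tcp://host3.example.com:2376 run -d --name cb-node3 couchbase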

      Sizing a Virtualized Deployment

      The performance characteristics of physical hardware are well understood. Even though VMs insert a lightweight layer between Couchbase Server and the underlying OS, there is still a small overhead when running Couchbase Server on a virtual platform.

      For stability and better performance predictability, you should dedicate at least two CPU cores to a VM in development environments, and four CPU cores to a VM in production.
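
      For example, when running under Docker you can pin the container to a fixed set of host cores rather than letting it float across whatever happens to be free. This is a minimal sketch; the core range and container name are placeholders:

      # Pin the Couchbase Server container to four dedicated host cores (0-3).
      docker run -d --name db --cpuset-cpus="0-3" couchbase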

      Disk performance is also an important factor, and there are many different storage technologies to choose from. The general rule of thumb is to make sure that the disk has sufficient throughput to handle the necessary CRUD operations, as well as any indexing, compaction, and backup activities that will be required.
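
      One way to sanity-check throughput before going into production is to run a synthetic random-write test against the data directory. The sketch below assumes the fio tool is installed on the host; the path, block size, and run time are illustrative only:

      # Measure sustained random-write throughput on the Couchbase data volume.
      fio --name=cb-disk-check --directory=/opt/couchbase/var \
          --rw=randwrite --bs=4k --size=1g --numjobs=4 \
          --time_based --runtime=60 --group_reporting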

      One of the major benefits of virtualization is the ability to share physical resources and even over-commit them (meaning each virtual instance thinks it has more CPU, RAM, or disk space than is actually physically available). However, in an over-committed environment, virtual instances can end up competing with each other, causing unpredictable performance and sometimes stability issues.

      For Couchbase Server, physical resources should be dedicated to the VM, rather than shared across multiple VMs. For more information about hardware requirements, see System Resource Requirements.
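
      In a container deployment, you can approximate dedicated hardware by giving the Couchbase Server container explicit CPU and memory limits instead of letting it contend with its neighbors. A sketch, assuming a recent Docker version (the sizes are placeholders):

      # Give the container four CPUs and 16 GB of RAM; setting --memory-swap equal
      # to --memory prevents the container from using swap.
      docker run -d --name db --cpus=4 --memory=16g --memory-swap=16g couchbase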

      Auto-Failover Threshold

      Because the VM runs on top of a hypervisor or container engine, there is a minor CPU performance overhead. Depending on how CPU-intensive your workload is, you may wish to change the auto-failover threshold from the default of 120 seconds to a different value. See Automatic Failover.
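
      The timeout can be changed through the Couchbase Server REST API, as in the sketch below, which raises it to 180 seconds (the credentials, host, and value are placeholders):

      # Raise the auto-failover timeout from the default 120 seconds to 180 seconds.
      curl -u Administrator:password -X POST \
           http://localhost:8091/settings/autoFailover \
           -d 'enabled=true' -d 'timeout=180'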

      Live Migration

      Some virtualization environments allow VMs to be migrated between physical nodes and/or between storage backends. Moving the VM can cause small pauses and network disruption, which may affect the scheduler and in turn trigger a failover. Changing the backend storage may affect disk queues, compaction, or indexing, which can have a knock-on effect on general performance. Therefore, it is recommended that you use Couchbase’s built-in rebalance mechanism for maintenance.

      If it is absolutely necessary to perform a migration, then disable auto-failover beforehand and be prepared for a performance impact during the migration. Pausing, resuming, and snapshotting virtual machines can also have the same effect. These actions should only be performed on a Couchbase Server node which has been removed from the cluster.
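
      For example, auto-failover can be switched off through the same REST endpoint before the migration and switched back on afterwards (credentials, host, and timeout are placeholders):

      # Disable auto-failover for the duration of the migration...
      curl -u Administrator:password -X POST \
           http://localhost:8091/settings/autoFailover -d 'enabled=false'

      # ...and re-enable it once the migration has completed.
      curl -u Administrator:password -X POST \
           http://localhost:8091/settings/autoFailover \
           -d 'enabled=true' -d 'timeout=120'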

      Additional Considerations for Containers

      This section lists the additional considerations that are applicable only to containers.

      Map Couchbase Node Specific Data to a Local Folder

      A Couchbase Server container writes all persistent and node-specific data to /opt/couchbase/var by default. When using docker run, it is recommended to map this directory to a directory on the host file system with the -v option, for both persistence and performance.

      By mapping the directory /opt/couchbase/var to a directory outside the container with the -v option, you can delete the container and recreate it later without losing your data from Couchbase Server. You can even update to a container running a later release/version of Couchbase Server without losing your data.

      In a standard Docker environment using a union file system, leaving /opt/couchbase/var inside the container results in some amount of performance degradation.
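
      For example, the following maps a ~/couchbase directory on the host to the container’s /opt/couchbase/var (the directory and container names are illustrative):

      # Persist node data on the host so the container can be deleted and recreated
      # without losing data.
      docker run -d --name db -v ~/couchbase:/opt/couchbase/var \
          -p 8091-8096:8091-8096 -p 11210-11211:11210-11211 couchbase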

      If you have SELinux enabled, mounting the host volumes in a container requires an extra step. Assuming you’re mounting the ~/couchbase directory on the host file system, you need to run the following command once before running your first container on that host:

      # Create the host directory and label it so that SELinux allows container access.
      mkdir ~/couchbase && chcon -Rt svirt_sandbox_file_t ~/couchbase

      Increase ULIMIT in Production Deployments

      Couchbase Server normally expects the following changes to ulimits:

      ulimit -n 40960        # nofile: max number of open files
      ulimit -c unlimited    # core: max core file size
      ulimit -l unlimited    # memlock: maximum locked-in-memory address space

      These ulimit settings are necessary when running under heavy load. If you are just doing light testing and development, you can omit these settings, and everything will still work. To set the ulimits in your container, you need to run Couchbase Docker containers with the following additional --ulimit flags:

      docker run -d --ulimit nofile=40960:40960 \
          --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 \
          --name db -p 8091-8096:8091-8096 -p 11210-11211:11210-11211 couchbase

      Since unlimited is not supported as a --ulimit value, the example sets the core and memlock values to 100 GB. If your system has more than 100 GB of RAM, increase this value to match the available RAM on the system.

      The --ulimit flags only work on Docker 1.6 or later.
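
      To confirm that the limits have been applied, you can inspect them from inside the running container; this assumes the container name db from the example above:

      # Print the effective limits inside the container; nofile should report 40960.
      docker exec db sh -c 'ulimit -a'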