

    This section defines the key terms and concepts used in the Couchbase Server architecture documentation.

    Node

    A single Couchbase Server instance running on a physical server, virtual machine, or container. All nodes are identical: they consist of the same components and services and provide the same interfaces.


    Cluster

    A cluster is a collection of nodes that are accessed and managed as a single group. Each node is an equal partner in orchestrating the cluster, providing facilities such as operational information (monitoring) and managing the cluster membership and health of nodes.

    Clusters are scalable. You can expand a cluster by adding new nodes and shrink a cluster by removing nodes.

    The Cluster Manager is the main component that orchestrates the cluster level operations. For more information, see Cluster Manager.


    Bucket

    A bucket is a logical container for a related set of items, such as key-value pairs or documents. Buckets are similar to databases in relational database systems. They provide a resource management facility for the group of data that they contain. Applications can use one or more buckets to store their data. Through configuration, buckets provide segregation along the following boundaries:

    • Cache and I/O management

    • Authentication

    • Replication and Cross Datacenter Replication (XDCR)

    • Indexing and Views


    Item

    An item is the basic unit of data in Couchbase Server. An item is a key-value pair, where each stored value is identified by a unique key within the bucket.

    This differs from relational databases, which group data into tables. Tables have a strict schema (a set of columns), and data is stored as rows in those tables.

    Values for an item can be anything from a single bit, to a decimal measurement, to a JSON document. Storing data as JSON documents allows Couchbase Server to provide extended features such as indexing and querying. Items are also referred to as documents, objects, or key-value pairs.
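    As an illustration only (this is not the Couchbase SDK API), an item can be pictured as a unique key paired with a JSON value; the `bucket`, `upsert`, and `get` names below are hypothetical stand-ins:

```python
import json

# Toy stand-in for a bucket: a map from unique keys to stored values.
# (Illustrative only; not the Couchbase SDK API.)
bucket = {}

def upsert(key, value):
    """Store a value under a unique key; JSON values enable indexing and querying."""
    bucket[key] = json.dumps(value)

def get(key):
    """Fetch and decode the value stored under the key."""
    return json.loads(bucket[key])

upsert("user::1001", {"name": "Ada", "roles": ["admin"]})
```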


    vBucket

    vBuckets are physical partitions of the bucket data. By default, Couchbase Server creates a fixed number of active vBuckets per bucket (typically 1024) to store the bucket data. Buckets may store redundant copies of data, called replicas; each replica has its own set of vBuckets that mirror the active vBuckets. The vBuckets that maintain replica data are called replica vBuckets. Every bucket has its own set of active and replica vBuckets, and those vBuckets are evenly distributed across all nodes within the data service.
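    Clients locate data by hashing each key to a vBucket id. A minimal sketch of the CRC32-based mapping commonly used by Couchbase client libraries follows; the exact shift-and-mask detail is an assumption based on libvbucket-style hashing and should be treated as illustrative:

```python
import zlib

NUM_VBUCKETS = 1024  # typical number of active vBuckets per bucket

def vbucket_id(key: str, num_vbuckets: int = NUM_VBUCKETS) -> int:
    # CRC32-hash the key, then reduce the hash into the vBucket range.
    # num_vbuckets is a power of two, so "& (num_vbuckets - 1)" acts as a modulo.
    crc = zlib.crc32(key.encode("utf-8")) & 0xFFFFFFFF
    return (crc >> 16) & (num_vbuckets - 1)
```

    The mapping is deterministic: the same key always hashes to the same vBucket, so any client can find the right partition without coordinating with other clients.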

    Cluster map

    The cluster map contains a mapping of which services belong to which nodes at a given point in time. This map exists on all Couchbase nodes, as well as within every instantiation of the client SDK. Through this map, the application can transparently identify the cluster topology and respond when that topology changes. The cluster map also contains a vBucket map.

    vBucket map

    A vBucket map contains a mapping of vBuckets to nodes at a given point in time. This map exists on all Couchbase nodes, as well as within every instantiation of the client SDK. Through this map, the application can transparently identify the nodes that host the vBuckets for a given key and respond when the topology changes.
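    A vBucket map can be pictured as a table whose rows are vBucket ids and whose entries name the active and replica locations. The node names and three-vBucket layout below are hypothetical, chosen only to keep the sketch small:

```python
# Toy vBucket map: row index = vBucket id, entries index into `servers`.
servers = ["node-a", "node-b", "node-c"]
vbucket_map = [
    [0, 1],  # vBucket 0: active on node-a, replica on node-b
    [1, 2],  # vBucket 1: active on node-b, replica on node-c
    [2, 0],  # vBucket 2: active on node-c, replica on node-a
]

def node_for(vb_id: int, replica: bool = False) -> str:
    """Look up which node holds the active (or first replica) copy of a vBucket."""
    return servers[vbucket_map[vb_id][1 if replica else 0]]
```

    Combined with the key-to-vBucket hash, this lets a client route any request directly to the node that owns the data.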


    Replication

    Replication is the process of creating additional copies of active data on alternate nodes. Replication is at the heart of the Couchbase Server architecture, enabling high availability, disaster recovery, and data exchange with other big data products. It is the core enabler for:

    • Moving data between nodes to maintain replicas

    • Geo-distribution of data with Cross Datacenter Replication (XDCR)

    • Queries with incremental map-reduce and spatial views

    • Backups with full or incremental snapshots of data

    • Integration with Hadoop, Kafka, and Lucene-based text search engines such as Solr

    For more information about replication, see High availability and replication architecture.


    Rebalance

    The topology of a cluster changes as nodes are added or removed, whether due to capacity requirements or node failures. As the set of nodes changes, the rebalance operation redistributes the load to adapt to the new topology. At its core, a rebalance for the data service is the incremental movement of vBuckets from one node to another; by moving vBuckets onto or off of a node, that node becomes responsible for more or less data and begins handling more or less application traffic. A rebalance also brings nodes into, or takes them out of, the various services.

    While a rebalance is in progress, the cluster map on all clients is updated with any topology changes. The Cluster Manager coordinates the movement and hand-off of vBuckets and services during the rebalance. Rebalance is performed completely online, with minimal impact on the incoming workload.
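    The incremental vBucket movement at the heart of a data-service rebalance can be sketched as a diff between the current layout and a target layout. The round-robin target placement below is a toy assumption; the real Cluster Manager computes placements that minimize data movement:

```python
def rebalance_moves(assignment, nodes):
    """Sketch: list the vBucket moves needed when the node set changes.

    assignment: {vbucket_id: node_name} describing the current layout.
    nodes: the new list of nodes after adding or removing members.
    Toy target layout: vBucket i belongs on nodes[i % len(nodes)].
    """
    moves = []
    for vb in sorted(assignment):
        target = nodes[vb % len(nodes)]
        if assignment[vb] != target:
            moves.append((vb, assignment[vb], target))  # (vBucket, from, to)
    return moves

# Two nodes hold four vBuckets; a third node joins the cluster:
before = {0: "node-a", 1: "node-b", 2: "node-a", 3: "node-b"}
moves = rebalance_moves(before, ["node-a", "node-b", "node-c"])
```

    Only the vBuckets whose placement differs from the target are moved; the rest stay put, which is what makes the operation incremental.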


    Failover

    Failover is the process of diverting traffic away from failing nodes to the remaining healthy nodes. Failover can be performed automatically by the Couchbase cluster, based on the health status of a node, or manually by an administrator or an external script. A node that has been failed over does not accept any new traffic.

    Graceful failover

    Graceful failover is the proactive ability to remove a Data Service node from the cluster in an orderly and controlled fashion. It is an online operation with zero downtime, achieved by promoting replica vBuckets on the remaining cluster nodes to active, and marking the active vBuckets on the node being failed over as dead. This type of failover is primarily used for planned maintenance of the cluster.

    Hard failover

    Hard failover is the ability to drop a node from the cluster quickly when it has become unavailable or unstable. It is achieved by promoting replica vBuckets on the remaining cluster nodes to active. Hard failover is primarily used when there is an unplanned outage of a node in the cluster.
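    The replica promotion behind a hard failover can be sketched over a toy vBucket-map representation, where each row is a pair of [active, replica] indices into a server list and -1 marks a copy that must later be rebuilt; all names and the layout are hypothetical:

```python
def hard_failover(vbucket_map, servers, failed):
    """Sketch: drop the `failed` node and promote its replica copies to active.

    vbucket_map rows are [active_index, replica_index] into `servers`;
    -1 marks a missing copy (toy representation, not the wire format).
    """
    failed_idx = servers.index(failed)
    new_map = []
    for active, replica in vbucket_map:
        if active == failed_idx:
            # The active copy was lost: promote the replica to active.
            new_map.append([replica, -1])
        elif replica == failed_idx:
            # The replica copy was lost: keep the active, rebuild the replica later.
            new_map.append([active, -1])
        else:
            new_map.append([active, replica])
    return new_map

servers = ["node-a", "node-b", "node-c"]
vb_map = [[0, 1], [1, 2], [2, 0]]
after = hard_failover(vb_map, servers, "node-b")
```

    Because promotion only rewrites the map, the data already resident on the surviving nodes starts serving traffic immediately; restoring full redundancy requires a later rebalance.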

    Automatic failover

    Automatic failover is the built-in ability of the Cluster Manager to detect that a node has become unavailable and then initiate a hard failover.

    Node lifecycle

    As the cluster topology changes, nodes in the cluster go through a set of state transitions. Operations such as Add Node, Remove Node, Rebalance, and Failover cause state transitions. The following diagram lists the states and state transitions of the nodes in the cluster.

    Figure 1. Node lifecycle