Durability

Data durability refers to the fault tolerance and persistence of data in the face of software or hardware failure. Even the most reliable software and hardware can fail at some point, and any failure introduces the risk of data loss.

Couchbase’s architecture guards against most forms of failure and protects against data loss. Buckets can be configured for replication, which creates additional, redundant copies of the data and tolerates the loss of individual copies: as long as the data is available somewhere, it is not lost.

Data is also written to disk, so in the case of a power outage or software crash, it can be retrieved from disk during recovery.

By default, Couchbase does not immediately replicate or write data to disk. When an item is modified, Couchbase first stores the modification in memory on the node that is active for the given item, and then places the item in a disk queue and a replication queue. Each queue is served by its own thread: one persists the change to disk, while the other replicates it to any replicas within the cluster. This design improves performance, because modifying an item in memory is typically very quick, whereas sending an item over the network or writing it to disk may take longer. In most cases items are replicated or persisted within a few milliseconds, although these operations may take longer if the storage medium is slow or the network is congested.
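
To make this concrete, a plain mutation with no durability options returns as soon as the active node has accepted the change into memory. The sketch below is illustrative only; it assumes the Python SDK 2.x Bucket API, a local cluster, and a bucket named default (hypothetical names).

    # A minimal sketch, assuming the Python SDK 2.x API and a local
    # cluster with a bucket named "default".
    from couchbase.bucket import Bucket

    bucket = Bucket('couchbase://localhost/default')

    # With no durability options, upsert() returns once the active node
    # has stored the change in memory; persistence and replication then
    # proceed asynchronously from the queues described above.
    bucket.upsert('user:1234', {'name': 'Jane'})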

If a node fails, any modifications in its queues are lost, and when it recovers it will contain only the modifications that had already been stored to disk. If the failed node is failed over, a replica node (more precisely, a replica for the given vBucket) takes over, but it knows only about the modifications that were successfully replicated to it: any modifications still in the failed node’s replication queue are lost.

Durability requirements

As mentioned before, modifications to items are performed synchronously in memory and asynchronously on disk and on the network. The synchronous modification in memory involves modifying the item and sending a success response to the client. This tells your application that the item has been successfully stored or modified, and allows it to continue operating.

Applications may specify durability requirements to instruct the Couchbase client to wait until the item has also been persisted and replicated. This ensures greater durability of your data against failure in certain edge cases, such as a slow network or storage medium. Operations performed with durability requirements will appear as failures to the application if the modification could not be successfully stored to disk and/or replicated to replica nodes. Upon receiving such a failure response, the application may retry the operation or mark it as a failure. In any event, the application will never be under the mistaken assumption that the modification was performed when in fact it was lost inside one of the aforementioned queues.
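
Upon such a failure, a common pattern is to retry the mutation a bounded number of times before reporting an error. The sketch below is illustrative, assuming the Python SDK 2.x API; CouchbaseError is that SDK’s base exception class, and the key, document, and retry count are hypothetical.

    from couchbase.bucket import Bucket
    from couchbase.exceptions import CouchbaseError

    bucket = Bucket('couchbase://localhost/default')

    def durable_upsert(key, doc, attempts=3):
        # Fail the operation unless the change is persisted on the
        # active node and replicated to at least one replica.
        for attempt in range(attempts):
            try:
                return bucket.upsert(key, doc,
                                     persist_to=1, replicate_to=1)
            except CouchbaseError:
                if attempt == attempts - 1:
                    raise  # give up and surface the failure

    durable_upsert('user:1234', {'name': 'Jane'})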

Configuring durability in SDKs

Durability requirements are typically exposed as an extra option to mutation operations (such as upsert). The level of durability is configurable depending on the needs of the application.

The level of durability may be expressed as two values:

  1. The replication factor of the operation: how many replicas must this operation be propagated to. This is specified as the ReplicateTo option in most SDKs.

  2. The persistence of the modification: how many persisted copies of the modified record must exist. Often found as a PersistTo option.

Examples of these settings include the following (see the code sketch after this list):

  • Ensure the modification is stored (on disk) on the active (master) node.

  • Ensure the modification exists (in memory) on at least one replica.

  • Ensure the modification is stored (on disk) on at least one replica.

  • Ensure the modification exists (in memory) on all replicas.

  • Ensure the modification is stored (on disk) on the active node, and exists (in memory) on all replicas.

  • Ensure the modification is stored (on disk) on the active node, and on all replicas.
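
Expressed as SDK options, the settings above might look like the following sketch. It is illustrative only, assuming the Python SDK 2.x keyword arguments and a bucket configured with two replicas; the key and document are hypothetical.

    from couchbase.bucket import Bucket

    bucket = Bucket('couchbase://localhost/default')
    key, doc = 'user:1234', {'name': 'Jane'}

    # Stored (on disk) on the active (master) node only:
    bucket.upsert(key, doc, persist_to=1)

    # Exists (in memory) on at least one replica:
    bucket.upsert(key, doc, replicate_to=1)

    # Stored (on disk) on the active node and at least one replica:
    bucket.upsert(key, doc, persist_to=2)

    # Exists (in memory) on all (here, two) replicas:
    bucket.upsert(key, doc, replicate_to=2)

    # Stored on the active node, and in memory on all replicas:
    bucket.upsert(key, doc, persist_to=1, replicate_to=2)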

When an operation is propagated to a replica, it first reaches the replica’s memory and is then placed in a queue (as above) to be persisted to the replica’s disk. You can control whether the operation need only reach a replica’s memory (ReplicateTo) or must also be fully persisted to the replica’s disk (PersistTo).

PersistTo receives the total number of nodes on which the operation must be persisted. The maximum value is the number of replica nodes plus one. A value of 1 indicates master-only persistence (and no replica persistence).

ReplicateTo receives the total number of replica nodes to which the operation must be replicated. Its maximum value is the number of replica nodes.
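
With a bucket configured for two replicas, for example, persist_to may be at most 3 (the active node plus two replicas) and replicate_to at most 2. A short sketch of these bounds, under the same Python SDK 2.x assumptions as above:

    # Assuming num_replicas = 2 on the bucket:
    #   persist_to   may be 0..3 (replicas + 1; 1 = active node only)
    #   replicate_to may be 0..2 (at most the number of replicas)
    bucket.upsert(key, doc, persist_to=3, replicate_to=2)  # strongest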

Performance considerations

Note that Couchbase’s asynchronous persistence and replication is a deliberate design choice: the time it takes to store an item in memory is several orders of magnitude shorter than the time it takes to persist it to disk or send it over the network. Operation times and latencies will increase when the client must wait on the speed of the disk and the speed of the network.

A common requirement is that an operation be propagated to at least one additional location before the application continues. The additional copy can be either in the master’s storage (master persistence) or in the memory of a replica node. Depending on your environment, either persistence or replication may be quicker: replication will be quicker on a 10 GbE network with commodity magnetic drives, while persistence will be quicker on a 1 GbE network with SSD drives.

Durability requirements may also cause increased network traffic, since they are implemented at the client level by sending special probes to each server once the actual modification has completed.

Setting durability requirements in SDKs

All SDKs (except C) expose durability requirements through persist_to and replicate_to options. Below is an example showing their use.

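The following sketch assumes the Python SDK 2.x API; other SDKs expose equivalent options (such as ReplicateTo and PersistTo enumerations) on their mutation methods, and the key and document shown are hypothetical.

    from couchbase.bucket import Bucket

    bucket = Bucket('couchbase://localhost/default')

    # Wait until the item is persisted on the active node and
    # replicated to (the memory of) at least one replica.
    bucket.upsert('user:1234', {'name': 'Jane'},
                  persist_to=1, replicate_to=1)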