
      Couchbase Server provides persistence: certain items are stored on disk as well as in memory, and reliability is thereby enhanced.

      Understanding Couchbase Storage

      Couchbase Server stores certain items in compressed form on disk and, whenever required, removes them from memory. This allows data-sets to exceed the size permitted by existing memory-resources, since undeleted items not currently in memory can be restored to memory from disk, as needed. It also facilitates backup-and-restore procedures.

      Generally, a client’s interactions with the server are not blocked during disk-access procedures. However, if a specific item is being restored from disk to memory, that item is not made available to the client until its restoration is complete.

      Not all items are written to disk: Ephemeral buckets and their items are maintained in memory only. See Buckets for information.

      Items written to disk are always written in compressed form. Based on bucket configuration, items may be maintained in compressed form in memory also. See Compression for information.
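      The two behaviours just described (always-compressed on disk, optionally compressed in memory) can be sketched as follows. This is an illustrative model only: Couchbase Server uses Snappy compression internally, but zlib stands in here because it ships with Python's standard library, and the function names are invented for the sketch.

```python
import zlib

# Illustrative sketch only: Couchbase Server uses Snappy internally;
# zlib is a stdlib stand-in, and these function names are hypothetical.

def store_on_disk(value: bytes) -> bytes:
    """Items written to disk are always written in compressed form."""
    return zlib.compress(value)

def keep_in_memory(value: bytes, compressed_in_memory: bool) -> bytes:
    """Depending on bucket configuration, an item may also be kept
    compressed in memory, or held uncompressed for faster access."""
    return zlib.compress(value) if compressed_in_memory else value

doc = b'{"type": "airline", "name": "sample"}' * 20
on_disk = store_on_disk(doc)
assert zlib.decompress(on_disk) == doc   # compression is lossless
assert len(on_disk) < len(doc)           # repetitive JSON compresses well
```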

      Items can be removed from disk based on a configured point of expiration, referred to as Time-To-Live. See Expiration for information.

      For illustrations of how Couchbase Server saves new Couchbase-bucket items and updates existing ones, employing both memory and storage resources, see Memory and Storage.

      Threading
      Synchronized, multi-threaded readers and writers provide simultaneous, high-performance operations for data on disk. Conflicts are avoided by assigning each thread (reader or writer) a specific subset of the 1024 vBuckets for each Couchbase bucket.
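      The conflict-avoidance scheme above can be sketched as follows. The key-hashing detail is simplified (Couchbase clients use a CRC32-based hash to select a vBucket), and the function names are invented for illustration.

```python
import zlib

NUM_VBUCKETS = 1024  # fixed count per Couchbase bucket

def vbucket_for_key(key: str) -> int:
    # Couchbase clients hash each key with CRC32 to choose its vBucket;
    # the exact bit manipulation is simplified in this sketch.
    return zlib.crc32(key.encode()) % NUM_VBUCKETS

def thread_for_vbucket(vbucket_id: int, num_threads: int) -> int:
    # Each thread owns a fixed, disjoint subset of the 1024 vBuckets,
    # so no two threads ever operate on the same vBucket concurrently.
    return vbucket_id % num_threads

# With, say, 8 writer threads, every vBucket has exactly one owner:
owners = {vb: thread_for_vbucket(vb, 8) for vb in range(NUM_VBUCKETS)}
assert len(owners) == NUM_VBUCKETS
assert set(owners.values()) == set(range(8))
```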

      Couchbase Server allows the number of threads allocated per node for reading and writing to be configured by the administrator. The maximum thread-allocation for each is 64; the minimum is 4.

      A high thread-allocation may improve performance on systems whose hardware-resources are commensurately supportive (for example, where the number of CPU cores is high). In particular, a high number of writer threads on such systems may significantly optimize the performance of durable writes: see Durability, for information.

      Note, however, that a high thread-allocation might impair some aspects of performance on less appropriately resourced nodes. Consequently, changes to the default thread-allocation should not be made to production systems without prior testing. A starting-point for experimentation is to set the numbers of reader threads and writer threads each equal to the queue depth of the underlying I/O subsystem.
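      The starting-point just described, combined with the permitted range, amounts to the following arithmetic; the function name is hypothetical.

```python
# Permitted range per the documentation: 4 to 64 threads of each kind.
MIN_THREADS, MAX_THREADS = 4, 64

def initial_thread_allocation(io_queue_depth: int) -> tuple:
    """Suggested experimental starting point: reader and writer counts
    each equal to the I/O subsystem's queue depth, clamped to the
    server's permitted range."""
    n = max(MIN_THREADS, min(MAX_THREADS, io_queue_depth))
    return (n, n)  # (reader threads, writer threads)
```

For example, a node whose storage reports a queue depth of 32 would start at 32 readers and 32 writers, while very shallow or very deep queues are clamped to the 4..64 range.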

      See the General-Settings information on Data Settings for details on how to establish appropriate numbers of reader and writer threads.

      Thread-status can be viewed by means of the cbstats command, specified with the raw workload option. See cbstats for information.

      Deletion
      Items can be deleted by a client application, either by immediate action or by setting a Time-To-Live (TTL) value. This value is established through the TTL metadata field of the item, and specifies a future point-in-time for the item’s expiration. When the point-in-time is reached, Couchbase Server deletes the item.
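      The TTL mechanism can be modelled in a few lines. The class and field names below are invented for illustration and do not reflect Couchbase Server's actual metadata layout.

```python
import time

class Item:
    """Hypothetical in-memory representation of an item; names are
    illustrative, not Couchbase Server's actual metadata layout."""
    def __init__(self, key, value, ttl_seconds=0):
        self.key = key
        self.value = value
        # A TTL of 0 means "never expires"; otherwise the TTL metadata
        # field records an absolute future point-in-time.
        self.expiry = time.time() + ttl_seconds if ttl_seconds else None

def is_expired(item, now=None):
    # Once the recorded point-in-time is reached, the server treats
    # the item as deleted.
    now = time.time() if now is None else now
    return item.expiry is not None and now >= item.expiry
```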

      Following deletion by either method, Couchbase Server maintains a tombstone as a record of the deletion (see below).

      An item’s TTL can be established either directly on the item itself, or via the bucket that contains the item. For information, see Expiration.

      Tombstones
      A tombstone is the record of a deleted item. Each tombstone includes the item’s key and metadata. Tombstones are maintained in order to provide eventual consistency both between nodes and between clusters.

      The Metadata Purge Interval establishes the frequency with which Couchbase Server purges itself of tombstones; that is, removes them fully and finally. The purge runs as part of auto-compaction (see Append-Only Writes and Auto-Compaction, below).
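      The purge behaviour can be sketched as follows. The function and field names are illustrative; note also that the real Metadata Purge Interval setting is expressed in days, whereas this sketch uses seconds for brevity.

```python
def purge_tombstones(tombstones, now, purge_interval_secs):
    """Remove, fully and finally, tombstones whose deletion time is
    older than the purge interval. Field names are hypothetical; the
    actual server setting is expressed in days, not seconds."""
    return [t for t in tombstones
            if now - t["deleted_at"] < purge_interval_secs]

tombs = [
    {"key": "old_item", "deleted_at": 0},        # deleted long ago
    {"key": "fresh_item", "deleted_at": 9_000},  # deleted recently
]
kept = purge_tombstones(tombs, now=10_000, purge_interval_secs=3_600)
assert [t["key"] for t in kept] == ["fresh_item"]
```

Tombstones young enough to survive the purge remain available to support eventual consistency between nodes and clusters.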

      For more information, see Post-Expiration Purging, in Expiration.

      Disk Paths

      At node-initialization, Couchbase Server allows up to four custom paths to be established for the saving of data to the filesystem: one each for the Data Service, the Index Service, the Analytics Service, and the Eventing Service. Note that the paths are node-specific: consequently, the data for any of these services may occupy a different filesystem-location on each node.

      For information on setting data-paths, see Initialize a Node.

      Append-Only Writes and Auto-Compaction

      Couchbase Server uses an append-only file-write format, which helps to ensure files' internal consistency and reduces the risk of corruption. Necessarily, this means that every change made to a file (whether an addition, a modification, or a deletion) results in a new entry being created at the end of the file: therefore, a file whose user-data is diminished by deletion actually grows in size.

      File-sizes should be periodically reduced by means of compaction. This operation can be performed manually on a specified bucket, or on an automated, scheduled basis, either for specified buckets or for all buckets.
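      The interplay of append-only writes and compaction can be demonstrated with a toy log. The dictionary-based format below is purely illustrative and is unrelated to Couchbase Server's actual on-disk layout.

```python
def append_mutation(log, op, key, value=None):
    # Append-only: every change, even a deletion, adds a new entry,
    # so the file only ever grows until compaction rewrites it.
    log.append({"op": op, "key": key, "value": value})

def compact(log):
    # Keep only the latest entry per key, and drop keys whose latest
    # entry is a deletion: the same live data in a smaller file.
    latest = {}
    for entry in log:
        latest[entry["key"]] = entry
    return [e for e in latest.values() if e["op"] != "delete"]

log = []
append_mutation(log, "set", "a", 1)
append_mutation(log, "set", "a", 2)   # an update appends, never overwrites
append_mutation(log, "delete", "a")   # a deletion also appends
append_mutation(log, "set", "b", 3)
assert len(log) == 4                  # the file grew despite the delete
assert compact(log) == [{"op": "set", "key": "b", "value": 3}]
```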

      For information on performing manual compaction with the CLI, see bucket-compact. For information on configuring auto-compaction with the CLI, see setting-compaction.

      For all information on using the REST API for compaction, see the Compaction API.

      For information on configuring auto-compaction with Couchbase Web Console, see Auto-Compaction.

      Disk I/O Priority

      Disk I/O — reading items from and writing them to disk — does not block client-interactions: disk I/O is thus considered a background task. The priority of disk I/O (along with that of other background tasks, such as item-paging and DCP stream-processing) is configurable per bucket. This means, for example, that one bucket’s disk I/O can be granted priority over another’s. For further information, see Create a Bucket.