Storage Settings

    A Secondary Index can be saved in one of two ways: memory-optimized or standard.

    Both standard and memory-optimized indexes implement multi-version concurrency control (MVCC) to provide consistent index scan results and high throughput.

    Memory-Optimized Index Storage

    Memory-optimized index-storage is supported by the Nitro storage engine, and is only available in Couchbase Server Enterprise Edition.

    A memory-optimized index uses a lock-free skiplist to maintain the index, and keeps all index data in memory. A memory-optimized index provides lower latency for index scans, and processes data mutations much faster.
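
    For illustration only, the following Python sketch shows the idea behind a skiplist: layered forward pointers that give logarithmic ordered search and insertion. It is a simplified, single-threaded structure, not Nitro's lock-free implementation.

        import random

        class Node:
            def __init__(self, key, level):
                self.key = key
                self.forward = [None] * (level + 1)  # one forward pointer per level

        class SkipList:
            MAX_LEVEL = 16

            def __init__(self):
                self.head = Node(None, self.MAX_LEVEL)
                self.level = 0

            def _random_level(self):
                # Each additional level is kept with probability 1/2.
                lvl = 0
                while random.random() < 0.5 and lvl < self.MAX_LEVEL:
                    lvl += 1
                return lvl

            def insert(self, key):
                update = [self.head] * (self.MAX_LEVEL + 1)
                node = self.head
                # Descend from the top level, recording at each level the
                # rightmost node whose key precedes the new key.
                for lvl in range(self.level, -1, -1):
                    while node.forward[lvl] and node.forward[lvl].key < key:
                        node = node.forward[lvl]
                    update[lvl] = node
                lvl = self._random_level()
                self.level = max(self.level, lvl)
                new = Node(key, lvl)
                for i in range(lvl + 1):
                    new.forward[i] = update[i].forward[i]
                    update[i].forward[i] = new

            def search(self, key):
                node = self.head
                for lvl in range(self.level, -1, -1):
                    while node.forward[lvl] and node.forward[lvl].key < key:
                        node = node.forward[lvl]
                node = node.forward[0]
                return node is not None and node.key == key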

    Memory-optimized indexes can be created for both Couchbase and Ephemeral buckets. See Buckets.

    Memory-optimized index-storage allows high-speed maintenance and scanning, since the index is kept fully in memory at all times. A snapshot of the index is maintained on disk, to permit rapid recovery if node-failures occur. To be consistently beneficial, memory-optimized index-storage requires that all nodes running the Index Service have a memory quota sufficient for the number and size of their resident indexes, and for the frequency with which the indexes will be updated and scanned.

    Memory-optimized index-storage may be less suitable for nodes where memory is constrained, since whenever the Index Service memory-quota is exceeded, indexes on the node can be neither updated nor scanned.

    If indexer RAM usage exceeds 75% of the Index Service memory-quota, an error-notification is provided. If indexer RAM usage then exceeds 95% of the Index Service memory-quota, the indexer enters Paused mode on that node; and although the indexes remain in the Active state, traffic is routed away from the node.

    Before index-operations can resume, memory must be freed. When indexer RAM usage drops below 80% of the Index Service memory-quota, the indexer returns to Active mode on that node.
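
    The thresholds described above form a simple hysteresis: the indexer pauses above 95% of the quota, and resumes only once usage falls back below 80%. The following Python sketch is illustrative only; the threshold values are those given above.

        def indexer_state(ram_used, quota, currently_paused):
            """Return (state, warn) for a node's indexer.

            Thresholds as documented above: warn above 75% of the quota,
            pause above 95%, resume only once usage drops below 80%.
            """
            usage = ram_used / quota
            warn = usage > 0.75
            if currently_paused:
                paused = usage >= 0.80   # stay Paused until usage drops below 80%
            else:
                paused = usage > 0.95    # enter Paused mode above 95%
            return ("Paused" if paused else "Active"), warn

        # Example: a node using 1.95 GiB of a 2 GiB index quota pauses.
        state, warn = indexer_state(1.95 * 2**30, 2 * 2**30, currently_paused=False)
        print(state, warn)  # Paused True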

    To resume indexing operations on a node where the Indexer has paused due to low memory, consider taking one or more of the following actions:

    • Increase the index-memory quota, to give indexes additional memory for request-processing (see the sketch after this list).

    • Remove less important indexes from the node, to free up memory.

    • Remove buckets with indexes: removing a bucket automatically removes all the dependent indexes.

    • Flush buckets that have indexes: flushing a bucket deletes all data in the bucket; and even if there are pending updates not yet processed, flushing causes all indexes to drop their own data.

      Note that attempting to delete bucket-data selectively during an out-of-memory condition does not decrease memory-usage, since without memory, the requested deletions cannot themselves be processed.
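
    As an example of the first action, the index-memory quota can be raised through the cluster REST API. The following Python sketch assumes the standard /pools/default endpoint and its indexMemoryQuota parameter; the host, credentials, and quota value are placeholders.

        import requests

        # Raise the Index Service memory quota (in MiB) cluster-wide.
        resp = requests.post(
            "http://127.0.0.1:8091/pools/default",
            auth=("Administrator", "password"),       # placeholder credentials
            data={"indexMemoryQuota": 2048},          # new quota in MiB
        )
        resp.raise_for_status()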

    In cases where recovery requires an Index-Service node to be restarted, the node’s resident memory-optimized indexes are rebuilt from the snapshots retained on disk. Following the node’s restart, these indexes remain in the Warmup state until all information has been read into memory; final updates are then made with the indexes in the Active state. Note that once a rebuilt index is thus available, queries with consistency=request_plus or consistency=at_plus fail if the specified timestamp exceeds the last timestamp processed by the given index. [1] However, queries with consistency=unbounded execute normally. For information on these settings, see Index Availability and Performance.
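
    For reference, scan consistency can be supplied per query through the Query Service REST API. The following Python sketch assumes the standard /query/service endpoint and its scan_consistency parameter; the host, credentials, and statement are placeholders.

        import requests

        # Run a query with request_plus consistency: the scan waits until the
        # index has processed all mutations received before the request.
        resp = requests.post(
            "http://127.0.0.1:8093/query/service",    # Query Service port
            auth=("Administrator", "password"),       # placeholder credentials
            data={
                "statement": "SELECT COUNT(*) FROM `travel-sample`",  # placeholder
                "scan_consistency": "request_plus",   # or "at_plus" / "not_bounded"
            },
        )
        print(resp.json()["status"])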

    Standard Index Storage

    Standard is the default storage-setting for Secondary Indexes: the indexes are saved on disk, in a disk-optimized format that uses both memory and disk for index-update and scanning. The performance of standard index storage depends on overall I/O performance.

    In versions of Couchbase Server prior to 7.0.2, standard index storage supports indexes for Couchbase Buckets, but not for Ephemeral Buckets. In version 7.0.2 and after, standard index storage supports indexes for both Couchbase buckets and Ephemeral buckets. This applies both to Enterprise and to Community Edition. See Buckets.

    The standard global secondary index uses a B-tree index and keeps the optimal working set of data in its buffer. This means the total size of the index can be much bigger than the amount of memory available on each index node.
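
    As a toy illustration of the working-set idea, the following Python sketch shows a bounded LRU page-buffer: hot pages stay in memory while the full index, which may be far larger, resides on disk. It is not the storage engine's actual buffer management.

        from collections import OrderedDict

        class BufferCache:
            """Toy LRU page buffer: hot index pages stay in memory while the
            full index, which may hold many more pages, lives on disk."""

            def __init__(self, capacity_pages, read_page_from_disk):
                self.capacity = capacity_pages
                self.read_page = read_page_from_disk  # fallback for cache misses
                self.pages = OrderedDict()            # page_id -> page contents

            def get(self, page_id):
                if page_id in self.pages:
                    self.pages.move_to_end(page_id)   # mark as recently used
                    return self.pages[page_id]
                page = self.read_page(page_id)        # miss: disk I/O occurs
                self.pages[page_id] = page
                if len(self.pages) > self.capacity:
                    self.pages.popitem(last=False)    # evict least-recently-used
                return page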

    In Couchbase Server Enterprise Edition, standard index-storage is supported by the Plasma storage engine. In this case, compaction is handled automatically.

    However, for installations of Couchbase Server Enterprise Edition running on Linux, note that hole punching is required to enable auto-compaction of Global Secondary Indexes using standard index-storage.

    In Couchbase Server Community Edition, standard index-storage is supported by the ForestDB storage engine. In this case, each index saved with the standard option has two write modes:

    Circular Write Mode

    Writes changes to the end of the index-file until the relative index fragmentation exceeds 65%. Block reuse is then triggered: new data is written into stale blocks where possible, rather than to the end of the file, so as to optimize I/O throughput. Full compaction runs in accordance with the value of the Circular write mode with day + time interval trigger setting: see Index Fragmentation. Note, however, that compaction does not significantly change the index-fragmentation data-size.
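
    The block-reuse decision can be pictured as a simple fragmentation check, as in the following illustrative Python sketch; the 65% threshold is the one given above, and the block accounting is hypothetical.

        REUSE_THRESHOLD = 0.65  # relative fragmentation that triggers block reuse

        def choose_write_target(stale_blocks, total_blocks):
            """Decide where the next write goes in circular write mode."""
            fragmentation = stale_blocks / total_blocks if total_blocks else 0.0
            if fragmentation > REUSE_THRESHOLD and stale_blocks > 0:
                return "reuse_stale_block"   # overwrite a stale block in place
            return "append_to_end"           # normal append to the index-file

        print(choose_write_target(70, 100))  # reuse_stale_block
        print(choose_write_target(10, 100))  # append_to_end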

    Append-only Write Mode

    Writes changes to the end of the index-file, invalidating existing pages within the index file, and requiring frequent, full compaction.

    By default, Couchbase Server Community Edition uses Circular Write Mode for standard index storage. Append-only Write Mode is provided for backwards compatibility with previous versions.

    Other storage-settings are described in detail in Configuring Auto-Compaction.

    Changing Index-Storage Settings

    Settings are established at cluster-initialization for all indexes on the cluster, across all buckets. Following cluster-initialization, to change from one setting to the other, all nodes running the Index Service must be removed. If the cluster is single-node, uninstall and reinstall Couchbase Server. If the cluster is multi-node, and only some of the nodes host the Index Service, proceed as follows:

    1. Identify the nodes running the Index Service.

    2. Remove each of the nodes running the Index Service. Note that as Index-Service nodes are removed, so are the indexes they contain; and in consequence, any ongoing queries fail.

    3. Perform a rebalance.

    4. Change the Index-Storage Settings for the cluster, as shown in the sketch after this list.

    5. Add new Index-Service nodes, and confirm the revised storage mode.
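
    Step 4 can be performed by means of the Couchbase Web Console or the index-settings REST API. The following Python sketch assumes the standard /settings/indexes endpoint and its storageMode parameter; the host and credentials are placeholders.

        import requests

        # Switch the cluster-wide index storage mode; the new mode applies to
        # Index Service nodes added after the setting is changed.
        resp = requests.post(
            "http://127.0.0.1:8091/settings/indexes",
            auth=("Administrator", "password"),           # placeholder credentials
            data={"storageMode": "memory_optimized"},     # or "plasma" / "forestdb"
        )
        resp.raise_for_status()

        # Confirm the revised storage mode (step 5).
        settings = requests.get(
            "http://127.0.0.1:8091/settings/indexes",
            auth=("Administrator", "password"),
        ).json()
        print(settings["storageMode"])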

    For information on adding and removing nodes, and on rebalancing a cluster, see Manage Nodes and Clusters.

    Plasma Memory Enhancements

    Couchbase Server provides the following enhancements for Plasma:

    • Per page Bloom Filters.

    • In-memory compression.

    These are described below.

    Per Page Bloom Filters

    A Bloom filter gives guidance as to whether a searched-for item resides on disk. By indicating that the item is not on disk, the filter prevents unnecessary on-disk searches.

    If Bloom filters are enabled (which is the default), then when a lookup occurs and the correct Plasma page is located, the Bloom filter indicates either that the item is not on the page, or that it may be on the page. If the filter indicates that:

    • The item is not on the page, then the item is not on disk, and no disk read need occur.

    • The item may be on the page, then the search for the item continues, and a disk read must therefore occur.

    The consequent reduction in disk reads improves the efficiency of mutation processing when the mutations are insert-heavy.
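
    To make the mechanism concrete, the following Python sketch shows a minimal Bloom filter: a membership test that returns either definitely absent (no disk read needed) or possibly present (a disk read must occur). It is illustrative only, and not Plasma's implementation.

        import hashlib

        class BloomFilter:
            def __init__(self, size_bits=1024, num_hashes=3):
                self.size = size_bits
                self.num_hashes = num_hashes
                self.bits = 0  # bitset kept in a Python integer

            def _positions(self, item):
                # Derive num_hashes bit positions from a single digest.
                digest = hashlib.sha256(item.encode()).digest()
                for i in range(self.num_hashes):
                    chunk = digest[4 * i:4 * i + 4]
                    yield int.from_bytes(chunk, "big") % self.size

            def add(self, item):
                for pos in self._positions(item):
                    self.bits |= 1 << pos

            def might_contain(self, item):
                # False means definitely absent: no disk read is needed.
                # True means possibly present: the disk read must occur.
                return all((self.bits >> pos) & 1 for pos in self._positions(item))

        page_filter = BloomFilter()
        page_filter.add("key::123")
        print(page_filter.might_contain("key::123"))  # True: may be on the page
        print(page_filter.might_contain("key::999"))  # False (with high probability)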

    Bloom filters can be enabled or disabled by means of the Couchbase Web Console UI, or the REST API. See the information provided on establishing General settings for the cluster.
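
    For example, the following Python sketch toggles the setting through the REST API; the enablePageBloomFilter parameter name is an assumption based on the index-settings endpoint, and the host and credentials are placeholders.

        import requests

        # Enable per-page Bloom filters cluster-wide. The parameter name
        # enablePageBloomFilter is an assumption; verify against your release.
        resp = requests.post(
            "http://127.0.0.1:8091/settings/indexes",
            auth=("Administrator", "password"),      # placeholder credentials
            data={"enablePageBloomFilter": "true"},
        )
        resp.raise_for_status()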

    In-Memory Compression

    Plasma memory-management routinely compresses a subset of items in order to free memory, and thereby, through the additional memory made available, keeps a greater number of items in memory overall. Keeping more items in memory reduces the need for disk reads, and so reduces the corresponding latencies.

    The selection of items to be compressed occurs periodically. Only items that have already been flushed to disk are compressed: after compression, such items are the principal candidates for subsequent ejection.

    Disk-flushing occurs every ten minutes: items not yet flushed to disk are not compressed, nor is any recently used item. In consequence, the items most likely to be accessed remain uncompressed in memory, and are therefore accessible with the least latency; items less likely to be accessed are retained in memory in compressed form until their ejection, after which they must be accessed through disk reads.

    This new model of memory usage leads to higher residence ratios and greater access-efficiency, at the cost of some additional CPU utilization, due to the more frequent running of compression and decompression routines.
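
    The model can be sketched as follows in Python; zlib stands in for the engine's actual compression, and the selection rule, flushed-and-not-recently-used, follows the description above. This is illustrative only.

        import time
        import zlib

        class Item:
            def __init__(self, value):
                self.value = value          # uncompressed bytes, or None
                self.compressed = None      # compressed bytes, or None
                self.flushed = False        # already persisted by the flusher?
                self.last_access = time.monotonic()

        def compression_sweep(items, recent_window_secs=600):
            """Periodic pass: compress items already flushed to disk and not
            recently used; these become the main candidates for ejection."""
            now = time.monotonic()
            for item in items:
                recently_used = now - item.last_access < recent_window_secs
                if item.flushed and not recently_used and item.compressed is None:
                    item.compressed = zlib.compress(item.value)
                    item.value = None       # free memory: keep only compressed form

        def read(item):
            """Access decompresses on demand and refreshes recency."""
            item.last_access = time.monotonic()
            if item.value is None and item.compressed is not None:
                item.value = zlib.decompress(item.compressed)
            return item.value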


    1. In fact, queries in this case wait for a consistent snapshot to become available, and time out rather than failing immediately.