Release Notes for Couchbase Server 4.1

Couchbase Server release 4.1 extends Couchbase's lead as the most performant and scalable NoSQL database for mission-critical enterprise applications.

Release Notes for 4.1.2

Released in August 2016, Couchbase Server 4.1.2 is the second maintenance release in the 4.1.x series. This maintenance release has several bug fixes related to DCP, indexing, querying, XDCR, and security.

Fixed issues in 4.1.2

Here are the notable fixes in the 4.1.2 release:



Key-value engine tasks run at either HIGH or LOW priority, but are not guaranteed to run at the requested priority.


Indexer process uses large amounts of CPU and RAM when no indexes are configured.


When upgrading from 2.5.x to 4.x.x, TAP followed by DCP can result in a failure.


When running multiple N1QL clients with request cancellation or low scan timeout values, a race condition in connection shutdown causes the indexer to crash.


The purge sequence number was reset when a compaction call purged no items, causing tombstone items not to be purged.


The purge sequence number was incorrectly passed to the compactor for development design documents, preventing Couchbase Server from reclaiming disk space.


When using cbdocloader and loading files from disk, the documents are loaded except for the design documents, which are silently skipped.


Security upgrade: curl was upgraded to version 7.49.1.


The view engine fails with the error "DCP start sequence number is greater than end sequence number" and fails to roll back. This behavior can cause view-engine indexing issues.


The default number of non-IO threads was increased to stay on top of all ready tasks.


XDCR would temporarily get stuck in a loop when a node that had previously been removed from a cluster rejoined it, in the situation where the cluster contained multiple Couchbase Server versions and no mutations had previously been replicated.


When XDCR getMeta receives an error that is not a network timeout, it repairs the network connection and continues reading from the client in a loop until a timeout occurs after 2 minutes. During this window, XDCR replication appears stuck.



TLS configuration on port 11207 is lost on server restart.

TLS configuration on port 11208 is lost on server restart.


After many document UPSERT operations, CPU utilization remains high indefinitely on nodes with GSI indexes, even when no further UPSERT operations occur.


More than one XDCR replication instance might start after replication is resumed, resulting in incorrect functional behavior and performance impact.


SELECT COUNT(*) does not work correctly when used in a prepared statement.


An offline upgrade from Couchbase Server version 2.5 to any version of a Couchbase Server 3.x release could result in data loss.


The Couchbase Web Console and REST API over HTTPS do not work for certain web clients, such as Chrome 50 or higher, that request the elliptic curve X25519 for TLS.


Under a low mutation rate, long pauses can occur when updating View indexes.


Fixed an incorrect error message in Go-XDCR when upgrading with swap rebalance from Couchbase Server version 3.x.


When switching from full eviction to value eviction, if the total metadata is bigger than the memory quota of the bucket, keys will not be loaded even when warmup is complete and the bucket is online.


A Memcached bucket causes the query select * from system:keyspaces to fail.


In a multi-node cluster, two cbq shell sessions connected to distinct nodes have a window where index metadata is not fully synchronized.


GOXDCR needs to handle NOT_MY_VBUCKET errors more gracefully, without restarting.


When a high-priority task is busy, tasks of lower priorities never have a chance to run and have been observed waiting for many hours. This behavior can, for example, result in a node failover caused by the lack of response to statistics calls.


If a DCP consumer receives more data than it can consume, it runs the Processor task for as long as this situation continues, holding non-IO threads and preventing other non-IO tasks from running.


The cluster manager continuously crashes on an undersized system.

See the following link for a full list of issues.

Known issues in 4.1.2

The following are the known issues:



Summary: When creating a Global Secondary Index (GSI), the interactive query shell cbq can time out if the result is not returned within 2 minutes. Although the index is successfully created, the error message is unclear.

Workaround: Create the index with defer_build:true, and then build the index separately.
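As a sketch, the workaround looks like this in N1QL (the index and bucket names here are illustrative):

```sql
-- Create the index with a deferred build so cbq returns immediately
CREATE INDEX idx_type ON `travel-sample`(type) WITH {"defer_build": true};

-- Later, trigger the build as a separate step; build progress can be
-- monitored by querying system:indexes
BUILD INDEX ON `travel-sample`(idx_type);
```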


Summary: When View queries with the reduce and group options set to true are parameterized by a list of keys that is not in ascending order, the results might not be properly reduced.

Workaround: Ensure that the list of keys passed to the query is in ascending order.
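A minimal sketch of the workaround, assuming the key list is assembled client-side before being passed as the View query's keys parameter:

```python
# Sort the keys client-side before issuing a View query with
# reduce=true and group=true; an unsorted key list can yield
# improperly reduced results.
keys = ["stout", "ale", "porter"]
params = {
    "reduce": "true",
    "group": "true",
    "keys": sorted(keys),  # ascending order avoids the issue
}
```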

Release Notes for 4.1.1

Couchbase Server 4.1.1, released in April 2016, is the first maintenance release in the 4.1.x series for Couchbase Server.

Fixed issues in 4.1.1

The following issues are now fixed:



goxdcr terminated during the longevity test with an error: unexpected empty bucketName.


MutationQueue was waiting for Node Alloc (hung indexer).


During the longevity test, when failover and rebalance were performed the XDCR stream stopped with errors in logs.


Hanging process was observed in n1ql_lat_Q2_20M_18Kops_450Qops_gsi_false.test.


The Indexer would pick up a wrong file if restarted during compaction.


Handle write failures in the mutation log more gracefully.


The cbq-engine consumed all connections on port 11210 on Windows.


mcd aborted during rebalance.


XDCR needs to add a timeout to call out to the remote cluster.


A corrupted max_cas value in the vbucket breaks the HLC semantics.


An NMVB (NOT_MY_VBUCKET) response should not contain a cluster_config body if the client has already received the same cluster_config version.


High intra-cluster XDCR bandwidth usage was reported. [4.1]


When applying a new configuration, the janitor agent sets up new replication streams against vbuckets while they are still dead (before they have been activated).


The DCP Producer could miss streaming items from certain streams.


cbcollect_info takes a long time and consumes disk space due to couch_dbinfo.


Rebalance exited with this reason: {badmatch, {error, {failed_nodes.


Long pauses have been observed during the N1QL performance regression tests.


A crash was observed during the secondary (not N1QL) stale=false throughput tests.


Couchbase Server occasionally fails to restart.


DELETE with the WHERE clause is not consistent when used right after INSERT.


Prepared statements failed for SELECT COUNT(*) AS test1_count FROM default.


The calendar gets hours of day fetching -1.


GoXDCR: DCP was stuck for more than 13 minutes.

Known issues in 4.1.1

The following are the known issues:



Indexer data loss at restart was observed.


Handle write failures in mutation log more gracefully.


cbbackupwrapper needs a path to cbbackup.exe with no spaces.


Task scheduling: when a high priority task is busy, tasks of lower priorities never get a chance to run and wait for many hours.


If a DCP consumer receives more data than it can consume, it runs the Processor task for as long as this continues, holding non-IO threads and preventing other non-IO tasks from running.


Memory based accounting for the Indexer Mutation Queue.


A user should not disable the firewall during Windows installation.


GSI indexes might survive the bucket deletion in some cases.


Couchbase Server version 4.0.0 was crashing regularly on the Ubuntu AWS Instance.


[Windows] Results from Q1-Q3 tests were below KPIs (compared to the KPIs for Linux).

Release Notes for 4.1

Couchbase Server 4.1 was released in December 2015.

Known issues in 4.1

The following are the known issues in the 4.1 release:




Summary: When using queries backed by GSI to perform singleton lookups and range scans, occasional processing of index compaction can incur long pauses affecting concurrent query throughput.


Summary: A prepared encoded plan for N1QL statements with system catalog queries in the WHERE clause may not be recognized.

Workaround: To avoid this issue, do not execute such queries as prepared statements (known as .adhoc(false) or similar in SDK APIs). Instead, use regular ad hoc queries.


Summary: Kernel futex wait call can cause ForestDB to hang during initial index build.

Workaround: If you are running RHEL 6.x or CentOS 6.x, we highly recommend upgrading to the latest kernel (2.6.32-504.16.2 or higher). With CentOS 7.1, you should upgrade to at least Linux kernel 3.18.
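A quick way to check whether the running kernel meets the recommended minimum; this is a sketch, and kernel_at_least is a hypothetical helper that relies on sort -V for natural version ordering:

```shell
# Return success if the first kernel version is >= the second.
kernel_at_least() {
    current="$1"; required="$2"
    [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]
}

# Compare the running kernel against the RHEL/CentOS 6.x minimum.
if kernel_at_least "$(uname -r)" "2.6.32-504.16.2"; then
    echo "kernel meets the recommended minimum"
else
    echo "kernel below 2.6.32-504.16.2; upgrade recommended"
fi
```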


Summary: Latency of queries using the request_plus scan consistency option may be abnormally high during index compaction, leading to application timeouts. Response times may occasionally reach tens of seconds, or the query may return a timeout error. The default timeout interval is 75 seconds.
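One mitigation is to raise the per-request timeout for request_plus queries so they can ride out compaction pauses. A sketch against the query service REST endpoint; the host, bucket, and 120s value are illustrative:

```shell
curl http://localhost:8093/query/service \
  --data-urlencode 'statement=SELECT COUNT(*) FROM `travel-sample`' \
  --data-urlencode 'scan_consistency=request_plus' \
  --data-urlencode 'timeout=120s'
```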



Summary: When creating a global secondary index (GSI), the interactive query shell cbq can time out if the result is not returned within 2 minutes. Although the index is successfully created, the error message is unclear.

Workaround: Create the index with defer_build:true, and then build the index separately.


Summary: View queries with reduce and group set to true, and parameterized by a list of keys that are not in ascending order, can produce results that are not properly reduced.

Workaround: Ensure that the list of keys passed to the query is in ascending order.


Summary: When the indexer settings are changed, connections from the query shell cbq can sometimes become stale, causing EOF errors.

Workaround: Restart the query engine before executing the query again.


Summary: Replication over SSL encryption from a source 4.0 cluster to a destination 2.5.x cluster may result in slow performance (rate of data transfer).

Workaround: We recommend upgrading the destination cluster to version 3.x.

Fixed issues in 4.1

Here are some of the notable fixes in the 4.1 release:




Memcached process crashed if it ran out of file descriptors during log rotation.


If delta-node recovery was started after updating the bucket configuration, but before the bucket was loaded into memcached, a rebalance operation sometimes ejected the node from the cluster while the cluster vBucket map still contained the node.


Couchbase Server failed to start on OS X 10.11 (El Capitan).


If a getMeta was issued at the destination cluster during XDCR followed by a GET request by the client, the background fetch operation for the item did not complete and caused a large number of disk reads and client side timeouts.


When deletion of a large bucket happened in the background, rebalance was disabled, and the status of the ongoing background task was shown in the UI.


Querying a view with a reduce function based on a subset of partitions resulted in massive memory usage.


If a vBucket state changed from active to replica while performing compaction, the race condition between the compaction thread and memcached thread sometimes caused an assertion and triggered a crash.


Running the Elasticsearch connector sometimes resulted in high CPU usage.


A DCP consumer would consistently take 6 seconds to acknowledge a 20 MB mutation.


Memcached would sometimes hang during shutdown.


On a Windows system, the XDCR remote cluster reference was not updated after a node was removed from the cluster.


XDCR based on DCP consumed a large amount of RAM with large mutations.


When using XDCR with SSL, replication to an older cluster failed after an online upgrade to 4.0 and an error message that the pipeline failed to start was received.


The mapping phase of the view MapReduce operation consumed a large amount of memory if many key-value pairs were emitted per document.

For the complete list of issues fixed in the 4.1 release, see the following JIRA query.