Release Notes for Couchbase Server 6.6
Couchbase Server 6.6 adds features and enhancements to improve developer productivity, foster Cloud deployments, and enable operational analytics on globally distributed data.
Take a look at What’s New? for a list of new features and improvements in this release.
Release 6.6.1 (December 2020)
Couchbase Server 6.6.1, released in December 2020, is the first maintenance release in the 6.6.x series for Couchbase Server.
In addition to bug fixes in multiple components, this release also includes a few enhancements in Eventing and Search services.
Quick Links: New Features | Deprecated Features and Platforms | Fixed Issues
New Features
-
Support for additional advanced bucket operations (which support CAS and TTL operations) and distributed atomic counters from Eventing functions. For details, see Eventing Language Constructs.
-
Full text search queries now support pagination and scoring. For details, see Understanding Queries.
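As an illustration of the new advanced bucket accessors, the sketch below shows a CAS-guarded update and a distributed atomic counter inside an Eventing OnUpdate handler. This is a hedged sketch based on the Eventing Language Constructs documentation: the bucket binding name (src_bkt), document IDs, and field names are hypothetical, and exact signatures should be checked against that page.

```javascript
function OnUpdate(doc, meta) {
    // Hypothetical bucket binding "src_bkt"; the advanced accessors take an
    // explicit meta object and return a result with success/meta/doc fields.
    var res = couchbase.get(src_bkt, {id: "stats::" + meta.id});
    if (res.success) {
        res.doc.updates = (res.doc.updates || 0) + 1;
        // Passing the CAS from the get makes a concurrent write fail the
        // replace instead of silently losing the update.
        var rep = couchbase.replace(src_bkt,
                                    {id: res.meta.id, cas: res.meta.cas},
                                    res.doc);
        if (!rep.success) {
            log("CAS mismatch, skipping update", rep.error);
        }
    }
    // Distributed atomic counter: increments the counter document
    // without a read-modify-write cycle in the handler.
    couchbase.increment(src_bkt, {id: "counter::total_mutations"});
}
```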
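For example, a full text search request body can page through results with from and size, and control scoring with the score option described under Understanding Queries. The field value and page offsets below are hypothetical:

```json
{
  "query": { "match": "beach view", "field": "description" },
  "from": 20,
  "size": 10,
  "score": "none"
}
```

Here from/size select the third page of ten hits, and "score": "none" skips relevance scoring for faster retrieval when ranking is not needed.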
Known Issues
This section highlights the notable known issues in this release.
Eventing Service
Issue | Description |
---|---|
Summary: The Eventing Metadata bucket is not cleared when handlers are undeployed from the paused state. If the handler uses timers, this can also result in timers not being removed as expected; those timers can then fire and execute on a subsequent deployment. Workaround: Do not undeploy handlers from the paused state in version 6.6.1. |
|
Summary: Handlers can hang in the deploying state due to a race condition during rebalance-in of an Eventing node if more than one function has the same source bucket in version 6.6.1. Workaround: Ensure that you pause handlers before any rebalance. |
Fixed Issues
This section highlights the notable issues fixed in this release.
Analytics Service
Issue | Description |
---|---|
Summary: After upgrading to 6.6.1, a rebalance might be required to repair composite secondary indexes that contain NULL or MISSING. |
|
Summary: Fixed an issue where the Analytics service threw an error when creating a link from an IPv4 configured cluster to an IPv6 configured cluster. |
|
Summary: If an identifier for a metadata entity (e.g. a dataverse or a dataset) contained characters that require URL encoding (percent-encoding) when used in a URI, requests that used this identifier failed with a URISyntaxException. This has been fixed. |
Cluster Manager
Issue | Description |
---|---|
Summary: For audit events from memcached, "peername" and "sockname" have been renamed to "local" and "remote" with the syntax: {"ip":"hostname","port":1234}. |
Cross Data Center Replication (XDCR)
Issue | Description |
---|---|
Summary: Fixed an issue where the user intent heuristic was incorrect for full-encryption when XDCR reference did not provide a port number. |
|
Summary: Fixed an incorrect XDCR stream request rollback caused by a "consumer ahead of producer" error. |
Data Service
Issue | Description |
---|---|
Summary: Fixed an infinite loop due to HdrHistogram being reset. |
|
Summary: The HTCleaner in Ephemeral buckets is responsible for purging tombstones and completed (committed or aborted) SyncWrites. A bug in this component removed in-flight SyncWrites from internal data structures, which could crash the node when it tried to complete the SyncWrite. This has been fixed. |
Eventing Service
Issue | Description |
---|---|
Summary: The |
|
Summary: The Web Console UI did not display the first line of Eventing logs; this has been fixed. (Note that the log files in the file system contained the correct information without any truncation.) |
|
Summary: Fixed an issue where the Eventing debugger crashed when using toLocaleString in JS. |
|
Summary: The debugger link has been updated, from |
|
Summary: Fixed an exception thrown when data sent in the request body to deploy a handler was null. |
|
Summary: Fixed the function handler so that a paused handler can only be resumed using |
|
Summary: Fixed an issue where, upon upgrading from version 6.0.x to 6.6, a handler that uses N1qlQuery would stop working on upgraded nodes and throw an error ( |
|
Summary: Improved automation of failover handling in Eventing service in several scenarios. |
|
Summary: Fixed an issue where delete mutation on a |
|
Summary: The Eventing service was not retrying retryable bucket operation failures, such as ETMPFAIL. This has been fixed; such operations are now retried until the script timeout. |
|
Summary: Fixed an issue so that a function action does not deploy and execute on mutations after a REST API validation error. |
Index Service and Views
Issue | Description |
---|---|
Summary: Starting with version 6.5.0, VbSeqnosReader has been updated to process two types of requests: VbSeqnosRequest and VbMinSeqnosRequest. When processing a VbSeqnosRequest, any pending VbMinSeqnosRequests are queued back into the requestCh of VbSeqnosReader. However, if VbSeqnosReader had closed by this time, the enqueue would fail and the caller would wait for a response indefinitely. This has been fixed so that outstanding requests receive a response when VbSeqnosReader exits. |
|
Summary: Fixed an issue where rebalance failed due to timestamp mismatch between snapshots. |
|
Summary: Fixed an issue where multiple partition tombstones for an index during rebalance could lead to partition cleanup on restart. |
|
Summary: Fixed an issue in the waitForIndexBuild routine which caused it not to terminate at the end of the batch, remaining active until the end of the rebalance. As a result, the rebalance created a very large number of TIME_WAIT connections and subsequently failed. |
|
Summary: Added per index |
|
Summary: The statistic |
|
Summary: Fixed an issue where the GSI index resident ratio showed a value greater than 100% due to num_rec_swapin being larger than num_rec_swapout. This rare, transient condition could occur because these stats are updated asynchronously and only become consistent eventually. |
|
Summary: Improved array indexing performance by optimizing the ComputeArrayEntriesWithCount method. |
|
Summary: When a bloomDelta was added after recovery because a page was found without a bloom filter, the NumRecordAllocs stat was over-counted. However, NumRecordAllocs is only supposed to track insert/delete deltas. This has been fixed. |
|
Summary: Fixed an issue with memory optimized indexes where indefinite disk snapshotting led to increasing disk usage. |
|
Summary: Fixed a memory growth issue observed when processing many metadata operations. |
|
Summary: Log replay skips data blocks if a more recent header was already recovered by checkpoint recovery. When stale data blocks were skipped, the page-op stats attributable to them were not cleared and kept accumulating, causing incorrect PageBytes and ItemCnt stats after recovery. This has been fixed by discarding page-op stats during log replay. |
|
Summary: Index creation failed when the bucket name contained a |
|
Summary: The projector went into a stream termination loop when trying to stream a nearly 20 MB document, due to redundant document size checks in the projector. This has been fixed. |
Install and Deploy
Issue | Description |
---|---|
Summary: On Windows, when upgrading to 6.6.1 or later from any earlier version, configuration changes such as custom data directories may be lost. To avoid this, before running the MSI installer, copy the file |
Query Service
Issue | Description |
---|---|
Summary: Fixed an issue where an intersect scan under the inner side of a nested-loop join sometimes returned incorrect results. |
Search Service
Issue | Description |
---|---|
Summary: The percentage completion stat for Search service did not reflect updates in the UI. This has been fixed. |
Tools, Web Console (UI), and REST API
Issue | Description |
---|---|
Summary: There is a rare case where |
|
Summary: Fixed an issue where |
Release 6.6.0 (August 2020)
Couchbase Server 6.6 was released in August 2020.
Quick Links: New Supported Platforms | Deprecated Features and Platforms | Known Issues | Fixed Issues
Major Changes in Behavior from Previous Releases
This section notes major changes in behavior from previous releases.
-
Search queries from N1QL
Previously, for SEARCH queries from N1QL, you could use any analyzer for query types that do not themselves use an analyzer (Term, Phrase, Multiphrase, Fuzzy, Prefix, Regexp, Wildcard queries). However, this caused inconsistent results between covered and non-covered queries. To ensure consistent results between covering and non-covering index queries, the keyword analyzer is now mandated for queries that don’t use an analyzer.
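As an illustration, a term-style SEARCH query should now target a field indexed with the keyword analyzer, and for non-covering queries it helps to name the index explicitly in the options. The bucket, field, and index names below are hypothetical:

```sql
SELECT META(t).id
FROM `travel-sample` AS t
WHERE SEARCH(t, {"term": "intercontinental", "field": "name"},
             {"index": "hotel_name_keyword_idx"});
```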
New Supported Platforms
This release adds support for the following platforms:
-
Red Hat Enterprise Linux (RHEL) 8.2
See Supported Platforms for the complete list of supported platforms.
Deprecated Features and Platforms
Known Issues
This section highlights some of the known issues in this release.
Analytics Service
Issue | Description |
---|---|
Summary: When creating a secondary index with composite fields, where one or more of these fields have a numeric type (int, double), the Analytics service may run into repeated ingestion failures when a document is updated such that the indexed numeric field value changes between a real value and NULL or MISSING. Workaround: To avoid this issue, make sure the indexed numeric fields always have values (i.e. are not NULL or MISSING), or drop any composite-field indexes that include numeric fields. |
|
Summary: The Analytics service throws an error when creating a link from an IPv4 configured cluster to an IPv6 configured cluster. Workaround: Set the jvmArgs on the Analytics Service to "-Djava.net.preferIPv4Stack=false" and restart the Analytics cluster.
For example, |
|
Summary: If an identifier for a metadata entity (e.g. a dataverse or a dataset) contains characters that require URL encoding (percent-encoding) when used in a URI, requests that use this identifier can fail with a URISyntaxException. Workaround: Construct identifiers using characters that do not require URL encoding. |
|
Summary: When using alternate addresses for remote links, at least one node in the remote cluster must have the management[SSL] port exposed, and ALL data (KV) nodes must have the kv[SSL] port exposed. Failure to do so results in a 400 (Bad Request) error when creating or altering a link. |
|
Summary: Currently, the roles However, these roles should not be able to read any data, and this behavior is planned to be fixed in an upcoming release. Note that once the fix is implemented, the |
|
Summary: In cases where the input to an IN subclause with the EVERY quantifier is MISSING or NULL, the Analytics and Query engines differ in behavior. The Analytics service treats MISSING or NULL input values (in this case) as equivalent to an empty array, which results in the whole Workaround: Use the IS KNOWN predicate to test whether the IN value is not NULL/MISSING.
|
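A hedged sketch of the workaround in SQL++ (the dataset and field names are hypothetical): guard the EVERY quantifier with IS KNOWN so that NULL or MISSING inputs are filtered out rather than treated as an empty array:

```sql
SELECT u.name
FROM users u
WHERE u.scores IS KNOWN
  AND (EVERY s IN u.scores SATISFIES s >= 50 END);
```

Without the IS KNOWN guard, a document whose scores field is NULL or MISSING would satisfy the EVERY clause vacuously in Analytics.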
Search Service
Issue | Description |
---|---|
Summary: Using negated (NEG) match and match_phrase queries WITHOUT the “analyzer” setting can lead to no results being returned. This issue can happen for non-covered queries only when either of the following is NOT specified:
This is because, in such a non-covering query, the context of which index to use is missing in the verification phase, and the default "standard" analyzer is used instead of the "keyword" analyzer that was used in the index. Workaround: Explicitly specify the analyzer to use with non-covering queries, or the index name within the options. |
Query Service
Issue | Description |
---|---|
Summary: While adding support for explicit connections to IPv4, IPv6, or both for external communications for both HTTP and TLSUnique listeners, a considerable degradation in throughput was observed on the Windows platform when using IPv6. This is caused by an underlying issue in Golang. |
Fixed Issues
This section highlights some of the issues fixed in this release.
Cluster Manager
Issue | Description |
---|---|
Summary: To help troubleshoot issues, the cluster manager now reports information on |
Cross Data Center Replication (XDCR)
Issue | Description |
---|---|
Summary: XDCR does not apply the correct alternate address heuristic |
Eventing Service
Issue | Description |
---|---|
Summary: Fixed an issue where recursion detection caused an Out-of-Memory exception when |
|
Summary: Following a KillAndRespawn restart, the "from-now" directive was ignored and processing started from sequence number 0 instead of the current sequence number. This has been fixed. |
|
Summary: The Eventing service crashed due to a race condition between undeploy and delete. This has been fixed. |
|
Summary: To help distinguish slow performing queries from Eventing JavaScript code, Eventing service now adds a default clientContextId to every N1QL query fired from an Eventing function. |
|
Summary: To avoid inter-function recursion through N1QL statements, Eventing service now performs recursion checks for static N1QL statements in Eventing functions. |
|
Summary: Fixed an issue where the timer scan time kept increasing on an idle cluster with a timer handler. |
|
Summary: Fixed an issue where the Eventing consumer RSS did not honor the Eventing memory quota for bucket operations with small documents. |
|
Summary: Fixed an issue where cbevent failed to run with localhost. |
|
Summary: The Eventing log files permissions were excessively restrictive (0600), which prevented them from being processed by third-party tools. The log files permissions have been updated (0640). |
|
Summary: The Eventing status is now displayed alongside the handlers in the web console (UI). |
|
Summary: Added the ability to cancel timers. |
|
Summary: Fixed an issue where a timer created during a timer execution was not triggered. |
|
Summary: Fixed an issue where timers were not cancelled if multiple timers were created with the same reference. |
|
Summary: When slow Eventing functions were deployed first with the feed boundary set to "everything", subsequent functions on the same source bucket were starved due to DCP backing up. This has been fixed. |
|
Summary: Eventing timers can now be cancelled using the cancelTimer() function, or by creating a new timer with the same reference as an existing timer. In addition, a function that is invoked by a timer callback can create fresh timers. |
Index Service and Views
Issue | Description |
---|---|
Summary: To help troubleshoot memory usage issues with the storage engine, lastGCSn and currSn will now be exposed as MOI storage stats. |
|
Summary: Fixed a runtime error caused by an invalid memory address or nil pointer dereference, by adding compression correctness checks. |
|
Summary: The index service now sets a more contextual user-agent in HTTP requests to the cluster manager (ns_server). |
|
Summary: Fixed the index service to regenerate protobuf (.pb.go) files when .proto files are updated. |
|
Summary: During index definition operations, the cluster info cache is updated multiple times. In a cluster with a large number of buckets, refreshing the cluster info cache took a long time and slowed down these operations. This has been fixed. |
|
Summary: Fixed a rare race condition that caused the index service to be stuck in the warmup state, by increasing the default size of the feed’s backch. |
|
Summary: During bulk inserts of heavy workloads, index sync was observed to take a long time. This has been addressed by optimizing indexing of incremental workloads for insert-heavy scenarios. |
Query Service
Issue | Description |
---|---|
Summary: The Index Advisor now supports virtual keyspace for DELETE, MERGE, and UPDATE statements. |
|
Summary: The Query service now supports explicit connections to IPv4, IPv6, or both for external communications for both HTTP and TLSUnique listeners. The Query service will fail to start if it cannot listen on all required ports. Note that when using IPv6 on the Windows platform, this can cause a considerable degradation in throughput due to an underlying issue in Golang. |
Search Service
Issue | Description |
---|---|
Summary: Fixed an issue where the document mapping’s analyzer was not inherited by child fields. |
|
Summary: To ensure consistent results between covering and non-covering flex index queries, the keyword analyzer is now mandated for queries that don’t use an analyzer. For non-covering flex index queries, we recommend that you specify the index name, or use a match query and explicitly specify the analyzer to be used. |