Release notes for version 1.2 of the Spark connector.
Version 1.2.1 is the second stable release of the 1.2.x series. It strengthens support for Spark 1.6 and brings the following enhancements and bug fixes:
The (META_)ID can now be any type that can be converted into a string, instead of having to be an actual string. This allows for more flexibility when working with custom ID types.
SPARKC-48: Filters that are LIKE-based now escape their arguments properly.
SPARKC-53: Deeply nested attributes on filters are properly parsed and escaped (both filter fixes are illustrated in the sketch after this list).
The FromNow option now works, allowing you to start streaming from the current point in time without getting all the previous mutations first.
Version 1.2.0 is the first stable release of the 1.2.x series. It brings support for Spark 1.6 and the following enhancements and bug fixes:
SPARKC-41: Manual override of the schemaFilter has been fixed. It is now possible to define a filter like this:
sqlContext.read
  .option("bucket", "travel-sample")
  .option("schemaFilter", "type = 'airline'")
  .couchbase()
SPARKC-42: The SparkSQL count() operator now works; a bug has been fixed so that queries where SparkSQL does not pass in any required columns are still handled correctly (see the sketch after this list).
SPARKC-30: It is now possible to provide a manual schema as well as a custom schema filter (also illustrated below).
SPARKC-43: The Java API can now use Spark SQL directly. See the Java API documentation for more information.
SPARKC-47: SparkSQL filter expression support has been extended to cover all Spark Filters, including nested ones.
The internal streaming implementation has been updated to the latest release but is still experimental. Note that the FromBeginning option is implemented, but the FromNow option still has some known issues which will be fixed in later releases. Also, cluster rebalance support is not yet available and will follow.