Release notes for the 1.2 version of the Spark connector.
Version 1.2.0 is the first stable release of the 1.2.x series. It brings support for Spark 1.6 and the following enhancements and bugfixes:
SPARKC-41: Manual override of the schemaFilter has been fixed. It is now possible to define a filter like this:
sqlContext.read
  .option("bucket", "travel-sample")
  .option("schemaFilter", "type = 'airline'")
  .couchbase()
SPARKC-42: The SparkSQL count() operator now works. A bug has been fixed so that if SparkSQL does not pass in any required columns, the query is transformed into a count expression.
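As a sketch of the fixed behavior, reusing the travel-sample setup from the example above (the import is the connector's standard Spark SQL integration):

```scala
import com.couchbase.spark.sql._

// Read airline documents as a DataFrame, as in the schemaFilter example.
val airlines = sqlContext.read
  .option("bucket", "travel-sample")
  .option("schemaFilter", "type = 'airline'")
  .couchbase()

// Counting previously failed because SparkSQL supplies no required
// columns for a pure count; with SPARKC-42 this now works.
val total = airlines.count()
```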
SPARKC-30: It is now possible to provide a manual schema as well as a custom schema filter.
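A minimal sketch of combining both options; the schema fields here are illustrative and not taken from the release notes:

```scala
import com.couchbase.spark.sql._
import org.apache.spark.sql.types._

// A manually defined schema (illustrative field names), supplied via the
// standard DataFrameReader.schema(...) call.
val schema = StructType(Seq(
  StructField("name", StringType),
  StructField("iata", StringType)
))

// With SPARKC-30, the manual schema and a custom schemaFilter can be
// used together instead of being mutually exclusive.
val airlines = sqlContext.read
  .schema(schema)
  .option("bucket", "travel-sample")
  .option("schemaFilter", "type = 'airline'")
  .couchbase()
```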
SPARKC-43: The Java API can now use Spark SQL directly. See the Java API documentation for more information.
SPARKC-47: SparkSQL Filter expression support has been extended to support all Spark Filters, including nested ones.
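A sketch of a nested filter that benefits from SPARKC-47; the column names are illustrative, assuming the travel-sample airline documents used above:

```scala
import com.couchbase.spark.sql._

val airlines = sqlContext.read
  .option("bucket", "travel-sample")
  .option("schemaFilter", "type = 'airline'")
  .couchbase()

// A nested predicate combining And, Or, and IsNotNull; with SPARKC-47
// such compound filters are supported rather than only flat ones.
val result = airlines.filter(
  (airlines("country") === "United States" && airlines("name").isNotNull) ||
  airlines("iata") === "DL"
)
```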
The internal implementation has been updated to the latest release but is still experimental. Note that FromBeginning is implemented, but FromNow still has some known issues which will be fixed in later releases. Also, cluster rebalance support is not yet available and will follow.