
Migrating from Go SDK 1.x to SDK 2.0

The SDK 3.0 API used in Go SDK 2.0 breaks the existing SDK 2.0 APIs (used in Go SDK 1.x) in order to provide a number of improvements. Collections and Scopes are introduced. The Document struct has been completely removed from the API, and the returned values are now typically Result type objects. Retry behaviour is more proactive, and lazy bootstrapping moves all error handling to a single place. Individual behaviour changes across services are explained here.

Go SDK 2.0 implements the SDK 3.0 API found in the 3.0 versions of the C, .NET, Java, PHP, and Python SDKs.

Fundamentals

The Couchbase SDK team takes semantic versioning seriously, which means that the API should not be broken in incompatible ways within a major release. This has the benefit that upgrading the SDK should rarely cause trouble, even when switching between minor versions (not just bugfix releases). The downside is that significant improvements to the API are often only possible as pure additions, which eventually leads to overloaded methods.

To support new server releases and prepare the SDK for years to come, we have decided to increase the major version of each SDK, and to take the opportunity to break APIs where we had to. As a result, migration from the previous major version to the new one will take some time and effort, an effort counterbalanced by a simpler API, faster coding, and better performance. The new API is built on years of hands-on experience with the current SDK, with a focus on simplicity, correctness, and performance.

Before this guide dives into the language-specific technical components of the migration, it is important to understand the high-level changes first. As a migration guide, this document assumes you are familiar with the previous generation of the SDK and does not re-introduce SDK 2.0 concepts. We recommend familiarizing yourself with the new SDK first by reading at least the getting started guide, and browsing through the other chapters a little.

Terminology

The concept of a Cluster and a Bucket remain the same, but a fundamental new layer is introduced into the API: Collections and their Scopes. Collections are logical data containers inside a Couchbase bucket that let you group similar data, just as a table does in a relational database, although documents inside a collection do not need to have the same structure. Scopes allow the grouping of collections into a namespace, which is very useful when you have multiple tenants accessing the same bucket. Couchbase Server includes support for collections as a developer preview in version 6.5; in a future release, it is hoped that collections will become a first class concept of the programming model. To prepare for this, the SDKs include the feature from SDK 3.0 onwards.

In the previous SDK generation, particularly in the KeyValue API, the focus was on the codified concept of a Document. Documents were read and written and had a certain structure, including the id/key, content, expiry (TTL), and so forth. While the server still operates on the logical concept of documents, we found that in practice this model did not work well for client code in certain edge cases. As a result we have removed the Document struct completely from the API. The new API follows a clear scheme: each command takes its required arguments explicitly, plus an options block for all optional values. The returned value is always of type Result. This avoids method-overloading bloat in certain languages, and has the added benefit of making the APIs consistent across services and easy to grasp.

As an example here is a KeyValue document fetch:

getResult, err := collection.Get("key", &gocb.GetOptions{
	Timeout: 2 * time.Second,
})

Compare this to a N1QL query:

queryResult, err := cluster.Query("select 1=1", &gocb.QueryOptions{
	Timeout: 3 * time.Second,
})

Since documents also fundamentally handled the serialization aspects of content, two new concepts are introduced: the Serializer and the Transcoder. Out of the box the SDKs ship with a JSON serializer which handles the encoding and decoding of JSON. You will find the serializer exposed as an option on methods such as N1QL queries and KeyValue sub-document operations.

The KV API extends the concept of the serializer to the Transcoder. Since you can also store non-JSON data inside a document, the Transcoder allows the writing of binary data as well. It handles the object/entity encoding and decoding, and if it happens to deal with JSON it makes use of the configured Serializer internally. See the Serialization and Transcoding section below for details.

What to look out for

The SDKs are more proactive in retrying on certain errors and in certain situations, within the timeout budget given by the user. As an example, temporary failures or locked documents are now retried by default, making it even easier to program against certain error cases. This behaviour is customizable through a RetryStrategy, which can be overridden on a per-operation basis for maximum flexibility if you need it.

Note that most of the bootstrap sequence is now lazy (happening behind the scenes). For example, opening a bucket no longer raises an error; a failure will only surface once you perform an actual operation. The reason behind this is to spare the application developer from having to do error handling in more places than needed. A bucket can go down 2ms after you opened it, so you have to handle request failures anyway. By delaying the error into the operation result itself, there is only one place to do the error handling. There will still be situations where you want to check that the resource you are accessing is available before continuing the bootstrap; for this, the diagnostics and ping commands at each level allow you to perform those checks eagerly.

Language Specifics

Now that you are familiar with the general theme of the migration, the next sections dive into the specifics. First, installation and configuration are covered, then we talk about error handling, and then each service (Key/Value, Query, and so on) is covered separately.

Installation and Configuration

The Go SDK 2.x is distributed through the Go modules system. All releases are posted to the couchbase/gocb GitHub repository and can be used by simply importing github.com/couchbase/gocb/v2 and invoking go get.
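From a module-enabled project, this is simply:

```shell
go get github.com/couchbase/gocb/v2
```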

Go SDK 2.x has a minimum required Go version of 1.13, although we recommend running the latest stable Go release with the highest patch version available.

Almost all configuration for the SDK can be specified through the ClusterOptions which are passed to the gocb.Connect call. In addition, as in Go SDK 1.x, the majority of these options can also be specified through the connection string. See the appropriate documentation for more information.

Authentication

Since Go SDK 1.x supports Couchbase Server clusters older than 5.0, it had to support both Role-Based Access Control (RBAC) and bucket-level passwords. The minimum cluster version supported by SDK 3 is Server 5.0, which means that only RBAC is supported. This is why you can set the username and password directly when connecting:

opts := gocb.ClusterOptions{
	Username: "Administrator",
	Password: "password",
}
cluster, err := gocb.Connect("10.112.193.101", opts)
if err != nil {
	panic(err)
}

This is just shorthand for:

opts := gocb.ClusterOptions{
	Authenticator: gocb.PasswordAuthenticator{
		Username: "Administrator",
		Password: "password",
	},
}
cluster, err := gocb.Connect("10.112.193.101", opts)
if err != nil {
	panic(err)
}

The reason why you can pass in a specific authenticator is that you can also use the same approach to configure certificate-based authentication:

cert, err := tls.LoadX509KeyPair("mycert.pem", "mykey.pem")
if err != nil {
	panic(err)
}

opts := gocb.ClusterOptions{
	Authenticator: gocb.CertificateAuthenticator{
		ClientCertificate: &cert,
	},
}
cluster, err := gocb.Connect("10.112.193.101", opts)
if err != nil {
	panic(err)
}

Please see the documentation on certificate-based authentication for detailed information on how to configure this properly.

Connection Lifecycle

From a high-level perspective, bootstrapping and shutdown are very similar to Go SDK 1.x. One notable difference is that the Collection is introduced, and that individual methods like Bucket return immediately and cannot fail. Compare SDK 2: the OpenBucket method would return an error if it could not open the bucket.

The reason behind this change is that even if a bucket can be opened, a millisecond later it may not be available any more. All this state has been moved into the actual operation so there is only a single place where the error handling needs to take place. This simplifies error handling and retry logic for an application.

In SDK 2, you connected, opened a bucket, performed a KV op, and disconnected like this:

cluster, _ := gocb.Connect("127.0.0.1")
cluster.Authenticate(gocb.PasswordAuthenticator{
	Username: "user",
	Password: "pass",
})

bucket, _ := cluster.OpenBucket("travel-sample", "")

var doc interface{}
cas, _ := bucket.Get("airline_10", &doc)

bucket.Close()

Here is the SDK 3 equivalent:

cluster, _ := gocb.Connect("127.0.0.1",
	gocb.ClusterOptions{Username: "user", Password: "pass"})
collection := cluster.Bucket("travel-sample").DefaultCollection()
getResult, _ := collection.Get("airline_10", nil)
cluster.Close(nil)

Collections will be generally available with an upcoming Couchbase Server release, but the SDK already encodes it in its API to be future-proof. If you are using a Couchbase Server version which does not support Collections, always use the DefaultCollection() method to access the KV API; it will map to the full bucket.

You’ll notice that Bucket(string) returns immediately, even if the bucket resources are not completely opened. This means that the subsequent Get operation may be dispatched even before the socket is open in the background. The SDK will handle this case transparently, and reschedule the operation until the bucket is opened properly. This also means that if a bucket could not be opened (say, because no server was reachable) the operation will time out. Please check the logs to see the cause of the timeout (in this case, you’ll see socket connect rejections).

Also note, you will now find Query, Search, and Analytics at the Cluster level. This is where they logically belong. If you are using Couchbase Server 6.5 or later, you will be able to perform cluster-level queries even if no bucket is open. If you are using an earlier version of the cluster you must open at least one bucket, otherwise cluster-level queries will fail.

Serialization and Transcoding

In SDK 2, the main way to control transcoding was through specifying a custom Transcoder instance at the top level. This concept has been extended to enable developers to specify per-operation Transcoder instances.

Additionally, the default transcoder has been modified to no longer transcode byte arrays, as a precaution against accidentally encoding strings as JSON or JSON as strings. A new LegacyTranscoder has been implemented which mirrors Go SDK 1.x’s behaviour.

Error Handling

How to handle errors has remained relatively unchanged from Go SDK 1.x, and continues to follow the idiomatic Go approach of returning errors as values.

However, in Go SDK 2.x we have updated our code to follow the latest Go error-handling best practices, providing an improved error interface built around the errors.Is and errors.As functions.

In version 1.x of the SDK, you may receive an error and compare it directly to one of the gocb.ErrSomething errors:

var doc interface{}
_, err := bucket.Get("airline_10", &doc)
if err == gocb.ErrKeyNotFound {
	// handle the error
}

In version 2.x of the SDK, you should instead check using the errors.Is function:

if errors.Is(err, gocb.ErrDocumentNotFound) {
	// handle your error
}

In addition, SDK 2.x provides the ability to gather additional contextual information about why an operation failed through the various error types:

if errors.Is(err, gocb.ErrDocumentNotFound) {
	var kverr gocb.KeyValueError
	if errors.As(err, &kverr) {
		log.Printf("Error Context: %+v\n", kverr)
	}
}