Couchbase Networking

    Connecting to a Couchbase cluster in Kubernetes is challenging. This section outlines supported strategies and key concepts.

    Couchbase Server is a high-performance database whose data is distributed across all pods in a cluster. Clients know which pod a given data item should reside on and perform client-side load balancing. Performing load balancing in the client avoids unnecessary network hops and improves performance. For this reason, Couchbase Server cannot be accessed using normal Kubernetes Service or Ingress resources.
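    Client-side load balancing can be sketched as a key-to-partition lookup against a cluster-supplied map. The hash below is a simplified illustration, not the exact CRC32 variant the Couchbase SDKs use, and the pod names and round-robin vBucket assignment are hypothetical:

```python
import zlib

# Couchbase Server distributes a bucket's data across 1024 vBuckets
# (partitions); each vBucket is owned by exactly one pod.
NUM_VBUCKETS = 1024

def vbucket_for_key(key: str) -> int:
    # Simplified sketch of key -> vBucket hashing; real SDKs use a
    # specific CRC32 variant, so treat this as illustrative only.
    return zlib.crc32(key.encode("utf-8")) % NUM_VBUCKETS

def pod_for_key(key: str, vbucket_map: list, pods: list) -> str:
    # The cluster map (fetched from the server on bootstrap) tells the
    # client which pod owns each vBucket, so the client dials that pod
    # directly with no intermediate load balancer.
    return pods[vbucket_map[vbucket_for_key(key)]]

# Hypothetical three-pod cluster with vBuckets assigned round-robin.
pods = ["cb-example-0000", "cb-example-0001", "cb-example-0002"]
vbucket_map = [i % len(pods) for i in range(NUM_VBUCKETS)]
owner = pod_for_key("user::1234", vbucket_map, pods)
```

    Because the client resolves the owning pod itself, a Service or Ingress that load-balances across pods would route most requests to the wrong node.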

    The following options depict the possible networking topologies for connecting clients to a Couchbase cluster. Unless otherwise stated, clients include all Couchbase Client SDKs, Couchbase Mobile, and Couchbase XDCR connections.

    Intra-Kubernetes Networking

    Intra-Kubernetes networking is the simplest option and applies to clients running in the same Kubernetes cluster as the Couchbase Server instance. This method of communication can be used by any client.

    Figure 1. Basic Intra-Kubernetes Networking

    The client can use endpoint DNS entries to connect to individual Couchbase nodes. Stable service discovery is provided by SRV records. TLS can be used to secure communications.
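    The DNS names involved can be sketched as follows, assuming a cluster named cb-example in namespace default. The per-pod naming convention shown follows common Operator behavior but is illustrative; verify the actual names in your deployment:

```python
# Sketch of the DNS names a client uses inside the Kubernetes cluster.
# Cluster name "cb-example" and namespace "default" are assumptions.

def srv_record(cluster: str, namespace: str, tls: bool = True) -> str:
    # Clients bootstrap via a DNS SRV lookup for stable service
    # discovery; "_couchbases" is the TLS service name and
    # "_couchbase" the plaintext one.
    service = "_couchbases" if tls else "_couchbase"
    return f"{service}._tcp.{cluster}.{namespace}.svc.cluster.local"

def pod_endpoint(cluster: str, namespace: str, ordinal: int) -> str:
    # Each pod also gets a stable per-node endpoint DNS entry the
    # client connects to directly.
    return f"{cluster}-{ordinal:04d}.{cluster}.{namespace}.svc.cluster.local"

print(srv_record("cb-example", "default"))
print(pod_endpoint("cb-example", "default", 0))
```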

    Inter-Kubernetes Networking with Forwarded DNS

    Inter-Kubernetes networking allows clients to connect to Couchbase server instances in a remote Kubernetes cluster. It uses the DNS service offered by the remote Kubernetes cluster to provide addressing.

    A local DNS server is used by Couchbase Server and clients to forward DNS requests for a remote namespace to a remote Kubernetes cluster. Requests that do not fall into this DNS zone are forwarded to the local DNS service. Any DNS server that supports forwarding of zones to remote DNS servers can be used, but we recommend CoreDNS as the de facto cloud-native standard.
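    The forwarding arrangement can be sketched as a CoreDNS configuration. This Corefile fragment is illustrative only: the zone name and the remote DNS service IP (10.8.0.10) are assumptions that depend on your clusters' DNS domains and service addressing.

```
# Forward lookups for the remote cluster's DNS zone to that cluster's
# DNS service IP; everything else goes to the local resolver.
remote-namespace.svc.remote.local:53 {
    forward . 10.8.0.10
}
.:53 {
    forward . /etc/resolv.conf
}
```

    CoreDNS's forward plugin matches the most specific zone first, so only lookups for the remote zone leave the local cluster.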

    This method of communication can only be used if clients in one cluster can communicate with pods in another — it uses routed networking. For example:

    • Google GKE allows multiple Kubernetes clusters in the same virtual private cloud, and has routed networking by default.

    • Amazon AWS can use the VPC CNI plugin to create routed networks that can be peered together.

    Figure 2. Basic Inter-Kubernetes Networking

    The client can use endpoint DNS entries to connect to individual Couchbase nodes. Stable service discovery is provided by SRV records. TLS can be used to secure communications.

    Public Networking with External DNS

    Public networking allows clients to connect to Couchbase server instances from anywhere with an Internet connection.

    External DNS is a service that runs in a Kubernetes namespace. It advertises the public IP addresses of load-balancer services to public dynamic DNS (DDNS) providers. This networking type requires Exposed Features and has some client limitations.

    Figure 3. Basic Public Networking

    The client can use DNS names for load-balancer service public IP addresses to connect to individual Couchbase nodes. Stable service discovery is provided by a load-balanced HTTP connection to the cluster admin service. TLS must be used to secure communications.

    Generic Networking

    Generic networking is discouraged for production deployments and should be avoided in favor of one of the prior methods of communication. This networking type requires Exposed Features and has some client limitations.

    Figure 4. Generic Networking

    The client uses Kubernetes node ports to connect to individual Couchbase nodes. Stable service discovery is not possible. TLS cannot be used to secure communications.

    Exposed Features

    Both Public Networking with External DNS and Generic Networking require client traffic to cross a DNAT boundary. In both cases clients connect to different IP addresses than those of the underlying Couchbase pods. As a result, when clients initially contact Couchbase Server to retrieve a map of nodes and their hostnames, the Operator must override the internal Kubernetes DNS names.

    Setting spec.networking.exposedFeatures when creating a Couchbase cluster instructs the Operator to override the node mappings so that clients are given addresses they can actually reach. It also causes the Operator to create separate services that are visible outside the Kubernetes cluster.

    Additionally, spec.networking.dns.domain must be specified when using Public Networking with External DNS. This populates the cluster node maps with DNS names and annotates the services so that External DNS can replicate them to a cloud DDNS service provider.
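    Putting the two settings together, a minimal sketch of the relevant CouchbaseCluster fields might look as follows. The apiVersion, the feature names shown, and the domain are illustrative assumptions; consult the Operator reference for the values valid in your version.

```yaml
apiVersion: couchbase.com/v2   # assumed Operator API version
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  networking:
    # Instructs the Operator to rewrite node mappings and create
    # externally visible services for these features.
    exposedFeatures:        # feature names below are illustrative
      - client
      - xdcr
    dns:
      # Used with External DNS: node maps and service annotations
      # are populated with names under this domain.
      domain: example.com
```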

    Supported Client Versions for use with Exposed Features

    As Public Networking with External DNS and Generic Networking require exposed features, clients must support them. The minimum supported versions are listed below.

    Table 1. Supported Clients for Exposed Features

    Client                               Version
    -----------------------------------  ------------------------------------------------
    Couchbase Server (XDCR)              6.0.1+ or 5.5.3+
    Node.js SDK                          2.5.0+ (when not using the embedded libcouchbase
                                         and building with an external installation,
                                         libcouchbase 2.9.2+ is required)
    PHP SDK                              Any supported version using libcouchbase 2.9.2+
    Python SDK                           Any supported version using libcouchbase 2.9.2+
    C SDK (a.k.a. libcouchbase)          2.9.2+
    Java SDK                             2.7.7+
    .NET SDK                             2.7.9+
    Go SDK                               1.6.1+
    Couchbase Sync Gateway               2.7.0+[1]
    Couchbase Elasticsearch Connector    4.1.0+
    Couchbase Kafka Connector            3.4.5+


    1. A known issue exists (K8S-1585) where lookups may fail when using DNS SRV over TLS to connect to a Couchbase cluster in the same Kubernetes cluster. In such cases, the workaround is to add wildcard matches to the Subject Alternative Names (SANs), as discussed in the Creating TLS Certificates tutorial.