
Using Cross Data Center Replication (XDCR) with the Operator

      One of the Couchbase data platform’s key features is the ability to replicate bucket data to a geographically remote location, providing resilience in the event of a network or power failure.

      This page outlines a few Kubernetes deployment scenarios and how to configure your Couchbase clusters accordingly to accommodate XDCR.

      Deployment Scenarios

      XDCR Within the Same Kubernetes Cluster

      This is by far the simplest deployment. Since both Couchbase Server clusters are in the same pod network, and both have access to DNS, XDCR will just work out of the box. TLS can also be used to secure communications with a wildcard certificate.

      XDCR to a Different Kubernetes Cluster with Public DNS

      This option allows public connectivity and is the most flexible as it allows establishment of XDCR and use of clients from anywhere. It requires that the Kubernetes cluster is able to create publicly-addressable load balancer services, and the use of a dynamic DNS (DDNS) provider. Use of TLS is mandatory.

      Ensure that your clients support public connectivity before using this network model.

      XDCR to a Different Kubernetes Cluster with Routed Networking

      If the pod networks are able to communicate with each other, and you are using a service that is able to replicate the cluster DNS records, then XDCR will work once the services created for the Couchbase pods by the Operator have been replicated in the remote cluster. As DNS is available, TLS can also be used to secure communications with a wildcard certificate.

      XDCR to a Different Kubernetes Cluster with Overlay Networking

      This option is a catch-all for when none of the other options is possible. It requires that the two Kubernetes clusters can route between their host networks.

      You will need to enable XDCR under exposedFeatures in the cluster specification. TLS is currently not supported since addressing is via IP address in this scenario.
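      The snippet below is a minimal sketch of enabling this, assuming a CouchbaseCluster resource named cb-example and the spec.exposedFeatures field described above; verify the exact field names against your Operator version.

```shell
# Sketch (assumed resource name cb-example): enable the XDCR exposed
# feature by merging it into the cluster specification.
kubectl patch couchbasecluster cb-example --type merge --patch '
spec:
  exposedFeatures:
  - xdcr
'
```

      Once applied, the Operator creates per-pod NodePort services (such as the cb-example-0000-exposed-ports service shown later) exposing the relevant ports.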

      Establishing XDCR Connections

      XDCR Within the Same Kubernetes Cluster

      As all pods share the same pod network and their service names resolve via the cluster DNS, establishing an XDCR connection with the target is as simple as specifying something like cb-example-0000.cb-example.default.svc:8091 as the remote hostname.
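      The same connection can also be created from the command line with couchbase-cli; in this sketch, the pod names, the target cluster name cb-remote, and the credentials are all illustrative placeholders.

```shell
# Sketch: create an XDCR remote cluster reference from a source pod to a
# target pod's service DNS name. All names and credentials are examples.
kubectl exec cb-example-0000 -- couchbase-cli xdcr-setup \
  --cluster http://localhost:8091 \
  --username Administrator --password password \
  --create \
  --xdcr-cluster-name remote \
  --xdcr-hostname cb-remote-0000.cb-remote.default.svc:8091 \
  --xdcr-username Administrator --xdcr-password password
```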

      XDCR to a Different Kubernetes Cluster with Public DNS

      The diagram below depicts the network flow when establishing an XDCR connection via DNS and the spec.exposedFeatures property.

      XDCR between Kubernetes clusters

      With this topology, the features are exposed to the public internet by setting the spec.exposedFeatureServiceType to LoadBalancer in the CouchbaseCluster specification. This creates a load balancer service in your cloud that will be allocated a public IP address. A load balancer will be created for every pod in your cluster since Couchbase clients shard data across the cluster and expect to have access to individual pods. Since this service is publicly exposed, the Operator will filter out any unencrypted ports.
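      As a sketch, the relevant fields of the CouchbaseCluster specification would look something like the fragment below; required fields such as servers and buckets are omitted for brevity, and the apiVersion should match your Operator release.

```shell
# Sketch: write a CouchbaseCluster fragment exposing XDCR through public
# load balancers. Only the fields discussed here are shown.
cat <<'EOF' > couchbase0-exposed.yaml
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: couchbase0
spec:
  exposedFeatures:
  - xdcr
  exposedFeatureServiceType: LoadBalancer
EOF
```

      Applying such a specification with kubectl apply -f couchbase0-exposed.yaml causes the Operator to create one load balancer service per pod, as described above.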

      The use of TLS is mandatory in this mode of operation, as data and passwords are publicly exposed. Please consult the TLS documentation in order to correctly generate TLS certificates and Kubernetes secrets for your cluster.

      When generating server certificates, the *.${cluster}.${namespace}.svc DNS subject alternative name (SAN) is still required for the Operator to communicate with the cluster. An additional DNS SAN is required for the external DNS name of the cluster. In the above diagram, this would be *.couchbase0.cluster2.example.com, or more generally, *.${cluster}.${domain}.
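      As an example, the following generates a private key and certificate signing request carrying both SANs, assuming the couchbase0 cluster lives in the default namespace (requires OpenSSL 1.1.1 or later for the -addext option):

```shell
# Sketch: CSR carrying both the internal service wildcard SAN and the
# external DNS wildcard SAN. The "default" namespace is an assumption.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout pkey.key -out server.csr \
  -subj "/CN=couchbase-server" \
  -addext "subjectAltName=DNS:*.couchbase0.default.svc,DNS:*.couchbase0.cluster2.example.com"
```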

      The external DNS domain must be configured using the spec.dns.domain parameter. This domain parameter is used to generate annotations on the load balancer service so that a third-party controller can install the DNS name into a cloud-based DDNS provider when the load balancer IP address is allocated. The recommended controller is the external-dns project. You can also provide your own DDNS management by monitoring the services created by the Operator and adding an A record, mapped to the external load balancer IP, for the name advertised in the service’s external-dns.alpha.kubernetes.io/hostname annotation.
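      For example, the domain from the diagram could be set with a patch like the following (the resource name couchbase0 and the per-pod service name are assumptions based on the naming pattern shown later in this page):

```shell
# Sketch: configure the external DDNS domain on the cluster resource.
kubectl patch couchbasecluster couchbase0 --type merge --patch '
spec:
  dns:
    domain: cluster2.example.com
'
# Inspect the hostname annotation the Operator adds to an exposed service
# (service name follows the <cluster>-<pod ordinal>-exposed-ports pattern):
kubectl get service couchbase0-0000-exposed-ports \
  -o jsonpath='{.metadata.annotations.external-dns\.alpha\.kubernetes\.io/hostname}'
```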

      When establishing an XDCR connection to a remote cluster using DNS, the connection string in this example would be pod0.couchbase0.cluster2.example.com:18091 but may be that of any pod in the Couchbase cluster. If spec.adminConsoleServiceType is also set to LoadBalancer, you can also use the stable name console.couchbase0.cluster2.example.com:18091. Set the connection type to full TLS encryption and fill in the target cluster’s CA certificate only.
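      From the command line, an equivalent fully-encrypted remote reference could be created with couchbase-cli, as sketched below; the hostname, credentials, and the ca.pem path holding the target cluster’s CA certificate are illustrative.

```shell
# Sketch: XDCR remote cluster reference over TLS. Hostname, credentials,
# and the CA certificate path are example values.
couchbase-cli xdcr-setup --cluster http://localhost:8091 \
  --username Administrator --password password \
  --create \
  --xdcr-cluster-name cluster2 \
  --xdcr-hostname pod0.couchbase0.cluster2.example.com:18091 \
  --xdcr-username Administrator --xdcr-password password \
  --xdcr-demand-encryption 1 \
  --xdcr-certificate ca.pem
```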

      XDCR to a Different Kubernetes Cluster with Overlay Networking

      The diagram below depicts the network flow when establishing an XDCR connection via IP and the spec.exposedFeatures property.

      XDCR between overlay networks

      This assumes two nodes in separate Kubernetes clusters can communicate with each other. The router depicted could be a switch, router, VPN connection or any other infrastructure which allows layer 3 communication.

      The server ports used for communication between clusters for XDCR may need to be explicitly allowed, depending on your environment. By default, these ports will be in the range 30000-32767, but they may differ between cloud providers, or depend on your cluster configuration. If a connection fails to establish, please ensure that this port range is allowed by firewall and security group rules. This applies to all Kubernetes nodes that can host Couchbase Server pods.
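      To see which node ports the Operator has allocated for a given pod, and therefore which ports must be reachable through your firewall, you can list them directly (service name as in the example later in this page):

```shell
# Sketch: print the name and node port of every exposed port on one pod's
# service; all should fall inside the cluster's NodePort range.
kubectl get service cb-example-0000-exposed-ports \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.nodePort}{"\n"}{end}'
```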

      In the diagram, the pod in Cluster 1 is only able to connect to the node port exposed in Cluster 2. Furthermore, that port is only exposed on the node on which the pod is running. To determine the correct connection string on the XDCR target cluster:

      1. List all Couchbase pods

        $ kubectl get pods -l couchbase_cluster=cb-example
        NAME              READY     STATUS    RESTARTS   AGE
        cb-example-0000   1/1       Running   0          1h
        cb-example-0001   1/1       Running   0          1h
        cb-example-0002   1/1       Running   0          1h
      2. Choose one of the Couchbase pods and get its underlying node’s IP address:

        $ kubectl get pod cb-example-0000 -o yaml | grep hostIP
      3. Get the port number that maps to the admin port (8091)

        $ kubectl get service cb-example-0000-exposed-ports
        NAME                            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                         AGE
        cb-example-0000-exposed-ports   NodePort   10.110.53.60   <none>        8091:31202/TCP,18091:30641/TCP,8092:31515/TCP,18092:30362/TCP,11210:30065/TCP,11207:32093/TCP   1h

      If you were logged into the Couchbase Server Web Console on Cluster 1 and establishing the XDCR connection to Cluster 2, you would combine the node IP address from step 2 with the node port that maps to port 8091 from step 3, giving a connection string of the form <hostIP>:31202 for the example above.
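      The two lookups above can be combined into a single sketch that prints the connection string directly (pod and service names as in the example):

```shell
# Sketch: derive the XDCR connection string for the target cluster by
# joining the pod's host IP with the node port mapped to port 8091.
HOST_IP=$(kubectl get pod cb-example-0000 -o jsonpath='{.status.hostIP}')
ADMIN_PORT=$(kubectl get service cb-example-0000-exposed-ports \
  -o jsonpath='{.spec.ports[?(@.port==8091)].nodePort}')
echo "${HOST_IP}:${ADMIN_PORT}"
```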