Installing GKE

      Prepare Google Kubernetes Engine (GKE) for the Couchbase Autonomous Operator.

      Tutorials are accurate at the time of writing but rely heavily on third-party software. Tutorials are provided to demonstrate how a particular problem may be solved. Use of third-party software is not supported by Couchbase. For further help in the event of a problem, contact the relevant software maintainer.

      Examples are illustrated with the Google Cloud SDK; however, all steps can also be completed via the Google Cloud web console.

      Prerequisites

      Install the Google Cloud SDK, which provides the gcloud command. Once gcloud is installed and added to your PATH, log in with the following command:

      $ gcloud auth login
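
      To confirm which accounts are currently authenticated, and which one is active, you can list them:

      $ gcloud auth list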

      The gcloud command supports multiple logins. You can select the account to run as with:

      $ gcloud config set account john.doe@acme.com

      You can also remove the login authentication locally with:

      $ gcloud auth revoke john.doe@acme.com

      The project must also be set so that resources are provisioned into it:

      $ gcloud config set project my-project
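
      At any point you can verify the active account and project by printing the current configuration:

      $ gcloud config list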

      GKE Setup

      As with all cloud providers, configuring your cloud environment involves two main steps: first provisioning the virtual network, then provisioning the Kubernetes cluster.

      Network Setup

      For most users, it will suffice to use automatic subnet provisioning with the following command:

      $ gcloud compute networks create my-network

      For the purposes of this document, we will manually configure our subnets so that we can add the firewall rules necessary to allow XDCR between Couchbase clusters in different GKE clusters; by default, network traffic is dropped between different GKE clusters. We create two non-overlapping subnets in the 10.0.0.0/8 RFC 1918 private address space in different regions, then allow all ingress traffic from the 10.0.0.0/8 prefix via a firewall rule.

      $ gcloud compute networks create my-network \
        --subnet-mode custom
      $ gcloud compute networks subnets create my-subnet-us-east1 \
        --network my-network \
        --region us-east1 \
        --range 10.0.0.0/12
      $ gcloud compute networks subnets create my-subnet-us-west1 \
        --network my-network \
        --region us-west1 \
        --range 10.16.0.0/12
      $ gcloud compute firewall-rules create my-network-allow-all-private \
        --network my-network \
        --direction INGRESS \
        --source-ranges 10.0.0.0/8 \
        --allow all
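
      To check that the subnets and the firewall rule were created as expected, you can list them. The firewall-rules filter expression below is one way of restricting the output to our network:

      $ gcloud compute networks subnets list --network my-network
      $ gcloud compute firewall-rules list --filter "network:my-network"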

      Kubernetes Cluster Setup
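
      Before choosing a version, check which Kubernetes cluster versions GKE currently offers. One way to do this, using the us-east1 region from the examples below, is:

      $ gcloud container get-server-config --region us-east1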

      With a current version in hand, the next step is to create our Kubernetes clusters in each region. For example:

      $ gcloud container clusters create my-cluster-us-east1 \
        --cluster-version 1.14.10-gke.27 \
        --region us-east1 \
        --network my-network \
        --subnetwork my-subnet-us-east1
      $ gcloud container clusters create my-cluster-us-west1 \
        --cluster-version 1.14.10-gke.27 \
        --region us-west1 \
        --network my-network \
        --subnetwork my-subnet-us-west1

      When the clusters are running (the gcloud command blocks and returns once the clusters are healthy), you can install their credentials into your Kubernetes configuration with the following:

      $ gcloud container clusters get-credentials my-cluster-us-east1 \
        --region us-east1 \
        --project my-project
      $ gcloud container clusters get-credentials my-cluster-us-west1 \
        --region us-west1 \
        --project my-project
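
      To verify that the credentials were installed correctly and that a cluster is reachable, list its nodes:

      $ kubectl get nodes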

      You can select which cluster context to use by default with the following Kubernetes command:

      $ kubectl config use-context gke_my-project_us-east1_my-cluster-us-east1

      Other contexts you may have created can be seen with the following command:
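
      $ kubectl config get-contexts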

      Kubernetes Environment Setup

      By default, users on a new GKE cluster have limited privileges and cannot perform the operations necessary to deploy the Couchbase Autonomous Operator. To grant these privileges, run the following command on each cluster:

      $ kubectl create clusterrolebinding john-doe-admin-binding \
        --clusterrole cluster-admin \
        --user john.doe@acme.com
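
      The --user argument must match the e-mail address of the Google account you authenticated with. If you are unsure which account is active, you can display it with:

      $ gcloud config get-value account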