Installing EKS

      Prepare Amazon Elastic Kubernetes Service (EKS) for the Couchbase Autonomous Operator.

      Tutorials are accurate at the time of writing but rely heavily on third party software. Tutorials are provided to demonstrate how a particular problem may be solved. Use of third party software is not supported by Couchbase. For further help in the event of a problem, contact the relevant software maintainer.

      This guide supplements the regular Amazon EKS setup documentation, and provides recommendations up to the point of setting up worker nodes.

      Create the Cluster and Configure kubectl

      Create the EKS cluster(s) according to the Amazon documentation, using either eksctl or the AWS Management Console. Actual cluster creation in EKS can take some time (around 10-15 minutes per cluster).
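
      For example, a minimal eksctl invocation might look like the following. The cluster name, region, and node count are placeholders; adjust them for your environment:

      $ eksctl create cluster --name my-cluster --region us-west-2 --nodes 3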

      Couchbase Autonomous Operator 2.2.0 added support for Bottlerocket OS. Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers. To create an EKS cluster with Bottlerocket OS, follow the steps in the Amazon documentation.
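
      As a sketch, eksctl can provision Bottlerocket worker nodes by setting amiFamily in a cluster configuration file. The file name, cluster name, region, instance type, and node count below are illustrative only:

      # bottlerocket-cluster.yaml (hypothetical file name)
      apiVersion: eksctl.io/v1alpha5
      kind: ClusterConfig
      metadata:
        name: bottlerocket-cluster
        region: us-west-2
      nodeGroups:
      - name: ng-bottlerocket
        instanceType: m5.large
        desiredCapacity: 3
        # Use Bottlerocket AMIs for this node group
        amiFamily: Bottlerocket

      $ eksctl create cluster -f bottlerocket-cluster.yaml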

      Configure kubectl

      The following section applies only when using the AWS Management Console method in the Amazon EKS setup documentation.

      When configuring kubectl via the AWS Management Console, pay particular attention to the following command:

      $ aws eks --region region-code update-kubeconfig --name cluster_name

      This command is vital, as it sets the relevant Amazon Resource Name (ARN) variables in ~/.kube/config. The --region flag is only needed when the cluster is in a region other than your default. (Your default region should have been specified when you first set up the AWS CLI with the aws configure command.)
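
      For example, to write the kubeconfig entry for the test1 cluster shown in the configuration example below:

      $ aws eks --region eu-west-1 update-kubeconfig --name test1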

      Validate kubectl Configuration

      Once you have followed the setup instructions for either method, view the configuration at ~/.kube/config to ensure that all variables look correct:

      $ kubectl config view

      The following example shows two AWS context configurations appended to an existing Minikube one:

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: <REDACTED>
          server: https://E8A3753F682B8098CAFFAEDCF1339471.yl4.eu-west-1.eks.amazonaws.com
        name: arn:aws:eks:eu-west-1:705067739613:cluster/test1
      - cluster:
          certificate-authority-data: <REDACTED>
          server: https://1B9C893E96EAA59350290F34D41A417E.yl4.us-west-2.eks.amazonaws.com
        name: arn:aws:eks:us-west-2:705067739613:cluster/oregon
      - cluster:
          certificate-authority: /Users/.../.minikube/ca.crt
          server: https://192.168.99.100:8443
        name: minikube
      contexts:
      - context:
          cluster: arn:aws:eks:eu-west-1:705067739613:cluster/test1
          user: arn:aws:eks:eu-west-1:705067739613:cluster/test1
        name: arn:aws:eks:eu-west-1:705067739613:cluster/test1
      - context:
          cluster: arn:aws:eks:us-west-2:705067739613:cluster/oregon
          user: arn:aws:eks:us-west-2:705067739613:cluster/oregon
        name: arn:aws:eks:us-west-2:705067739613:cluster/oregon
      - context:
          cluster: minikube
          user: minikube
        name: minikube
      current-context: arn:aws:eks:us-west-2:705067739613:cluster/oregon
      kind: Config
      preferences: {}
      users:
      - name: arn:aws:eks:eu-west-1:705067739613:cluster/test1
        user:
          exec:
            apiVersion: client.authentication.k8s.io/v1alpha1
            args:
            - token
            - -i
            - test1
            command: aws-iam-authenticator
            env: null
      - name: arn:aws:eks:us-west-2:705067739613:cluster/oregon
        user:
          exec:
            apiVersion: client.authentication.k8s.io/v1alpha1
            args:
            - token
            - -i
            - oregon
            command: aws-iam-authenticator
            env: null
      - name: minikube
        user:
          client-certificate: /Users/.../.minikube/client.crt
          client-key: /Users/.../.minikube/client.key

      To switch between Kubernetes contexts, use kubectl config use-context <context-name>. For example:

      $ kubectl config use-context arn:aws:eks:eu-west-1:705067739613:cluster/test1
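
      If you are unsure which contexts are available, kubectl config get-contexts lists them and marks the current one:

      $ kubectl config get-contexts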

      Configuring for XDCR

      If you wish to take advantage of Cross Data Center Replication (XDCR) using the Operator, there are a few extra steps to complete.

      Create the Peering Connection

      Follow the Amazon documentation to create a peering connection between two VPCs in the availability zones of your choosing. Bear in mind that each cluster needs a different, non-overlapping CIDR range, drawn from the private ranges defined in RFC 1918: 192.168.0.0/16, 172.16.0.0/12, or 10.0.0.0/8.

      You then need to accept the connection request in the region of the accepting VPC to establish the connection.
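
      As a rough sketch with the AWS CLI (the VPC IDs and the pcx- ID below are placeholders; the regions match the example clusters above), the request and acceptance look like this:

      $ aws ec2 create-vpc-peering-connection --region eu-west-1 \
          --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222 --peer-region us-west-2
      $ aws ec2 accept-vpc-peering-connection --region us-west-2 \
          --vpc-peering-connection-id pcx-33333333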

      Configure Route Tables and Security Groups

      Once the peering connection is accepted, you must add a route to the route tables of each VPC so that they can send and receive traffic across the VPC peering connection. To do this, go to Route Tables and select the route table associated with the public subnet of one of your VPCs. Select Routes and then Edit. Add the other VPC’s CIDR block as the Destination, and the peering connection ID (pcx-) as the Target. Repeat these steps for the public subnet of the other VPC.
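
      The equivalent AWS CLI call is sketched below; the route table ID, peering connection ID, and destination CIDR (the other VPC’s CIDR block) are placeholders:

      $ aws ec2 create-route --route-table-id rtb-44444444 \
          --destination-cidr-block 172.16.0.0/16 --vpc-peering-connection-id pcx-33333333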

      Next, find the security groups for the worker node CloudFormation stacks and edit their Inbound Rules as follows (example AWS CLI commands follow the list):

      • Allow TCP traffic on ports 30000 - 32767 from the other cluster’s CIDR. Do this for both clusters so that the nodes in each cluster can talk to each other.

      • Allow ICMP traffic from the other cluster’s CIDR so that you can ping the other cluster’s nodes and verify that the network is set up as desired.
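
      The following AWS CLI calls sketch these two rules for one cluster’s worker node security group. The security group ID and CIDR (the other cluster’s CIDR block) are placeholders, and you would repeat the commands with the roles reversed for the second cluster:

      $ aws ec2 authorize-security-group-ingress --group-id sg-55555555 \
          --protocol tcp --port 30000-32767 --cidr 172.16.0.0/16
      $ aws ec2 authorize-security-group-ingress --group-id sg-55555555 \
          --ip-permissions IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges='[{CidrIp=172.16.0.0/16}]'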