Installing on Kubernetes
This guide walks through the recommended procedure for installing the Couchbase Autonomous Operator on an open source Kubernetes cluster that has RBAC enabled.
If you are looking to upgrade an existing installation of the Operator, see Upgrading the Autonomous Operator.
This guide assumes that:
- You have a working knowledge of Kubernetes
- You have reviewed the prerequisites
- You are installing on a new Kubernetes cluster that has RBAC enabled
  - See the setup documentation for public Kubernetes services if you are using a service such as EKS, AKS, or GKE.
- You have administrative privileges for the Kubernetes cluster
If your Kubernetes environment has a custom (non-default) setup, you can adjust the parameters in the commands and configuration files mentioned in this guide to fit your requirements.
This guide makes certain assumptions about the role-based access control (RBAC) settings in your Kubernetes environment. Refer to the RBAC documentation before you install the Operator, as your Kubernetes environment may differ from the environment upon which this guide is based.
Prerequisites
- Make sure you have installed the admission controller and that it is running.
- Download the Operator package and unpack it on the same computer where you normally run kubectl. The Operator package contains YAML configuration files and command-line tools that you will use to install the Operator.
  After you unpack the download, the resulting directory will be titled something like couchbase-autonomous-operator-kubernetes_x.x.x-linux_x86_64. Make sure to cd into this directory before you run the commands in this guide.
Install the CRD
The first step in installing the Operator is to install the custom resource definition (CRD) that describes the CouchbaseCluster resource type.
This can be achieved with the following command:
kubectl create -f crd.yaml
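If you want to confirm that the CRD was registered before moving on, you can list it by name (the same CRD name that appears in the uninstall step at the end of this guide):
kubectl get crd couchbaseclusters.couchbase.com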
Create a Role
The next step in installing the Operator is to create a role that allows the Operator to access the resources that it needs to run.
If you plan to run the Operator in more than one namespace, it may be preferable to create a cluster role, because you can assign a cluster role to a service account in any namespace. Note that if you choose to use a cluster role, any modifications to it will affect all instances of the Operator that use it. This has the potential to cause adverse effects in production environments, such as removing permissions required by older instances of the Operator.
To create the role for the Operator, run the following command:
kubectl create -f operator-role.yaml --namespace default
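To double-check that the role was created, you can list the roles in the namespace; the exact role name depends on the contents of operator-role.yaml:
kubectl get roles --namespace default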
Create a Service Account
After the role is created, you need to create a service account in the namespace where you are installing the Operator, and then assign the role to that service account using a role binding. (In this example, the service account is created in the namespace called default.)
To create the service account:
kubectl create -f operator-service-account.yaml --namespace default
To assign the role to the service account:
kubectl create -f operator-role-binding.yaml --namespace default
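As a quick sanity check, you can list the service account and role binding that these files create; the resource names depend on the contents of the YAML files:
kubectl get serviceaccounts,rolebindings --namespace default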
If you choose to install the Operator in a namespace other than default, be sure to modify the namespace referenced in the commands and YAML configuration files accordingly.
Create the Operator
Now that the service account is set up with the appropriate permissions, you can create and start the Operator by running the following command:
kubectl create -f operator-deployment.yaml --namespace default
Running this command downloads the Operator Docker image (specified in the operator-deployment.yaml file) and creates a deployment, which manages a single instance of the Operator. The Operator uses a deployment so that it can restart if the pod it’s running in dies.
After you run the kubectl create command, it generally takes less than a minute for Kubernetes to deploy the Operator and for the Operator to be ready to run.
The images that are specified in the provided YAML files are hosted in common public container registries. Couchbase regularly publishes new container images that incorporate the latest security updates and patches. In rare cases, a registry’s security policy may force the removal of older container images, causing the provided YAML definitions to become invalid. It’s recommended that you check the registry for new and discontinued container images, and update your deployment accordingly. To use a newer container image, simply update the container image URL in operator-deployment.yaml.
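If you prefer to update a running deployment rather than editing operator-deployment.yaml and recreating it, one option is kubectl set image. The container name couchbase-operator and the image reference below are placeholders; substitute the values from your operator-deployment.yaml:
kubectl set image deployment/couchbase-operator couchbase-operator=<new-image-url> --namespace default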
Check the Status of the Deployment
You can use the following command to check on the status of the deployment:
kubectl get deployments
If you run this command immediately after the Operator is deployed, the output will look something like the following:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
couchbase-operator 1 1 1 0 10s
In this case, the deployment is called couchbase-operator. The DESIRED field in the output shows that this deployment will create one pod running the Operator. The CURRENT field shows that one Operator pod has been created. However, the AVAILABLE field indicates that the pod is not ready yet, since its value is 0 and not 1. That means that the Operator is still establishing a connection to the Kubernetes master node to allow it to get updates on CouchbaseCluster objects. Once the Operator has completed this task, it will be able to start managing Couchbase Server clusters and the status will be shown as AVAILABLE.
You should continue to poll the status of the Operator until the output looks similar to the following:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
couchbase-operator 1 1 1 1 47s
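If you prefer not to poll manually, you can ask kubectl to block until the deployment reports as available; this assumes the deployment name couchbase-operator shown above:
kubectl rollout status deployment/couchbase-operator --namespace default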
Check the Status of the Operator
You can use the following command to verify that the Operator has started successfully:
kubectl get pods -l app=couchbase-operator
If the Operator is up and running, the command returns an output where the READY field shows 1/1, such as:
NAME READY STATUS RESTARTS AGE
couchbase-operator-1917615544-t5mhp 1/1 Running 0 57s
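To watch the pod until it becomes ready instead of re-running the command, you can add the --watch flag to the same label-selector query:
kubectl get pods -l app=couchbase-operator --watch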
You can also check the logs to confirm that the Operator is up and running. Look for the message: CRD initialized, listening for events… module=controller.
kubectl logs couchbase-operator-1917615544-t5mhp
You should see output similar to the following:
time="2018-04-25T03:01:56Z" level=info msg="Obtaining resource lock" module=main
time="2018-04-25T03:01:56Z" level=info msg="Starting event recorder" module=main
time="2018-04-25T03:01:56Z" level=info msg="Attempting to be elected the couchbase-operator leader" module=main
time="2018-04-25T03:02:13Z" level=info msg="I'm the leader, attempt to start the operator" module=main
time="2018-04-25T03:02:13Z" level=info msg="Creating the couchbase-operator controller" module=main
time="2018-04-25T03:02:13Z" level=info msg="Event(v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"default\", Name:\"couchbase-operator\", UID:\"9b86c750-47e7-11e8-866e-080027b2a68d\", APIVersion:\"v1\", ResourceVersion:\"23482\", FieldPath:\"\"}): type: 'Normal' reason: 'LeaderElection' couchbase-operator-75ddfdbdb5-bz7ng became leader" module=event_recorder
time="2018-04-25T03:02:13Z" level=info msg="CRD initialized, listening for events..." module=controller
time="2018-04-25T03:02:13Z" level=info msg="starting couchbaseclusters controller"
Uninstalling the Operator
- Delete the Operator.
  kubectl delete deployment couchbase-operator
- Delete the CRD.
  Make sure all instances of the Operator have been deleted from the Kubernetes cluster before you delete the CRD. Once all instances of the Operator have been deleted, run the following command to delete the CRD:
  kubectl delete crd couchbaseclusters.couchbase.com
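If you also want to remove the supporting resources created earlier in this guide, one approach (assuming the same YAML files and the default namespace used above) is to delete them with the same files:
kubectl delete -f operator-role-binding.yaml --namespace default
kubectl delete -f operator-service-account.yaml --namespace default
kubectl delete -f operator-role.yaml --namespace default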