Configure Prometheus Metrics Collection
You can set up the Autonomous Operator to use the Couchbase Exporter or Couchbase Server’s native support for metrics collection. The Couchbase Exporter is a "sidecar" container that is injected into each Couchbase Server pod. Couchbase native support for metrics collection exposes a Prometheus-compatible endpoint on all pods without the use of third-party tools. Native support is available in Couchbase Server 7.0 and later, and is the recommended way to collect metrics with Prometheus.
Overview
Prometheus is the de facto metrics collection platform for use in Kubernetes environments. Prometheus can be used in conjunction with Couchbase Server for several use cases such as monitoring, metering, and auto-scaling. The "Prometheus Operator" is the recommended project for deploying and managing Prometheus alongside Couchbase.
Couchbase Prometheus Endpoint
As of Couchbase Server 7.0, a metric endpoint exists on each Couchbase Pod that is compatible with Prometheus metric collection.
This endpoint exists by default and does not require any configuration within the CouchbaseCluster
resource to begin receiving metrics into Prometheus.
Therefore, the only setup required is to direct Prometheus to the Couchbase metrics endpoint, which requires creating a dedicated metrics service along with a ServiceMonitor
resource, as outlined below.
The ServiceMonitor
is a Custom Resource created during installation of the Prometheus Operator that will direct Prometheus to monitor the dedicated metric service.
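For a quick sanity check of the endpoint itself, you can port-forward one of the Couchbase Server pods and query the metrics path with the cluster’s administrative credentials. The sketch below assumes a pod named cb-example-0000 in the default namespace, non-TLS administration on port 8091, and illustrative credentials:
$ kubectl port-forward pod/cb-example-0000 8091:8091 --namespace default
Then, in a second terminal:
$ curl -s -u Administrator:password http://localhost:8091/metrics | head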
Installing Prometheus Operator
Once you have your Couchbase cluster running, install the Prometheus Operator stack by following its "Quickstart Documentation".
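At the time of writing, the quickstart steps look roughly like the sketch below, which installs the kube-prometheus manifests into a monitoring namespace; treat the repository layout and commands as assumptions and defer to the Quickstart Documentation for the current procedure:
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/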
Integrating Couchbase Metrics
Create a Kubernetes Service which targets the metrics endpoint on the Couchbase cluster’s administrative console.
This is the raw service that will be used by Prometheus for metric collection.
Only one service is necessary for each Couchbase cluster since Couchbase aggregates statistics across all of the Pods.
Save the following configuration as a file named couchbase-prometheus-service.yaml
and make any modifications as described:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.couchbase.com/name: couchbase (1)
  name: cb-metrics
spec:
  ports:
    - name: metrics
      port: 8091 (2)
      protocol: TCP
  selector:
    app: couchbase
    couchbase_cluster: cb-example (3)
    couchbase_server: "true"
  sessionAffinity: ClientIP
1 | This label allows the Service to be selected by the Prometheus ServiceMonitor resource. |
2 | Port from which to fetch metrics. When the Couchbase cluster has TLS enabled, port 18091 must be used. |
3 | Replace cb-example with the name of your Couchbase cluster to ensure the correct Pods are selected for metric collection. |
Create the Service resource in the same namespace as the CouchbaseCluster
you wish to monitor; in this example we assume the default
namespace:
$ kubectl create -f couchbase-prometheus-service.yaml
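You can optionally confirm that the Service has selected the Couchbase Server pods as endpoints (the names below follow the example configuration above):
$ kubectl get service cb-metrics --namespace default
$ kubectl get endpoints cb-metrics --namespace default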
Now create a ServiceMonitor
resource to direct Prometheus to monitor this metric service.
The ServiceMonitor must be created in the same namespace as the Prometheus Operator. To use password authentication, you will also need to add your authentication secret in the same namespace as the ServiceMonitor. The default location of the Prometheus Operator is the monitoring namespace.
The following is an example ServiceMonitor
configuration.
Save as couchbase-prometheus-monitor.yaml
after making any necessary modifications as described below:
apiVersion: v1
kind: Secret
metadata:
  name: cb-example-auth
  namespace: monitoring
stringData:
  username: my-username
  password: my-password
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: couchbase-prometheus
  namespace: monitoring (1)
spec:
  endpoints:
    - interval: 5s
      port: metrics
      basicAuth: (2)
        password:
          name: cb-example-auth
          key: password
        username:
          name: cb-example-auth
          key: username
      tlsConfig: {} (3)
  namespaceSelector: (4)
    matchNames:
      - default
      - monitoring
  selector:
    matchLabels:
      app.couchbase.com/name: couchbase (5)
1 | The ServiceMonitor must be installed in the same namespace as the Prometheus Operator. |
2 | Prometheus Operator can authenticate to Couchbase using the same username and password as the Couchbase Operator as long as the authentication Secret also exists in the same namespace as the ServiceMonitor resource. |
3 | The tlsConfig is required when the metric target is exposed over TLS.
Refer to the TLS Configuration Reference documentation for supported attributes; a sketch of a populated tlsConfig follows this callout list.
In production you should ensure that your client certificates are signed by the same root CA that is installed within Couchbase Server. |
4 | You must explicitly list the namespaces in which to poll for services. |
5 | Selects the metric services to monitor.
This corresponds to the Service labels used in the previous step. |
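For reference, a populated tlsConfig for scraping the TLS port 18091 might look something like the sketch below. It assumes the metrics Service from the previous step targets port 18091, and the Secret name couchbase-server-tls, its ca.crt key, and the serverName value are illustrative assumptions; consult the TLS Configuration Reference for the authoritative attribute list:
endpoints:
  - interval: 5s
    port: metrics
    scheme: https
    tlsConfig:
      ca:
        secret:
          name: couchbase-server-tls  # Secret in the monitoring namespace containing the cluster CA
          key: ca.crt
      serverName: cb-example.default.svc  # must match a name in the server certificate
      # insecureSkipVerify: true  # testing only; never disable verification in production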
Commit the service monitor configuration:
$ kubectl create -f couchbase-prometheus-monitor.yaml -n monitoring
Verify
A quick way to check that metrics are being collected is to forward the Prometheus service port to your local machine and query the dashboard for Couchbase related metrics.
$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Try entering kv_ops
into the metric search bar.
If no results are returned, check Status → Targets for any unhealthy endpoints and also check the Prometheus Operator logs for additional information.
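You can also query the Prometheus HTTP API directly over the same port-forward; a non-empty result confirms that Couchbase metrics are being scraped (kv_ops assumes the Data Service is deployed):
$ curl -s 'http://localhost:9090/api/v1/query?query=kv_ops'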
Couchbase Prometheus Exporter
The Autonomous Operator provides Prometheus integration for collecting and exposing Couchbase Server metrics via the Couchbase Prometheus Exporter.
Prometheus metrics collection is enabled in the CouchbaseCluster
resource.
The configuration allows you to specify a Couchbase-provided container image that contains the Prometheus Exporter.
The Autonomous Operator injects the image as a "sidecar" container in each Couchbase Server pod.
The Couchbase-supplied Prometheus Exporter container image is only supported on Kubernetes platforms in conjunction with the Couchbase Autonomous Operator.
Couchbase Exporter Configuration
Prometheus metrics collection is enabled in the couchbaseclusters.spec.monitoring.prometheus section of the CouchbaseCluster
resource:
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
spec:
  monitoring:
    prometheus:
      enabled: true (1)
      image: couchbase/exporter:1.0.6 (2)
      authorizationSecret: cb-metrics-token (3)
1 | Setting couchbaseclusters.spec.monitoring.prometheus.enabled to true enables injection of the sidecar into Couchbase Server pods. |
2 | If the couchbaseclusters.spec.monitoring.prometheus.image field is left unspecified, then the dynamic admission controller will automatically populate it with the most recent container image that was available when the installed version of the Autonomous Operator was released.
The default image for open source Kubernetes comes from Docker Hub, and the default image for OpenShift comes from Red Hat Container Catalog.
If pulling directly from the Red Hat Container Catalog, then the path will be something similar to |
3 | You can optionally specify a Kubernetes Secret that contains a bearer token value that clients will need to use to gain access to the Prometheus metrics. |
If you wish to create a Kubernetes secret with a bearer token, simply edit and create the following definition:
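The sketch below assumes a single stringData key named token holding the bearer token value, and uses the cb-metrics-token name referenced by authorizationSecret above; create it in the same namespace as the CouchbaseCluster resource:
apiVersion: v1
kind: Secret
metadata:
  name: cb-metrics-token
  namespace: default
stringData:
  token: my-bearer-token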
-
You can enable/disable Prometheus metric collection at any time during the cluster’s lifecycle. However, since Pod resources are immutable, enabling or disabling metric collection will require a rolling upgrade of the cluster.
-
Only the official Prometheus Exporter images provided by Couchbase are supported (with the exception of user-supplied images that specifically enable customized metrics).
In addition, you should ensure that your image source is trusted. The sidecar container requires access to the Couchbase cluster's administrative credentials in order to log in and perform collection. Granting these credentials to arbitrary code is potentially harmful.
Customizing Metrics
By default, the exporter collects all metrics from Couchbase Server, with each metric having a default name and default metadata associated with it. The exporter supports certain user-configurable customizations to these defaults.
The Couchbase Prometheus Exporter currently supports the following customizations:
-
Change the namespace, subsystem, name, and help text for each metric.
-
Enable and disable whether a metric is exported to Prometheus.
These customizations are enabled by building your own custom version of the Couchbase Prometheus Exporter container image.
For instructions on how to create a custom metrics configuration and build it into a container image, refer to the couchbase-exporter
README.
Using Exported Metrics
Once configured, active metrics can be collected from each Couchbase Server pod on port 9091.
The Autonomous Operator does not create or manage resources for third-party software. Prometheus scrape targets must be manually created by the user.
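One way to create such a target is a dedicated Service and ServiceMonitor analogous to those shown earlier, but pointing at the exporter port 9091. The sketch below is illustrative; the label app.couchbase.com/name: couchbase-exporter and the Service name are assumptions:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.couchbase.com/name: couchbase-exporter
  name: cb-exporter-metrics
spec:
  ports:
    - name: metrics
      port: 9091
      protocol: TCP
  selector:
    app: couchbase
    couchbase_cluster: cb-example
    couchbase_server: "true"
A ServiceMonitor selecting the app.couchbase.com/name: couchbase-exporter label, similar to the one created earlier, then completes the integration. If an authorizationSecret is configured, the scrape configuration must also present that bearer token.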
Couchbase Cluster Auto-scaling
The Autonomous Operator supports auto-scaling Couchbase clusters. In order to properly take advantage of this feature, users must expose Couchbase metrics through the Kubernetes custom metrics API.
Discovery of available metrics can be performed through Prometheus queries. However, the Couchbase Exporter repository contains a convenient list of the Couchbase metrics being exported.
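As a rough sketch of the plumbing, a tool such as prometheus-adapter can surface scraped series through the custom metrics API using rules like the one below; the metric and label names are assumptions and depend on how your metrics are exported:
rules:
  # Select the Couchbase series to expose (metric and label names are illustrative)
  - seriesQuery: 'kv_ops{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}  # map the namespace label to the Namespace resource
        pod: {resource: "pod"}              # map the pod label to the Pod resource
    name:
      matches: "^(.*)$"
      as: "${1}"                            # expose the metric under its original name
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'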
Some metric names are further explained in the Couchbase Server documentation:
-
Data Service
-
Index Service
-
Query Service
-
Search Service
-
Eventing Service
-
XDCR
-
Clusters and Nodes