
Deployment


      When deploying your application to production, you will need both Sync Gateway and Couchbase Server. This article covers different aspects of using Sync Gateway and Couchbase Server in production.

      Where to Host

      Whether hosting on-premise or in the cloud, you will want your Sync Gateway and Couchbase Server instances to sit close to each other for optimal performance between the two systems. In a production environment, they are expected to be deployed on separate machines, because Sync Gateway is typically internet facing and sits in the "Application tier" whilst Couchbase Server is deployed in the "Database tier".

      Sizing and Scaling

      The capacity of your physical machines, containers, or VMs determines how many active, concurrent users a single Sync Gateway can comfortably support.

      Alternatively, instead of scaling vertically, you can scale horizontally by running Sync Gateway nodes as a cluster. In general, you will want at least two Sync Gateway nodes to ensure high availability in case one fails. This means running a Sync Gateway instance on each of several machines and load-balancing them by directing each incoming HTTP request to a random node.

      The Sync Gateway nodes in a cluster have a homogeneous configuration, with the exception of the import and replicator nodes.

      Import node

      Under convergence/shared bucket access, it is recommended that one Sync Gateway node in a cluster be configured to handle document import processing. For high availability, you can configure more than one Sync Gateway node as an import node, although designating many nodes in the cluster for import processing is strongly discouraged. The configuration of a Sync Gateway import node differs slightly from that of the "regular" or "non-import" nodes (see databases.$db.import_docs).
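
      As a minimal sketch, an import node's database configuration might look like the following; the server address, bucket name, and credentials are placeholders, and property names should be checked against the configuration reference for your release. The "non-import" nodes would use the same configuration with the import_docs property omitted.

          {
            "databases": {
              "db": {
                "server": "http://cb-server.example.com:8091",
                "bucket": "my-bucket",
                "username": "sync_gateway",
                "password": "password",
                "enable_shared_bucket_access": true,
                "import_docs": "continuous"
              }
            }
          }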

      Replicator node

      If you are using inter-cluster replication with sg-replicate, there will be one designated replicator node whose configuration differs from that of the rest of the nodes.
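
      As an illustrative sketch, the replicator node might additionally carry a top-level replications property that the other nodes omit; the replication_id, remote URL, and database names below are placeholders, and the exact properties should be verified against the sg-replicate documentation for your release.

          {
            "replications": [
              {
                "replication_id": "pull-from-remote-cluster",
                "source": "http://remote-sg.example.com:4985/remote-db",
                "target": "db",
                "continuous": true
              }
            ]
          }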

      Sync Gateway nodes are "shared-nothing," so they don’t need to coordinate any state or even know about each other. With multiple Sync Gateways, we recommend placing the cluster behind a load balancer to distribute client connection requests across the nodes (see the Load Balancer guide).

      Performance Considerations

      Keep in mind the following notes on performance:

      • Sync Gateway nodes don’t keep any local state, so they don’t require disk storage.

      • Sync Gateway nodes maintain a channel and revision metadata cache in RAM. Tuning the cache values in the configuration file can improve performance (see the databases.$db.cache properties and the configuration sketch after this list).

      • Sync Gateway is designed for multiprocessing. It uses lightweight threads and asynchronous I/O. Therefore, adding more CPU cores to a Sync Gateway node can speed it up.

      • As is typical with databases, writes are going to put a greater load on the system than reads. In particular, every write operation gets processed by the Sync Function and triggers notifications to other clients with read access, who then perform reads to get the new data.

      • Each client running a continuous replication keeps a socket open to be notified of changes. These sockets remain idle most of the time (unless documents are being modified at a very high rate), so the actual data traffic is low; the challenge is managing that many open sockets. We recommend that developers optimize the number of connections they open to the sync tier (see the OS Level Tuning guide).

      • In a Sync Gateway deployment with GSI/N1QL indexing, the resources allocated to the Couchbase Server index node must be sufficient to support Sync Gateway operations.
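
      As a sketch of the cache tuning mentioned in the notes above, the channel cache can be sized through the databases.$db.cache properties; the values below are illustrative only, and the available property names should be confirmed against the configuration reference for your release.

          {
            "databases": {
              "db": {
                "cache": {
                  "channel_cache_max_length": 500,
                  "channel_cache_min_length": 50,
                  "channel_cache_expiry": 60
                }
              }
            }
          }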