When deploying your application for production use, you will need Sync Gateway and Couchbase Server.
This article covers different aspects of using Sync Gateway and Couchbase Server during production.
Where to Host
Whether hosting on-premises or in the cloud, you will want your Sync Gateway and Couchbase Server to sit close to each other for optimal performance between the two systems. In a production environment, they are expected to be deployed on separate machines, because Sync Gateway is typically internet-facing and sits in the "Application tier" whilst Couchbase Server is deployed in the "Database tier".
Sizing and Scaling
Your physical machine, container, or VM determines how many concurrent active users a single Sync Gateway can comfortably support.
Alternatively, instead of scaling vertically, you can scale horizontally by running Sync Gateway nodes as a cluster. (In general, you will want at least two Sync Gateway nodes to ensure high availability in case one should fail.) This means running a Sync Gateway instance on each of several machines and load-balancing them by directing each incoming HTTP request to any one of the nodes.
The Sync Gateway nodes in a cluster have a homogeneous configuration, with the exception of import and replicator nodes.
- Import node
An import node is a node designated to handle document import processing. You can have zero, one, or more of these nodes in a cluster. Use the database.import_docs property to configure an import node — see also: Sync with Couchbase Server
It is recommended that one Sync Gateway node in a cluster be configured to handle document import processing. For high availability, you can configure more than one import node, but configuring a large number of nodes in the cluster for import processing is strongly discouraged. The configuration of a Sync Gateway import node is slightly different from that of the "regular" or "non-import" nodes — see database.import_docs.
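As an illustrative sketch of the import-node setting (server address, bucket, and credentials are placeholders; verify property names against the configuration reference for your Sync Gateway version), the database config for the import node might look like:

```json
{
  "databases": {
    "db": {
      "server": "couchbase://cb-server.example.com",
      "bucket": "app-bucket",
      "username": "sync_gateway",
      "password": "password",
      "import_docs": true
    }
  }
}
```

Non-import nodes would use the same configuration with `import_docs` omitted or set to `false`.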
- Replicator node
If you are using inter-Sync Gateway replication, you will have a designated replicator node whose configuration differs from that of the other nodes — see Inter Sync Gateway Sync - Overview.
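A hedged sketch of what the replicator node's extra configuration might contain — the replication name, remote URL, and values here are placeholders, and the exact schema should be confirmed against the Inter Sync Gateway Sync reference for your version:

```json
{
  "databases": {
    "db": {
      "replications": {
        "replication-to-remote": {
          "remote": "https://remote-sgw.example.com:4984/db",
          "direction": "pushAndPull",
          "continuous": true
        }
      }
    }
  }
}
```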
Sync Gateway nodes are "shared-nothing," so they don’t need to coordinate any state or even know about each other. With multiple Sync Gateways, we recommend placing the cluster behind a load balancer to distribute client connection requests (see the Load Balancer guide).
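As one illustrative option (any load balancer with WebSocket support will do; hostnames here are placeholders), an NGINX reverse proxy can round-robin requests across the nodes while keeping WebSocket upgrades working for continuous replications:

```nginx
upstream sync_gateway {
    server sgw1.example.com:4984;
    server sgw2.example.com:4984;
}

server {
    listen 80;
    location / {
        proxy_pass          http://sync_gateway;
        proxy_http_version  1.1;
        proxy_set_header    Host $host;
        # Allow WebSocket upgrade for continuous replications
        proxy_set_header    Upgrade $http_upgrade;
        proxy_set_header    Connection "upgrade";
        # Long-lived, mostly idle connections: raise the read timeout
        proxy_read_timeout  360s;
    }
}
```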
Channel and Revision Caches
Enterprise Edition only. Tuning the channel and revision cache is an Enterprise Edition feature.
The Community Edition is configured with default values. Changing these values has no effect.
Channel cache tuning applies to cases where the number of channels can grow unbounded — see: database.cache.channel_cache. The size of the channel cache grows with the total number of channels, regardless of the number of active channels.
A properly sized Sync Gateway can scale to deployments with a small to moderate number of channels (on the order of hundreds to tens of thousands). However, because the channel cache can grow unbounded, Sync Gateway can hit vertical scaling limits as deployments grow in size, on the order of millions of channels.
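A hedged example of bounding the channel cache — the property names and values below are illustrative and should be checked against the database.cache.channel_cache reference for your release:

```json
{
  "databases": {
    "db": {
      "cache": {
        "channel_cache": {
          "max_number": 50000,
          "compact_high_watermark_pct": 80,
          "compact_low_watermark_pct": 60
        }
      }
    }
  }
}
```

With a cap in place, the cache is compacted back toward the low watermark once it fills past the high watermark, trading some cache misses for a bounded memory footprint.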
Revision cache tuning applies to cases with large document sizes — see: database.cache.rev_cache.
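An illustrative revision cache setting (values are placeholders; confirm the property names against the database.cache.rev_cache reference for your version):

```json
{
  "databases": {
    "db": {
      "cache": {
        "rev_cache": {
          "size": 5000,
          "shard_count": 16
        }
      }
    }
  }
}
```

With large documents, a large revision cache can consume significant RAM, so the size should be weighed against the memory available to the node.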
Keep in mind the following notes on performance:
Sync Gateway nodes don’t keep any local state, so they don’t require disk storage.
Sync Gateway nodes maintain a channel and revision metadata cache in RAM. Tuning the cache values in the configuration file can speed up the performance (see Channel and Revision Cache).
Sync Gateway is designed for multiprocessing. It uses lightweight threads and asynchronous I/O. Therefore, adding more CPU cores to a Sync Gateway node can speed it up.
As is typical with databases, writes are going to put a greater load on the system than reads. In particular, every write operation gets processed by the Sync Function and triggers notifications to other clients with read access, who then perform reads to get the new data.
Each client running a continuous replication holds an open socket to be notified of changes. These sockets remain idle most of the time (unless documents are modified at a very high rate), so the actual data traffic is low — the issue is just managing that many sockets. We recommend that developers optimize the number of connections they open to the sync tier (see the OS Level Tuning guide).
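On Linux, the per-process file-descriptor limit caps how many concurrent sockets a Sync Gateway node can hold. One common adjustment, sketched here with an assumed `sync_gateway` service user and illustrative values, is raising the nofile limit:

```
# /etc/security/limits.d/sync_gateway.conf (illustrative values)
sync_gateway  soft  nofile  250000
sync_gateway  hard  nofile  250000
```

The appropriate ceiling depends on the expected number of concurrently connected clients; see the OS Level Tuning guide for the full set of recommended kernel settings.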
In a Sync Gateway deployment with GSI/N1QL indexing, the resources allocated to the Couchbase Server index node must be sufficient to support Sync Gateway operations.
Because Sync Gateway is optimized to use RAM, performance can be gained (or at minimum not lost) by changing the Linux swappiness value to 0 (see the Swap Space and Kernel Swappiness guide).
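A minimal sketch of persisting that setting via sysctl (the drop-in file name is arbitrary):

```
# /etc/sysctl.d/99-sync-gateway.conf
vm.swappiness = 0
```

Apply it without rebooting with `sysctl --system` (or `sysctl -w vm.swappiness=0` for the running kernel only), and verify with `cat /proc/sys/vm/swappiness`.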