Removal

      Node removal allows a node to be taken out of a cluster in a highly controlled fashion, using rebalance to redistribute data, indexes, event processing, and query processing among available nodes.

      Understanding Removal

      Removal provides the most highly controlled means of taking a node out of a cluster. Any node, regardless of its service configuration, can be removed. However, removal should be used only when all nodes in the cluster are responsive, and the nodes intended to remain in the cluster have the capacity to support the redistributed data and workload.

      Removal essentially means using rebalance to redistribute data across a subset of the pre-existing cluster nodes. It can be performed with the UI, the CLI, or the REST API. When the CLI or REST API is used, a single command initiates a rebalance, specifying which nodes are to be excluded. When the UI is used, the nodes to be removed are first identified, and rebalance is then initiated. When the rebalance is complete, the cluster map is updated accordingly and distributed to clients. Throughout the process, the cluster continues to service requests for data.
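
      For example, when the CLI is used, a single couchbase-cli rebalance invocation both specifies the node to be removed and starts the rebalance. The following sketch uses placeholder host names and credentials; substitute values appropriate to the deployment:

          # Remove cb-node-4 from the cluster. Its data is redistributed
          # across the remaining nodes before the node is ejected.
          couchbase-cli rebalance -c cb-node-1.example.com:8091 \
            --username Administrator --password password \
            --server-remove cb-node-4.example.com:8091

      The equivalent REST call posts the full list of known nodes and the nodes to be ejected to the /controller/rebalance endpoint (the otpNode names shown, of the form ns_1@<hostname>, are likewise illustrative):

          # Initiate a rebalance that ejects cb-node-4.
          curl -u Administrator:password -X POST \
            http://cb-node-1.example.com:8091/controller/rebalance \
            -d 'knownNodes=ns_1@cb-node-1.example.com,ns_1@cb-node-2.example.com,ns_1@cb-node-3.example.com,ns_1@cb-node-4.example.com' \
            -d 'ejectedNodes=ns_1@cb-node-4.example.com'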

      Note that when a node is removed from a cluster, its configuration is deleted. If the removed node is subsequently re-added to the cluster, it is treated as a new node, and its configuration must be defined anew.
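
      Re-adding a previously removed node is therefore done with the same command as adding any new node. A sketch, again with placeholder values:

          # Re-add the removed node as a new node, specifying its services
          # afresh; the node's previous configuration is not restored.
          couchbase-cli server-add -c cb-node-1.example.com:8091 \
            --username Administrator --password password \
            --server-add cb-node-4.example.com:8091 \
            --server-add-username Administrator --server-add-password password \
            --services data

      A rebalance must then be performed, for the re-added node to take an active share of the cluster's data.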

      For information on how to perform node-removal with the UI, the CLI, and the REST API, see Remove a Node and Rebalance.

      Removal and Cluster Resources

      Whenever a node is removed from a cluster, the resources of the cluster are necessarily diminished. If the removed node hosted the Data Service, the reduced cluster has less memory, storage, and processing power with which to maintain data; and it may also have too few nodes to support the previously established number of bucket replicas. The actions taken by the Cluster Manager in response to the removal of a Data Service node are summarized below.

      Removal Without Replication-Constraint

      A bucket can be configured with replicas. The maximum number of replicas permitted is 3. The number of Data Service nodes required to support replication, given a configuration of n replicas, is n + 1. If node-removal does not reduce the number of Data Service nodes below n + 1, the number of replicas for the bucket is maintained, following rebalance. Correspondingly, the volume of data on each of the surviving Data Service nodes is increased.
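
      The surviving replica count can thus be expressed as the lesser of the configured replica count and the number of remaining Data Service nodes minus one. The following minimal shell sketch (illustrative only, not part of any Couchbase tooling) captures the rule:

          # Replicas kept after rebalance = min(configured, nodes - 1)
          surviving_replicas() {
            local configured=$1 nodes=$2
            local max=$(( nodes - 1 ))
            echo $(( configured < max ? configured : max ))
          }

          surviving_replicas 2 4   # 2: four nodes satisfy n + 1 for n = 2
          surviving_replicas 2 3   # 2: three nodes still satisfy n + 1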

      This is illustrated by the following two tables, which respectively represent a cluster of Data Service nodes before and after the removal of one node.

      Table 1 shows the number of items, on each of four individual nodes and for the entire cluster, for a single bucket with two replicas. (Note that some numbers are approximate: these are shown rounded to the nearest hundred.)

      Table 1. Four Data Service Nodes, One Bucket with 31,591 Items, Two Replicas

      Host      Active Items    Replica Items
      Node 1    7,932           15,800
      Node 2    7,895           15,800
      Node 3    7,876           15,700
      Node 4    7,888           15,700
      Total     31,591          63,000

      As Table 1 shows, each of the four nodes takes a roughly equal share of the bucket-items kept in active vBuckets. It also takes a roughly equal share of the replica bucket-items, kept in replica vBuckets. Since the bucket has two replicas, the ratio of active to replica items, both on each node and in the total for the cluster, is approximately 1:2.

      Table 2 shows the state of the cluster following the removal of Node 4 and the subsequent rebalance.

      Table 2. Three Surviving Data Service Nodes, One Bucket with 31,591 Items, Two Replicas

      Host      Active Items    Replica Items
      Node 1    10,497          21,000
      Node 2    10,500          21,000
      Node 3    10,400          21,000
      Node 4    NA              NA
      Total     31,591          63,000

      As Table 2 shows, following removal and rebalance, all data is hosted on the three surviving Data Service nodes. The ratio of active to replica items remains 1:2 throughout: this is because the number of Data Service nodes has been reduced from n + 2 to n + 1, and is therefore still sufficient to maintain the specified number of replicas. On each individual node, however, the numbers of active and replica items are now correspondingly higher.

      Removal With Replication-Constraint

      If node-removal reduces the number of Data Service nodes below n + 1, the number of replicas for the bucket is reduced to the maximum possible, following rebalance. This is shown by the following table, which represents the cluster of Data Service nodes shown above in Table 2, after the removal of a further node and subsequent rebalance.

      Table 3. Two Surviving Data Service Nodes, One Bucket with 31,591 Items, One Surviving Replica

      Host      Active Items    Replica Items
      Node 1    15,897          15,700
      Node 2    15,700          15,800
      Node 3    NA              NA
      Node 4    NA              NA
      Total     31,591          31,500

      As this shows, following removal and rebalance, all data is hosted on the two surviving Data Service nodes. The approximate ratio, on each node, of active to replica items is now 1:1, indicating that a single replica has been retained; this being the maximum number that the two remaining Data Service nodes can support.

      Note that since multiple buckets may be configured on the cluster, each potentially with a different replication level, removal and rebalance may reduce the replica count for some buckets while maintaining it for others, as the examples below illustrate.
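
      In terms of the sketch given earlier, with two Data Service nodes remaining:

          surviving_replicas 2 2   # 1: a bucket configured with two replicas keeps only one
          surviving_replicas 1 2   # 1: a bucket configured with one replica keeps it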

      For further examples of rebalance, in the context of failover, see Failover.

      Removal versus Graceful Failover

      As an alternative to removal, a responsive Data Service node can be taken out of a cluster by means of Graceful Failover. This may be faster than removal, but it does not maintain previously established availability levels. An account of the advantages and disadvantages of each approach is provided in Graceful Failover.
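
      For comparison, graceful failover is likewise initiated with a single CLI command. The sketch below again uses placeholder values; consult the CLI reference for the options supported by your server version:

          # Gracefully fail over cb-node-4: its active vBuckets are handed
          # off to replicas on the surviving nodes before it stops serving data.
          couchbase-cli failover -c cb-node-1.example.com:8091 \
            --username Administrator --password password \
            --server-failover cb-node-4.example.com:8091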