Configuration Settings for Multi-Server


Deprecated

With Storage 10.0, Swarm is single-process. This information applies to Storage 9.x and earlier.

Use the following configuration settings in the node or cluster configuration file to implement the Swarm multi-server feature:

Chassis Processes Setting

The chassis.processes setting specifies the number of independent Swarm server processes to start in a physical chassis.

For best results, use n-1 processes for a chassis with n CPU cores; too many Swarm server processes impair performance. The setting value must be an integer greater than 1.

Add the following entry to the node or cluster configuration file to implement two server processes within a single physical chassis:

chassis.processes = 2

Note

There is memory overhead in running multiple Swarm processes in a single chassis. A single Swarm process can index more objects than multiple processes sharing the same amount of RAM.

Verify that the number of IP addresses specified by the network.ipAddress setting matches the number of processes specified by the chassis.processes setting. For example, if chassis.processes = 3, three IP addresses must be specified.
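For example, a three-process chassis might be configured as follows (the addresses shown are illustrative):

```
chassis.processes = 3
network.ipAddress = 192.10.11.200 192.10.11.201 192.10.11.202
```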

Important

The disk.volumes setting must be left blank or set to all when chassis.processes is set.

Network Setup Settings

When implementing a multi-server Swarm storage cluster, assign each node a static IP address; DHCP cannot be used in multi-server mode. When enabling multi-server mode, the network.ipAddress setting syntax is extended by appending the IP address for each process as a space-separated list.

The number of IP addresses must equal the number of server processes specified in the chassis.processes setting. The network.netmask, default network.gateway, and all other network settings are shared by all processes, so they need to be specified only once, as in a single-process implementation.

Use the following node or cluster configuration file entries for a chassis with two processes:

chassis.processes = 2
network.ipAddress = 192.10.11.200 192.10.11.201
network.netmask = 255.255.0.0
network.gateway = 192.10.1.1

Using the disk.volumes Setting

When configuring a multi-server chassis, the disk.volumes setting in the node or cluster configuration file specifies the volume or volumes Swarm can use for cluster storage. The recommended setting (disk.volumes = all) causes Swarm to use all volumes larger than the configured disk.minGB (64 GB by default). If Swarm boots from a USB flash drive, that drive is automatically excluded from the volume list.

The disk.volumes = all setting causes all volumes in a chassis to be distributed evenly among the Swarm processes defined by the chassis.processes setting. This distribution occurs automatically at boot time and adapts to any changes in the number of drives or processes since the previous boot, allowing Swarm to redistribute volumes among processes as needed on each reboot.
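The even distribution described above can be sketched as a simple round-robin assignment. This is a minimal illustration of the concept, not Swarm's actual algorithm; the volume names and process indexes are hypothetical:

```python
def distribute_volumes(volumes, num_processes):
    """Assign volumes to server processes round-robin (illustrative only)."""
    assignment = {p: [] for p in range(num_processes)}
    for i, vol in enumerate(volumes):
        # Volume i goes to process i modulo the process count,
        # so counts differ by at most one across processes.
        assignment[i % num_processes].append(vol)
    return assignment

# Example: five drives split across two processes
print(distribute_volumes(["sda", "sdb", "sdc", "sdd", "sde"], 2))
```

Because the assignment depends only on the current volume and process counts, recomputing it at each boot naturally adapts to added or removed drives.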

Using the node.subcluster Setting

When using multi-server mode, all nodes in a chassis are assigned a subcluster name, either by the administrator or automatically at boot time. Optionally set the subcluster name using the node.subcluster setting.

If the node.subcluster setting is blank or not set, Swarm assigns an automatic value in multi-server mode: the subcluster name is set to the first IP address for the chassis. This automatic assignment creates a different subcluster for each multi-server chassis.
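For example, with node.subcluster left unset, a chassis configured as follows would automatically receive the subcluster name 192.10.11.200 (the addresses shown are illustrative):

```
chassis.processes = 2
network.ipAddress = 192.10.11.200 192.10.11.201
```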

Set node.subcluster to the name of the subcluster, a value of up to 16 characters. Special characters such as quotation marks and dashes cannot be used. For example, enter the following in the node or cluster configuration file to name the chassis ServerXYZ:

node.subcluster = ServerXYZ

When assigning subclusters, more than one multi-server chassis can be included in the same subcluster, provided more than one subcluster is used for the whole cluster.

Subclusters can also be used for other purposes, such as data protection. For example, if data centers reside in separate wings of a building, create subclusters to copy content to both wings, providing high availability in case of a partial building loss from events such as fire, flooding, or air conditioning problems.

See Local Area Replication Using Subclusters.

© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.