Adding Nodes to an ES Cluster

VM Users When Cloning ES Servers

Before starting ES on the new (cloned) node, delete all data under the configured data location on the cloned node (e.g., /var/lib/elasticsearch). If the data is not cleared before ES is started on the cloned node, ES generates an error stating that a conflicting node store cannot be used.
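
As a minimal cleanup sketch, assuming the default data path shown above and a systemd-managed service:

      systemctl stop elasticsearch
      rm -rf /var/lib/elasticsearch/*    # clear the cloned node store; adjust if path.data is customized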

The symptom of this condition is that the ES service shows as running per systemd, and the network table (netstat) shows ES listening on ports 9200 and 9300, but any connection to port 9200 on the cloned ES node is refused.
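
To confirm this condition, assuming the ss and curl utilities are available on the cloned node:

      ss -tlnp | grep -E '9200|9300'    # ES appears to be listening on both ports
      curl http://localhost:9200/       # yet fails with "connection refused" on an affected clone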

Complete these steps to add a new node to a running Elasticsearch cluster:

  1. Install the new ES server.

    1. Verify the new server meets the prerequisites in https://perifery.atlassian.net/wiki/spaces/public/pages/2443809661.

    2. From the Swarm bundle download, get the latest Elasticsearch RPM and Swarm Search RPM, which installs plugins and support utilities. 

      elasticsearch-VERSION.rpm
      caringo-elasticsearch-search-VERSION.noarch.rpm
    3. Install the Caringo RPM public key included with the distribution bundle by running the following command: 

      rpm --import RPM-GPG-KEY
    4. Install the RPMs. 

      yum install elasticsearch-VERSION.rpm
      yum install caringo-elasticsearch-search-VERSION.noarch.rpm
  2. Configure the ES server (https://perifery.atlassian.net/wiki/spaces/public/pages/2443809957) by running the installation script:

      /usr/share/caringo-elasticsearch-search/bin/configure_elasticsearch_with_swarm_search.py --no-bootstrap

    1. Run the installation as if this were the first of x ES servers, where x is the total number of ES servers in the ES cluster.

    2. Choose No when prompted to start the ES services.

    3. The script prompts for information about all other ES nodes and creates a configuration file for each. Save these configuration files; they are useful for any future redeployment.

  3. In the Swarm UI, pause the search feed (https://perifery.atlassian.net/wiki/spaces/public/pages/2443814403).

  4. Stop the ES services on each of the existing nodes.
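
    For example, assuming systemd manages the ES service (as referenced in the note above), run this on each existing node:

      systemctl stop elasticsearch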

  5. SSH into each existing node and edit /etc/elasticsearch/elasticsearch.yml (a combined example follows these steps):

    1. Append the new ES server to the comma-separated list in discovery.seed_hosts.

      The equivalent ES 6.8.6 setting was named discovery.zen.ping.unicast.hosts and also required setting discovery.zen.minimum_master_nodes to (total number of nodes)/2 + 1, which is no longer necessary with ES 7.

    2. Set gateway.expected_nodes to the new total number of nodes in the ES cluster.

    3. Adjust gateway.recover_after_nodes as appropriate. This is the minimum number of ES nodes that must be running before the cluster goes into an operational status:

      • Set to 1 if there are 1 or 2 total nodes.

      • Set to 2 if there are 3 or 4 total nodes.

      • Set to (total nodes - 2) if there are 5 to 7 total nodes.

      • Set to (total nodes - 3) if there are 8 or more total nodes.
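
    As an illustrative sketch (the hostnames es1/es2/es3.example.com are hypothetical), the edited settings in /etc/elasticsearch/elasticsearch.yml for a cluster growing from two nodes to three might look like this:

      discovery.seed_hosts: es1.example.com,es2.example.com,es3.example.com
      gateway.expected_nodes: 3
      gateway.recover_after_nodes: 2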

  6. Start the ES services on all ES servers.
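
    Again assuming a systemd-managed service, run this on every ES node:

      systemctl start elasticsearch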

  7. Check the cluster status to verify all nodes are present and the status is green; for example:
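
    One way to check, assuming curl is available and ES is listening on its default port 9200:

      curl -s 'http://localhost:9200/_cluster/health?pretty'    # number_of_nodes should match the new total; status should be green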

  8. In the Swarm UI, resume the search feed.
