Elasticsearch requires configuration and settings file changes that must be made consistently across the Elasticsearch cluster.

  1. On each Elasticsearch node, run the provided configuration script (/usr/share/caringo-elasticsearch-search/bin/configure_elasticsearch_with_swarm_search.py), which automates the configuration changes described below.

  2. If none of the customizations below are needed, resume the installation and turn on the service: Installing Elasticsearch

  3. If any settings need to be customized, such as changing Elasticsearch's path.data (data directory), proceed as follows: edit the configuration file directly and update the log file locations accordingly.

Customizing Elasticsearch

...

Info

Caution

  • Errors in adding and completing these settings can prevent the Elasticsearch service from working properly.

  • If the path.data location is customized from the default, adjust all references to it below to reflect the new location.

Elasticsearch Config File

...

cluster.name: <ES cluster name>

Provide the cluster a unique name. Do not use periods in the name.

Info

Important

It must differ from the cluster.name of the 1.7.1 cluster, if one exists, to prevent conflicts.

node.name: <ES node name>

Setting node.name is optional. Elasticsearch supplies a node name if not set. Do not use periods in the name.

network.host: <ES host>

Assign a specific hostname or IP address, which requires clients to access the ES server using that address. If using a hostname, update /etc/hosts. Defaults to the special value _site_.
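If a hostname is used, the /etc/hosts entry on clients might look like the following sketch (the address and names are illustrative placeholders, not values from this document):

```
192.168.1.100   es1.example.com   es1
```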

Info

Metrics requirement

If network.host is set to a specific hostname or IP address, the Elasticsearch host for Metrics in /etc/caringo-elasticsearch/metrics/metrics.cfg must match it. If network.host is set to "_site_" in elasticsearch.yml, the host in metrics.cfg can be any valid IP address or hostname for the Elasticsearch server.

discovery.zen.ping.unicast.hosts: ["es2", "es3"]

Set to the list of node names/IPs of all ES servers in the cluster. Multicast is disabled by default.

discovery.zen.minimum_master_nodes: 3

Set to (number of master-eligible nodes / 2, rounded down) + 1

Prevents split-brain scenarios by requiring this minimum number of master-eligible ES nodes to be online before a new master can be elected.
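The formula can be sanity-checked with shell arithmetic. The cluster sizes below are hypothetical illustrations; note that a 4-node cluster (the example used elsewhere on this page) yields 3:

```shell
# minimum_master_nodes = (number of master-eligible nodes / 2, rounded down) + 1
# Shell integer division already rounds down.
for nodes in 3 4 5; do
  echo "$nodes master-eligible nodes -> minimum_master_nodes = $(( nodes / 2 + 1 ))"
done
```

With this setting, a minority partition can never elect its own master, because it can never reach the required count.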

gateway.expected_nodes: 4

Add and set to the number of nodes in the ES cluster. Recovery of local shards starts as soon as this number of nodes have joined the cluster. It falls back to the recover_after_nodes value after 5 minutes. This example is for a 4-node cluster.

gateway.recover_after_nodes: 2

Set to the minimum number of ES nodes that must be started before going into operational status. This example is for a 4-node cluster.

index.max_result_window: 50000

Add to support queries with very large result sets (it limits from and size in queries). Elasticsearch accepts values up to 2 billion, but values above 50,000 consume excessive resources on the ES server.

index.translog.sync_interval: 5s

For best performance, set how often the translog is fsynced to disk and committed, regardless of write operations.

index.translog.durability: async

For best performance, change to async so ES fsyncs and commits the translog in the background every sync_interval. In the event of hardware failure, all acknowledged writes since the last automatic commit are discarded.

bootstrap.mlockall: true

Set to lock the memory on startup to guarantee Elasticsearch never swaps (swapping leads to poor performance). Verify enough system memory resources are available for all processes running on the server.

The RPM installer makes these edits to /etc/security/limits.d/10-caringo-elasticsearch.conf to allow the elasticsearch user to lock memory (preventing swapping) and to increase the number of open file descriptors:

Code Block
languagebash
# Custom for Caringo Elasticsearch and CloudGateway
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

threadpool.bulk.queue_size: 1000

Add to increase the indexing bulk queue size to compensate for bursts of high indexing activity that can exceed Elasticsearch’s rate of indexing.

script.inline: true
script.indexed: true

(SwarmNFS users only) Add to support dynamic scripting.

http.cors.enabled: true
http.cors.allow-origin: "*"

Add to support metrics in the Swarm Storage UI.

path.data: <path to data directory>

By default, path.data goes to /var/lib/elasticsearch with the needed ownership. To move the data directory, choose a separate, dedicated partition of ample size, and assign ownership of the directory to the elasticsearch user:

Code Block
languagebash
chown -R elasticsearch:elasticsearch <path-to-data-directory>
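Taken together, the settings above might look like the following elasticsearch.yml. This is a sketch for a hypothetical 4-node cluster; the cluster name, node names, and addresses are illustrative placeholders, and path.data is shown at its default:

```shell
# Write a sample elasticsearch.yml for a hypothetical 4-node cluster.
# All names and addresses below are placeholders, not required values.
cat > /tmp/elasticsearch.yml.sample <<'EOF'
cluster.name: swarm-search
node.name: es1
network.host: 10.0.0.101
discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]
discovery.zen.minimum_master_nodes: 3
gateway.expected_nodes: 4
gateway.recover_after_nodes: 2
index.max_result_window: 50000
index.translog.sync_interval: 5s
index.translog.durability: async
bootstrap.mlockall: true
threadpool.bulk.queue_size: 1000
http.cors.enabled: true
http.cors.allow-origin: "*"
path.data: /var/lib/elasticsearch
EOF
```

Every node in the cluster must use the same cluster.name, discovery, and gateway values; only node.name and network.host differ per node.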

...