Setting Up the Network Services

This section describes how to set up the network services for a storage cluster.

Platform Server

Skip this section if using Platform Server: these network services are already set up.

Setting Up NTP for Time Synchronization

The Network Time Protocol (NTP) server provides time synchronization between the cluster nodes, which is critical for many Swarm components. For best results, configure multiple NTP servers in close proximity to a cluster. For example, use the NTP Pool Project's continental zones, which are pools of NTP servers.

One or more trusted NTP servers, such as dedicated hardware solutions on the internal network or publicly available NTP servers, are required in a storage cluster. This configuration ensures the internal clocks in all nodes are synchronized with each other.

Add trusted NTP servers to the cluster by listing their IP addresses or host names in the network.timeSource parameter in the node configuration files. The parameter value is a space-separated list of one or more NTP servers (either host names or IP addresses). For example, to add a second NTP server IP address, use the following syntax:

network.timeSource = 10.20.40.21 10.20.50.31

To specify an NTP server by host name, the node must be able to resolve host names using a DNS server. Use this syntax:

network.timeSource = ntp1.example.com ntp2.example.com
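Because a node refuses to boot when its configured NTP servers cannot be reached, it can be worth validating the timeSource list before deployment. The following is a minimal Python sketch, not part of Swarm itself: the parameter name comes from the examples above, and the check only verifies that each entry is a literal IP address or a resolvable host name, not that an NTP service actually responds.

```python
import ipaddress
import socket

def parse_time_sources(line):
    """Parse a 'network.timeSource = ...' configuration line into a server list."""
    key, sep, value = line.partition("=")
    if not sep or key.strip() != "network.timeSource":
        raise ValueError("not a network.timeSource setting")
    return value.split()

def is_resolvable_entry(server):
    """True if the entry is a literal IP address or a host name DNS can resolve."""
    try:
        ipaddress.ip_address(server)
        return True                      # literal IP: no DNS lookup needed
    except ValueError:
        pass
    try:
        socket.gethostbyname(server)     # requires a working DNS server
        return True
    except socket.gaierror:
        return False

servers = parse_time_sources("network.timeSource = 10.20.40.21 10.20.50.31")
print(servers)
# → ['10.20.40.21', '10.20.50.31']
```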


NTP 3.0

NTP 3.0 included a design limitation causing the time value to wrap in the year 2036. NTP cannot correct the time if the BIOS clock in a cluster node is set beyond this wrap point. Verify the BIOS clocks in all nodes are set to a year prior to 2036 before booting Swarm in a cluster. This issue was resolved in NTP 4.0.

The node does not boot if the configured NTP server(s) cannot be reached. See Configuring a Node without NTP if the cluster nodes cannot access an external or internal NTP server.

Setting Up DHCP for IP Address Administration

The Dynamic Host Configuration Protocol (DHCP) server provides IP addresses to the cluster nodes and other devices enabled as DHCP clients. While Swarm nodes are not required to have static IP addresses to discover and communicate with each other, administrators may find it easier to manage and monitor a cluster where each node receives a predetermined IP address.

To configure this using DHCP:

  1. Map the Ethernet media access control (MAC) address of each node to a static IP address.

  2. Configure the DHCP server to provide each node with each of the following:

    • network mask

    • default gateway

    • DNS server
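For ISC DHCP, a host reservation along these lines covers both steps; all addresses and the MAC address below are hypothetical, and the fragment is a sketch rather than a complete dhcpd.conf:

```
# /etc/dhcp/dhcpd.conf (ISC DHCP) -- illustrative fragment
subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;             # default gateway
  option domain-name-servers 192.168.1.2; # DNS server
  host swarm-node-01 {
    hardware ethernet 00:11:22:33:44:55;  # node's MAC address (hypothetical)
    fixed-address 192.168.1.101;          # predetermined IP address
  }
}
```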

Setting Up DNS for Name Resolution

The Domain Name Service (DNS) is used to resolve host names into IP addresses. While DNS is not required for Swarm nodes to communicate with each other, it can be very useful for client applications reaching the cluster. If using named objects, DNS is one method of enabling access to objects over the Internet.

Best Practice

Although client applications can initiate first contact with any node in the storage cluster – even choosing to access the same node every time – best practice is to distribute each client's first point of contact evenly across the nodes in the cluster.

For example:

  • Define multiple DNS entries ("A" or "CNAME" records) that specify the IP address for the same Swarm first contact node.

  • Use multiple IP addresses for a DNS entry to create a DNS round-robin that provides client request balancing.

See the DNS software documentation for how to use "A" records and "CNAME" (alias) records.

Swarm requires a DNS server to resolve any host names in its configuration file, for example an NTP server or log host specified by name (such as pool.ntp.org). The DNS server Swarm uses is set in the Swarm configuration file. In contrast, client applications must resolve Swarm domain names to find the storage cluster. These two requirements are often addressed by different DNS servers.
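Assuming the same node configuration file conventions as network.timeSource above, the DNS settings look like the following; the parameter names and addresses here are illustrative assumptions, so check them against the Swarm configuration reference:

```
network.dnsServers = 192.168.1.2
network.dnsDomain = example.com
```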

The following example shows the entries in the Internet Systems Consortium (ISC) BIND DNS software configuration file for three node IP addresses tied to one name.

swarm    0    IN    A    192.168.1.101
         0    IN    A    192.168.1.102
         0    IN    A    192.168.1.103

In this example, the Time To Live (TTL) value for each record in the round-robin group is very small (0-2 seconds). This configuration is necessary so clients that cache the resolution results flush them quickly. This allows the first contact node to be distributed and lets a client move quickly to another node if the first contact node is unavailable.

Preparing for Domains

To allow clients to access named objects over the Internet, enable incoming HTTP requests to resolve to the correct domain. (A cluster can contain many domains, each of which can contain many buckets, each of which can contain many named objects.) Cluster and domain names should both be Internet Assigned Numbers Authority (IANA) compatible host names, such as cluster.example.com.

For example, a client application can create an object with a name such as:

cluster.example.com/marketing/photos/ads/object-naming.3gp

In this example, cluster.example.com is the domain name, marketing is the name of a bucket, and photos/ads/object-naming.3gp is the name of an object. Set up the network so the host name in the HTTP request maps correctly to the object's domain name. The cluster name is not required.

To enable clients to access a named object:

  1. Set up host files to map domain names to IP address(es) of the first contact node.

    • For a Linux system, configure the /etc/hosts file.

    • For a Windows system, configure the %SystemRoot%\system32\drivers\etc\hosts file.
      Example of a configured hosts file: 
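The following is an illustrative fragment; the node IP address is hypothetical, matching the round-robin example above:

```
# /etc/hosts (Linux) or %SystemRoot%\system32\drivers\etc\hosts (Windows)
192.168.1.101   cluster.example.com
```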

  2. Define multiple DNS entries ("A" or "CNAME" records) that identify the IP address(es) of the first contact node in the storage cluster. This process creates a DNS round-robin that provides client request load balancing.

    • For help setting up DNS for Swarm, see Setting Up DNS for Name Resolution, above.

    • For information about setting up a DNS server, see the DNS software documentation.

Setting Up a Syslog Server for Critical Alerts

A syslog server must be set up to capture critical operational alerts from the nodes in a storage cluster. The server captures messages sent by the Swarm nodes on UDP port 514.

See the documentation on configuring an rsyslog server and on the log.host and log.level parameters used to send Swarm messages to a syslog server.
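On the receiving side, rsyslog must be told to accept messages on UDP port 514. A minimal fragment in rsyslog v8 syntax might look like the following; the destination log file path is a site choice, and in practice messages are usually filtered by the facility configured on the Swarm side:

```
# /etc/rsyslog.conf -- accept Swarm node messages on UDP 514
module(load="imudp")
input(type="imudp" port="514")
# route received messages to a dedicated file (path is a site choice)
*.* /var/log/swarm.log
```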

Setting Up SNMP for Monitoring

Swarm provides monitoring information and administrative controls using the Simple Network Management Protocol (SNMP). Using an SNMP console, an administrator can monitor a storage cluster from a central location.

Swarm uses an SNMP management information base (MIB) definition file to map SNMP object identifiers (OIDs) to logical names. The MIB can be located in one of two locations, depending on the configuration:

  • The aggregate MIB for the entire cluster is located at /usr/share/snmp/mibs if cluster nodes boot from a Platform Server.

  • The MIB is located in the root directory of the Swarm software distribution if cluster nodes do not boot from a Platform Server.


Setting Up Network Load Balancing

Although Swarm Storage Cluster nodes interact with client applications using the HTTP communication protocol, the nodes operate differently from traditional web servers, and placing storage nodes behind an HTTP load balancer is usually unnecessary. A properly configured load balancer can, however, provide value-added services such as SSL offload and centralized certificate management.

During normal operations, a storage node routinely redirects a client to another node within the cluster. When this occurs, the client must initiate another HTTP request to the redirected node. Any process that virtualizes the storage node IP addresses or attempts to control which node a client connects to generates communication errors.

Setting Up the Network Interfaces

Gigabit Ethernet or faster NICs provide the recommended 1000 Mbps data communications speed between storage cluster nodes. Swarm automatically uses multiple NICs to provide a redundant network connection.

Connect the NICs to one or more interconnected switches in the same subnet to implement this feature.

See Switching Hardware.

© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.