New Features

Additional Changes

These items are other changes and improvements, including those that come from testing and user feedback.

Upgrade Impacts

These items are changes to the product function that may require operational or development changes for integrated applications.

Impacts for 10.0

  • Upgrading Elasticsearch: You may continue to use Elasticsearch 2.3.3 with Storage 10.0 until you are able to move to 5.6 (see Migrating from Older Elasticsearch). Support for ES 2.3.3 ends in a future release.

  • Configuration Settings: Run the Storage Settings Checker to identify these and other configuration issues.

    • Changes for the new single-IP dense architecture:

      • network.ipAddress - multiple IP addresses now disallowed

      • chassis.processes - removed; multi-server configurations are no longer supported

      • ec.protectionLevel - new value "volume"

      • ec.subclusterLossTolerance - removed

    • Changes for security (see next section)

      • security.administrators, security.operators - removed 'snmp' user

      • snmp.rwCommunity, snmp.roCommunity - new settings for 'snmp' user

      • startup.certificates - new setting to hold any and all public keys

    • New settings:

      • disk.atimeEnabled

      • health.parallelWriteTimeout

      • search.pathDelimiter

  • Required SNMP Security Change: Remove the snmp key from the security.administrators setting, and update snmp.rwCommunity with its value. Nodes whose security.administrators setting contains only the snmp key do not boot. If you changed the default value of the snmp key in the security.operators setting, update snmp.roCommunity with that value and then remove the snmp key from security.operators. In the security.operators setting, 'snmp' is a reserved key and cannot be used as an authorized console operator name. (SWAR-8097) A configuration sketch follows this list.

  • EC Protection

    • Best practice: Use ec.protectionLevel=node, which distributes segments across the cluster's physical/virtual machines. Do not use ec.protectionLevel=subcluster unless you already have subclusters defined and are sure the specified EC encoding is supported. A new level, ec.protectionLevel=volume, allows EC writes to succeed if you have a small cluster with fewer than (k+p)/p nodes. (Swarm always seeks the highest protection possible for EC segments, regardless of the level you set.)

    • Optimize hardware for EC by verifying there are more than k+p subclusters/nodes (as set by ec.protectionLevel); for example, with policy.ecEncoding=5:2, you need at least 8 subclusters/nodes. When Swarm cannot distribute EC segments adequately for protection, EC writes can fail despite ample free space. (SWAR-7985) See the arithmetic sketch after this list.

    • Setting ec.protectionLevel=subcluster without creating subclusters (defining node.subcluster across sets of nodes) causes a critical error and lowers the protection level to 'node'. (SWAR-8175)

  • Small Clusters: Verify the following settings if you are using 10 or fewer Swarm nodes. Do not use fewer than 3 nodes in production.
    Important: If you need to change any of these settings, do so before upgrading to Swarm 10.

    • policy.replicas: The min and default values for the number of replicas kept in your cluster must not exceed the number of nodes. For example, a 3-node cluster may have only min=2 or min=3.

    • EC Encoding and Protection: For EC encoding, verify you have enough nodes to support the cluster's encoding (policy.ecEncoding). For EC writes to succeed with fewer than (k+p)/p nodes, use the new level, ec.protectionLevel=volume.

    • Best Practice: Keep at least one physical machine in your cluster beyond the minimum number needed. This allows one machine to be down for maintenance without violating the minimum.

  • Cluster in a Box: Swarm supports a "cluster in a box" configuration as long as that box is running a virtual machine host and Swarm instances are running in 3 or more VMs. Each VM boots separately and has its own IP address. Follow the recommendations for small clusters, substituting VMs for nodes. With two physical machines, use the "cluster in a box" configuration; with 3 or more, move to booting Swarm directly on the hardware.

  • Offline Node Status: Because Swarm 10's new architecture reduces the number of IP addresses in your storage cluster, you may see the old IPs and subclusters reporting as Offline nodes until they time out after 4 days (crier.forgetOfflineInterval); this is expected.
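
A minimal sketch of the SNMP settings change described under Required SNMP Security Change above. It assumes the affected settings are available as a plain dictionary of setting names to values; the function name, parameter, and sample passwords are illustrative only and are not part of Swarm.

    # Illustrative sketch of the SNMP settings migration (not a Swarm tool).
    # Assumes settings are held as a dict of setting name -> value, where
    # security.administrators and security.operators map user names to passwords.
    def migrate_snmp_settings(cfg, default_ro_value=None):
        admins = cfg.get("security.administrators", {})
        operators = cfg.get("security.operators", {})

        # snmp.rwCommunity takes the value the 'snmp' key had in
        # security.administrators; the 'snmp' key itself is removed.
        if "snmp" in admins:
            cfg["snmp.rwCommunity"] = admins.pop("snmp")

        # Copy the 'snmp' value into snmp.roCommunity only if it was changed from
        # the default; in all cases remove the reserved 'snmp' key from security.operators.
        if "snmp" in operators:
            value = operators.pop("snmp")
            if value != default_ro_value:
                cfg["snmp.roCommunity"] = value

        return cfg

    # Example with sample values only:
    settings = {
        "security.administrators": {"admin": "adminpassword", "snmp": "rwpassword"},
        "security.operators": {"operator": "oppassword", "snmp": "ropassword"},
    }
    print(migrate_snmp_settings(settings))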
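
As a companion to the EC Protection and Small Clusters items above, this short sketch only restates the arithmetic given there: the best-practice target of more than k+p nodes/subclusters, the (k+p)/p figure below which ec.protectionLevel=volume is needed for EC writes to succeed, and the rule that the policy.replicas min and default values must not exceed the node count. The function names are illustrative and are not Swarm settings or tools.

    # Illustrative arithmetic only; these functions are not Swarm tools.
    def ec_node_targets(k, p):
        """Node/subcluster counts implied by a k:p EC encoding."""
        best_practice = k + p + 1        # "more than k+p": 5:2 -> at least 8
        volume_threshold = (k + p) / p   # below this, use ec.protectionLevel=volume
        return best_practice, volume_threshold

    def replicas_ok(min_replicas, default_replicas, node_count):
        """policy.replicas min and default must not exceed the number of nodes."""
        return max(min_replicas, default_replicas) <= node_count

    # Example: policy.ecEncoding=5:2
    best, vol = ec_node_targets(5, 2)
    print(f"best practice: at least {best} nodes/subclusters")        # 8
    print(f"ec.protectionLevel=volume applies below {vol:g} nodes")   # 3.5

    # Example: a 3-node cluster may keep min=2 or min=3 replicas, but not 4
    print(replicas_ok(min_replicas=2, default_replicas=3, node_count=3))  # True
    print(replicas_ok(min_replicas=2, default_replicas=4, node_count=3))  # False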

Info

Multipath support is obsolete as of Swarm 10.

For Swarm 9 impacts, see Swarm Storage 9 Releases.

Watch Items and Known Issues

The following operational limitations and watch items exist in this release.

Upgrading from 9.x

Important

Do not begin the upgrade until you complete the following:

  1. Plan Upgrade Impacts: Review and plan for the 10.0 upgrade impacts (above) and the impacts for each of the releases since the version you are running. For Swarm 9 impacts, see Swarm Storage 9 Releases.

  2. Finish Volume Retires: Do not start any elective volume retirements during the upgrade. Wait until the upgrade is complete before initiating any retires.

  3. Run Checker Script: Swarm 10 includes a migration checker script to run before upgrading from Swarm 9; it reports configuration setting issues and deprecations to be addressed. (SWAR-8230) See Storage Settings Checker.

If you need to upgrade from Swarm 8.x or earlier, contact DataCore Support for guidance.

  1. Download the correct bundle for the site. Swarm distributions bundle together the core components needed for implementation and updates; the latest versions are available in the Downloads section on the DataCore Support Portal.
    There are two bundles available:

Note

Contact DataCore Support for new installs of Platform Server and for optional Swarm client components, such as SwarmFS Implementation, that have separate distributions.

  2. Download the comprehensive PDF of Swarm Documentation that matches your bundle distribution date, or use the online HTML version from the Documentation Archive.

  3. Select your type of upgrade. Swarm supports rolling upgrades (a single cluster running mixed versions during the upgrade process) and requires no data conversion unless noted for a release. Upgrades can be performed without scheduling an outage or bringing down the cluster: restart the nodes one at a time with the new version, and the cluster continues serving applications during the upgrade.

  4. Choose whether to upgrade Elasticsearch 2.3.3 at this time.

  5. Note these installation issues:

  6. Review the Application and Configuration Guidance.