...

With the Swarm 12.0 release, SwarmFS object uploads that are stalled “in progress” now time out, allowing the uploaded parts to be consolidated and cleaned up. (SWAR-7699)

The Upgrading and Known Issues sections below carry over from the prior release and still apply.

...

The scope of this release does not include unnamed objects, caching, folder locking/leasing, or client notification of namespace changes.

Upgrading

Best practice is to upgrade to Elasticsearch 6 and Gateway 7.0, the platform that supports the new listing service and removes SwarmFS's dependency on specific Elasticsearch versions. A critical error is logged if SwarmFS runs with a version of Gateway older than 6.4.

  1. Follow the guidance in SwarmFS Deployment for the specific configuration required across components.

  2. When migrating Elasticsearch, complete the SwarmFS section of Migrating from Older Elasticsearch.

Known Issues

  • If, instead of updating, you perform a yum remove of SwarmFS and also remove its artifacts ("rm -rf /etc/ganesha"), the configuration file (/etc/ganesha/ganesha.conf) is not recreated on reinstall, causing the SwarmFS-config script to fail. Workaround: Save ganesha.conf beforehand and restore it to that directory after reinstalling. (NFS-778)
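A minimal sketch of that workaround; the helper names and the backup location are illustrative, while the /etc/ganesha paths come from the issue text:

```shell
# NFS-778 workaround sketch: save ganesha.conf before removal and restore
# it after reinstalling, so the SwarmFS-config script finds it again.
backup_conf() {        # backup_conf <conf-file> <backup-file>
  cp "$1" "$2"
}
restore_conf() {       # restore_conf <conf-file> <backup-file>
  mkdir -p "$(dirname "$1")"   # /etc/ganesha is gone after the rm -rf
  cp "$2" "$1"
}
# Example:
#   backup_conf /etc/ganesha/ganesha.conf /root/ganesha.conf.bak
#   ... yum remove SwarmFS, rm -rf /etc/ganesha, reinstall ...
#   restore_conf /etc/ganesha/ganesha.conf /root/ganesha.conf.bak
```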

  • If application file handling fails to clean up after unlinked files, 'silly' files (of the form .nfsXXXX) may persist in directories, consuming space. Workaround: Add a cron job that periodically finds and removes such files. (NFS-764)
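A sketch of such a cron job; the mount point and age threshold are placeholders to adjust for your exports:

```shell
# NFS-764 workaround sketch: remove leftover NFS "silly rename" files
# (.nfsXXXX) older than one day under the given mount point.
cleanup_silly_files() {   # cleanup_silly_files <mount-point>
  find "$1" -name '.nfs*' -type f -mtime +1 -delete
}
# Example crontab entry running the cleanup nightly at 02:00
# (/mnt/swarmfs is only an example path):
# 0 2 * * * /usr/local/bin/cleanup-silly-files /mnt/swarmfs
```

Only files are matched (`-type f`), so directories that happen to start with `.nfs` are left alone.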

  • Do not use SwarmFS with a bucket that has versioning enabled. File writes can commit the object multiple times, resulting in an excessive number of versions. (NFS-753)

  • Externally written custom headers may not appear in :metadata reads. Workaround: To trigger Elasticsearch to pick up an external update, also set the X-Data-Modified-Time-Meta header to the current time (in seconds since epoch). (NFS-692)
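The header value is plain epoch seconds; a hedged sketch of constructing it in a shell write path (the curl target in the comment is a placeholder, not a confirmed endpoint):

```shell
# NFS-692 workaround sketch: alongside any externally written custom
# headers, send X-Data-Modified-Time-Meta set to the current epoch time
# so Elasticsearch re-indexes the object's metadata.
NOW=$(date +%s)                              # seconds since epoch
HDR="X-Data-Modified-Time-Meta: ${NOW}"
# Illustrative only; use your application's normal write path, e.g.:
#   curl -X PUT "http://gateway.example.com/bucket/object" -H "$HDR" ...
echo "$HDR"
```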

  • Exports defined with different domains but the same bucket name do not operate as unique exports. (NFS-649)

  • An invalid bucket name entered for an export in the UI fails silently in SwarmFS (the config reads, the export generates, the client mounts, and zero-byte writes and directory operations appear to succeed) but fails on requests to Swarm Storage. (NFS-613)

  • The SwarmFS configuration script does not work with config URLs that use HTTPS and contain auth credentials for accessing Swarm through Gateway. (NFS-406)

  • On startup, SwarmFS may generate erroneous but harmless WARN-level messages for configuration file parameters, such as: config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:17): Unknown parameter (Path) (NFS-289)

...