NFS server groups and exports can be created and managed via the Swarm Storage UI's NFS page.

Important

The storage cluster's default domain must be created before configuring SwarmFS. This domain has the same name as the value of the cluster.name setting. The domain can be created with the Content UI or an HTTP utility like curl (see Manually Creating and Renaming Domains).
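A hedged sketch of the curl approach follows; the hostname is a placeholder, and the exact SCSP syntax should be confirmed against Manually Creating and Renaming Domains. The command is echoed for review rather than executed.

```shell
# Sketch only: creating the default domain over HTTP with curl.
SWARM_HOST="swarm.example.com"       # placeholder: a Swarm node or Gateway
CLUSTER_NAME="cluster.example.com"   # must equal the cluster.name setting's value
# Build the command, then review it before running it for real.
CMD="curl -i -X POST --data-binary '' http://${SWARM_HOST}/?domain=${CLUSTER_NAME}"
echo "$CMD"    # when satisfied, run it with: eval "$CMD"
```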

Create separate groups (sets) of SwarmFS servers, configured in pools; this enables support for different clients and optimization for different roles. Some configuration settings can be set locally, overriding the global configuration settings.

Why have different server groups?

These are situations for which it may be ideal to keep groups separate:

...

Important

Restart NFS services after making any configuration changes. The NFS server does not support dynamic updates to the running configuration.

Adding Server Groups

Server Groups are created with the + Add button at top right.

...

Best Practice

Before creating a Server Group, verify that the default domain is specified and that the domain and bucket defined in the scope exist.


The resulting group is a container for exports sharing a common configuration:

...

Name

When adding a Server Group, supply a name; the name serves as a description. The unique identifier is the number (such as /2, above) at the end of the Configuration URL.

The new group appears at or near the end of the listing, ready to be configured with exports.

Configuration URL

Each NFS Server Group has a unique Configuration URL, which can be clicked to view the current export definitions. These are the auto-generated and auto-maintained JSON settings being stored by Swarm for the group.

The configuration is empty until one or more exports is added.

Note

The sptid parameter is the encrypted form of a Swarm node IP address, which Gateway uses for request routing. Remove this parameter when pasting the URL elsewhere, such as into Ganesha.
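Stripping the parameter can be done programmatically when copying the URL; the following sketch uses Python's standard library, and the URL shown is a made-up example:

```python
# Sketch: remove the sptid query parameter from a copied Configuration URL
# before pasting it into another tool. The URL below is invented.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_sptid(url: str) -> str:
    parts = urlsplit(url)
    # Keep every query parameter except sptid.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "sptid"]
    return urlunsplit(parts._replace(query=urlencode(query)))

url = "http://gateway.example.com/nfsconfig/2?sptid=AbC123"
print(strip_sptid(url))   # http://gateway.example.com/nfsconfig/2
```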

Important

Although group configurations may be shared across NFS servers, each server must be configured with only one group.

Adding Exports

Listing

...

Service

Each export is specific to one and only one Swarm bucket, but clients viewing the mounted directory are able to view, create, and use virtual directories within it via the prefix feature of Swarm named objects (myvirtualdirectory/myobjectname.jpg).
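The prefix mechanism can be sketched in a few lines: object names are flat within the bucket, and a "/" in a name reads as a directory level. The listing logic and names below are illustrative, not SwarmFS's actual implementation:

```python
# Sketch of virtual directories via named-object prefixes.
def first_level_entries(object_names, prefix=""):
    """List the immediate children (files and virtual dirs) under prefix."""
    entries = set()
    for name in object_names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        head, sep, _ = rest.partition("/")
        # A "/" after the first segment marks a virtual directory.
        entries.add(head + "/" if sep else head)
    return sorted(entries)

names = ["readme.txt",
         "myvirtualdirectory/myobjectname.jpg",
         "myvirtualdirectory/notes.txt"]
print(first_level_entries(names))                        # top-level view
print(first_level_entries(names, "myvirtualdirectory/")) # inside the virtual dir
```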

...

Name


Unique name for the export, to distinguish it from the others in Swarm UI listings.

Storage IP(s) or DNS Name(s)


The IP address(es) or DNS-resolvable hostname(s) for one or more Swarm Gateways.

Search Host(s)


(For backwards compatibility) Optional as of version 3.0. The IP addresses or DNS-resolvable hostnames for one or more Swarm Elasticsearch servers.

Note: Both Gateway and SwarmFS use the Primary (default) search feed. If a new feed is made Primary, these servers must be restarted.

Search Index


(For backwards compatibility) Optional as of version 3.0. The unique alias name of the Primary (default) search feed. Locate this value as the Alias field in the primary search feed's definition. 

Export Path


Case-sensitive. Unique pseudo filesystem path for the NFS export. 

Cannot be set to a single slash ("/"), which is reserved.
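A minimal sketch of mounting such an export from a Linux client; the hostname, export path, and mount point are invented examples, and the export path must match the Export Path exactly (it is case-sensitive). The command is echoed for review rather than executed.

```shell
# Sketch: mounting a SwarmFS export on an NFS client.
NFS_SERVER="swarmfs.example.com"     # hypothetical SwarmFS server
EXPORT_PATH="/swarmfs/archive"       # hypothetical Export Path (case-sensitive)
MOUNT_POINT="/mnt/archive"
CMD="mount -t nfs ${NFS_SERVER}:${EXPORT_PATH} ${MOUNT_POINT}"
echo "$CMD"    # review, then run as root with: eval "$CMD"
```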

Scope

Domain
Bucket

Specifies which domain and bucket the data written via the export is associated with.

Important: Verify the existence of the domain and bucket specified here.

Quick Setup

For the remaining setup sections, few changes are usually needed:

  • Cloud Security: Each export can have different security, to fit the usage.

  • Client Access: Keep the defaults unless access control needs to be customized.

  • Permissions: Change nobody to x-owner-meta.

  • Logging: Keep the defaults unless directed by Support.

  • Advanced Settings: Keep the defaults unless directed by Support.

Cloud Security

In a Gateway (Cloud) environment, pass-through authentication can be used: authentication to Gateway uses the same login and password the client provides to SwarmFS. Session tokens (with various expiration times) and single-user authentication are also available, by login credentials or by token.

...

Tip

Each SwarmFS export created to use the Content Gateway can have an entirely different security method, as needed by the use case.

Session Token

  • Token Admin Credentials by Login: User, Password, Expiration

  • Token Admin Credentials by Token: Token, Expiration

Single User

  • Authenticate by Login: User, Password

  • Authenticate by Token: Token

Pass-Through / None

  • N/A


Client Access

This optional section allows access control customization both globally (for this export) and for specific clients. 

Access Type

Defaults to full read/write access. These other access restrictions are available:

  • All operations (RW) - default

  • No access (None)

  • Read-only (RO)

  • No read/write (MDONLY) - allows listing and metadata updates without access to file contents

  • No read/write/modify (MDONLY_RO) - allows listing but no metadata updates and no access to file contents

Squash

Defaults to no squashing (allows all user IDs).

  • None - default

  • Root - squashes the remote superuser (root, uid=0) when using identity authentication (local user is the same as remote user)

  • All - squashes every remote user, including root.

Squash User ID (uid) Mapping

Squash Group ID (gid) Mapping

User ID and Group ID mappings can be set when the NFS server authenticates users from a different authentication source, or when all files should have a consistent user/group.

Typical situations:

  • All clients are configured to use local password/group files, but SwarmFS through the Content Gateway is configured to use LDAP.

  • All clients have local password/group files, but some users may not exist on all client systems or may differ on each client.

  • All clients have the same users and groups, but they are created in a different order.

  • All clients authenticate using individual logins/accounts, but it is desired that all files have the same consistent owner and group regardless of the user reading or writing the files.

  • A client mounts the NFS exports anonymously, but the files presented over the share should have a consistent UID and GID for all NFS clients.
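The squash modes above can be sketched as a small decision function. The anonymous uid/gid of 65534 (nobody on many Linux systems) is an illustrative assumption, standing in for the export's configured uid/gid mappings:

```python
# Sketch of squash behavior; not the SwarmFS implementation.
ANON_UID, ANON_GID = 65534, 65534   # illustrative "nobody" ids

def squash(mode, uid, gid):
    """Return the (uid, gid) a request is treated as, per squash mode."""
    if mode == "All":
        return ANON_UID, ANON_GID    # every remote user squashed, including root
    if mode == "Root" and uid == 0:
        return ANON_UID, ANON_GID    # only the remote superuser squashed
    return uid, gid                  # "None": all user ids pass through

print(squash("None", 0, 0))          # root passes through unchanged
print(squash("Root", 0, 0))          # root mapped to the anonymous ids
print(squash("Root", 1000, 1000))    # ordinary user unaffected by Root squash
print(squash("All", 1000, 1000))     # every user mapped to the anonymous ids
```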

Client(s)

As needed, customize the access for one or more specific clients.

Note: These override the settings specified above, if any.

Permissions

Files and directories in a SwarmFS system support standard Unix-style read/write/execute permissions based on the user ID (uid) and group ID (gid) asserted by the mounting NFS client. The numeric forms of uid and gid have equivalent human-readable ASCII forms, as given by the Linux 'id' command:
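For example, the id command for a hypothetical user shows both the numeric and ASCII forms:

```
$ id jsmith
uid=1001(jsmith) gid=1001(jsmith) groups=1001(jsmith),27(sudo)
```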

...

Using x-owner-meta

The export's selected interface and access method determine whether x-owner-meta is used. The defaults of x-owner-meta and 0755 or 0644 are valid only when Storage Interface is set to "Content Gateway" and the Cloud Security access method is set to "Session Token". With any other method (such as "Direct to Swarm", "Single User", or "Pass-Through / None"), the NFS client does not map x-owner-meta to a local UNIX/POSIX user.

Logging

Enable additional logging as directed by DataCore Support, but keep this logging disabled for normal production usage. (Swarm UI 2.3)

...

Performance

Performance logging for SwarmFS, which reduces the noise in the ganesha log file. When enabled, PERF warnings are logged.

Elasticsearch

Performance logging for Elasticsearch, for use while troubleshooting issues such as partial listings. When enabled, sends the Elasticsearch query results to the debug log file. 

Advanced Settings

Important

Use these recommended defaults for Advanced Settings unless otherwise advised by DataCore Support.

Transport Protocol

TCP

Supported transport protocol (TCP/UDP | TCP | UDP)

Storage Port

80

Required. Network port for traffic to Swarm Storage nodes

Search Port

9200

Required. Network port for traffic to Swarm Search nodes

Security

sys 

Remote Procedure Call (RPC) security type (sys | krb5 | krb5i | krb5p)

Maximum Storage Connections

100

Maximum number of open connections to Swarm Storage. (v2.0)

Retries

5

(positive integer) How many times SwarmFS retries unsuccessful requests to Swarm and Swarm Search before giving up.

Retries Timeout

90

(seconds) How long SwarmFS waits before timing out Swarm retries.

Request Timeout

90

(seconds) How long SwarmFS waits before timing out Swarm requests.

For best results, set this timeout to at least twice the value of the Storage setting scsp.keepAliveInterval.

Pool Timeout

300

(seconds) How long discovered Swarm storage nodes are remembered.

Write Timeout

90

(seconds) How long SwarmFS waits for a write to Swarm to complete before retrying.

Read Buffer Size

128000000

(bytes) Defaults to 128 MB, for general workloads. The amount of data to be read each time from Swarm. If the read size buffer is greater than the client request size, then the difference is cached by SwarmFS, and the next client read request is served directly from cache, if possible. Set to 0 to disable read-ahead buffering.

Improving performance: Set each export's Read Buffer Size to match the workload expected on that share.

  • Lower the read-ahead buffer size if most reads are small and non-sequential.

  • Increase the read-ahead buffer size if most reads are large and sequential.
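The read-ahead behavior described above can be sketched as a toy cache: SwarmFS reads a large chunk from Swarm, serves the client's smaller request, and keeps the remainder so the next sequential read needs no Swarm round trip. The class, sizes, and backend stub are illustrative, not the SwarmFS implementation:

```python
# Sketch of read-ahead buffering for sequential reads.
class ReadAheadCache:
    def __init__(self, backend, buffer_size):
        self.backend = backend        # callable: (offset, size) -> bytes
        self.buffer_size = buffer_size
        self.cache_start = None
        self.cache = b""
        self.backend_reads = 0        # count of round trips to "Swarm"

    def read(self, offset, size):
        end = offset + size
        if (self.cache_start is not None and
                self.cache_start <= offset and
                end <= self.cache_start + len(self.cache)):
            # Sequential read served entirely from the read-ahead cache.
            return self.cache[offset - self.cache_start:end - self.cache_start]
        # Cache miss: fetch a full read buffer starting at this offset.
        self.backend_reads += 1
        self.cache_start = offset
        self.cache = self.backend(offset, max(size, self.buffer_size))
        return self.cache[:size]

data = bytes(range(256)) * 1024            # 256 KiB of fake object data
def backend(offset, size):
    return data[offset:offset + size]

fs = ReadAheadCache(backend, buffer_size=65536)
for i in range(16):                        # sixteen 4 KiB sequential reads
    fs.read(i * 4096, 4096)
print(fs.backend_reads)                    # one 64 KiB buffer covered all 16
```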

Parallel Read Buffer Requests

4

(positive integer) Adjust to tune the performance of large object reads; the default of 4 reflects the optimal number of threads, per performance testing. (v2.3)

Maximum Part Size

64000000

(bytes) How large each part of erasure-coded (EC) objects may be. Increase (such as to 200 MB, or 200000000) to create smaller EC sets for large objects and so increase throughput for high volumes of large files. (v2.3)

Collector Sleep Time

1000

(milliseconds) Increase to direct SwarmFS to collect more data before pushing it to Swarm, minimizing object consolidation time at the expense of both RAM and read performance (SwarmFS slows clients when running out of cache). Increase this value if the implementation is sensitive to how quickly the Swarm health processor consolidates objects, which cannot be guaranteed. (v2.3)

Maximum Buffer Memory

2000000000

(bytes) Defaults to 2 GB. Maximum memory that can be allocated for the export's buffer pool. Once exceeded, client requests are temporarily blocked until total buffer usage falls back below this limit. (v2.0)

Buffer High Watermark

1500000000

(bytes) Once the allocated export buffers reach this watermark, SwarmFS starts to free buffers in an attempt to stay below the Maximum Buffer Memory limit. During this time, client requests may be delayed. (v2.0)

File Access Time Policy

"relatime"

Policy for when to update a file's access time stamp (atime). (v2.0)

  • “noatime”: Disables atime updates.

  • “relatime”: Updates atime if it is earlier than last modified time, so it updates once after each write.

  • “strictatime”: Updates atime on every read and close.
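The three policies can be summarized as a decision function; this is a sketch for clarity, not SwarmFS code:

```python
# Sketch: when each atime policy updates the access time stamp.
def should_update_atime(policy, atime, mtime, op):
    """op is 'read' or 'close'; times are simple numeric timestamps."""
    if policy == "noatime":
        return False                      # atime is never updated
    if policy == "strictatime":
        return op in ("read", "close")    # updated on every read and close
    if policy == "relatime":
        return atime < mtime              # updated once after each write
    raise ValueError(policy)

print(should_update_atime("relatime", atime=5, mtime=10, op="read"))   # True
print(should_update_atime("relatime", atime=10, mtime=10, op="read"))  # False
print(should_update_atime("noatime", atime=0, mtime=10, op="read"))    # False
```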

Elasticsearch Buffer Refresh Time

60

(seconds) How rapidly non-SwarmFS object updates are reflected in SwarmFS listings. Lower to reduce the wait for consistency, at the cost of increased load on Elasticsearch. (v2.3)
