Swarm configuration changes for CSN 8.1 and CSN 8.2

CSN 8.2+ requires CentOS/RHEL 6.8.

Because of configuration parameter changes in Swarm 8.1 and Swarm 8.2.1, CSN 8.1 supports only Swarm 8.1, and CSN 8.2 does not support any Swarm release earlier than 8.2.1. CSN 8.2 is expected to support Swarm 8.2.x and later.

As a result, we have pulled the Swarm for CSN build (the RPM) from the Downloads page. If we patch Swarm before we release CSN 8.2.x, the build will be added back; for now, CSN 8.2 supports only Swarm 8.2.1.

The reason is that in Swarm 8.1 we no longer support some of the default EC parameters that the CSN wrote into cluster.cfg on every CSN UI change/edit. Therefore, if you had CSN 8.1 with the Swarm 8.1 EC parameters in cluster.cfg and then decided to load Swarm 8.0 or earlier, the new EC parameters would cause an error at bootup.
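For illustration only, the failure mode looks like this: the CSN writes an EC default into cluster.cfg that an older Swarm release does not recognize. The parameter name below is hypothetical, since this note does not list the exact EC settings involved:

# Hypothetical EC default written by the CSN for Swarm 8.1;
# the parameter name is illustrative, not an actual Swarm setting.
ec.exampleProtectionLevel = 5:2

When Swarm 8.0 or earlier parses a cluster.cfg containing a parameter it does not know, the node fails with a configuration error during boot.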

In CSN 8.1, we did away with the UI box for configuring these replica parameters, both to avoid confusion and to start allowing future compatibility between CSN 8.2 and newer Swarm versions.

The same applies to CSN 8.2. In Swarm 8.2, we removed the maxreps and minreps parameters, which had been in the default cluster.cfg since the CSN was first developed; they now live in the "policy" section of the configuration. If you boot Swarm 8.2.1 with the old parameters, it will error at startup; if you boot Swarm 8.1 with the new policy replicas parameter, it will also error (see the sketch below).
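A minimal before/after sketch of the relevant cluster.cfg lines, using the parameter names above; the values shown are illustrative, and your CSN's defaults may differ:

# Old style (Swarm 8.1 and earlier; rejected by Swarm 8.2.1):
minreps = 2
maxreps = 16

# New style, using the policy setting (Swarm 8.2.1 and later; rejected by Swarm 8.1):
policy.replicas = min:2 max:16 default:2

Booting a node whose cluster.cfg uses the style that the loaded Swarm version does not understand is what produces the startup errors described above.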

Another thing to note: prior to Swarm 8.2, the reps parameters were not in the PSS (Persistent Settings Stream). Now they are stored in the settings stream in the form of the policy setting. In the PSS, they look like this:

policy.replicas=min:2 max:16 default:2 anchored

As a reminder, any configuration element in the PSS (except the SNMP password) overrides the corresponding element in cluster.cfg. Once a PSS exists (any configuration change in the Settings page creates one), a value set in cluster.cfg applies only until the node reads the PSS at boot: if the parameter also lives in the PSS, the PSS value overrides the cluster.cfg value from that point on.
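As a conceptual sketch of that precedence (a Python model for illustration, not Swarm source code; the SNMP key names are assumptions):

# Conceptual model of PSS-over-cluster.cfg precedence; not Swarm code.
def effective_settings(cluster_cfg, pss,
                       snmp_keys=("snmp.roCommunity", "snmp.rwCommunity")):
    """Resolve a node's effective settings once it has read the PSS."""
    merged = dict(cluster_cfg)        # boot begins with cluster.cfg values
    for key, value in pss.items():
        if key not in snmp_keys:      # SNMP passwords stay cluster.cfg-driven
            merged[key] = value       # every other PSS value overrides
    return merged

Before the PSS is read, only the cluster.cfg values apply; that window is what makes the log-level trick below work.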

Only two parameters are partial exceptions. The SNMP read/write password shows up in the PSS but is always written and changed in cluster.cfg; that parameter cannot be changed via SNMP, for hopefully obvious reasons.

The other partial exception is the log level. It is not really an exception, but sometimes you want debug-level logging during bootup, and you cannot change the PSS (via the Swarm Admin Console or SNMP) to control the log level a node uses during its boot sequence. If you want debug-level logging from a node from BEFORE it gets completely booted up and has read its PSS, you need to set the log level in cluster.cfg. The effect is that a booting node logs at debug level UNTIL it reads the PSS, at which point whatever log level is configured in the PSS is applied. This is useful for troubleshooting bootup problems.
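For example, to capture debug logging during the boot sequence, you could set the level in cluster.cfg (assuming Swarm's log.level parameter, where 10 corresponds to debug):

# cluster.cfg: debug logging until the node reads the PSS
log.level = 10

Once the node reads the PSS, the log level configured there (if any) replaces this value.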
