The Swarm probe build uses the same kernel and fsimage that Swarm storage nodes run on, but without the storage components.

It is a very useful tool for troubleshooting driver and other hardware issues because it includes SSH access.

The support team uses it for troubleshooting; the client is given a download link and deploys the image on their CSN or other PXE server.

Unlike a usual Swarm upgrade image, it is distributed as a zip containing both the fsimage and the kernel for the Swarm version it is built for.
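For confirmation, the contents can be listed without extracting the archive; expect to see a kernel and an fsimage entry (the exact filenames vary by version):

unzip -l castor-12.0.0-x86_64-probe.zip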

Swarm images load into memory, so the base OS is not editable once started: you cannot navigate to /etc/ and make lasting changes to a config file, and you cannot add packages to the probe build with tools like yum, apt, or even dpkg/rpm.

This article shows how to drop in a precompiled binary along with its associated library.

Setup

Install probe-build

This example uses the probe build for Swarm version 12, and the CSN has the IP 10.0.1.26.

scp castor-12.0.0-x86_64-probe.zip root@10.0.1.26:/root/

WinSCP, FTP, or another tool works too, but scp is the easiest method when running from a Linux/UNIX client.

There is no need to unzip the file after it is transferred; just place it in /root/:

[root@swarmservicenode ~]# ll
total 234776
-rw-r--r--  1 root root 118058980 Feb 22 10:06 caringo-castor-12.0.0-1-x86_64(1).rpm
-rw-r--r--. 1 root root        71 Aug 20  2018 caringo_csn_backup.disabled
-rw-r--r--  1 root root 122143502 Feb 22 10:10 castor-12.0.0-x86_64-probe.zip
drwxr-xr-x  4 root root      4096 Dec 10 05:22 dist
-rw-r--r--  1 root root    184320 Feb 22 10:58 iperf.tar
-rw-r--r--  1 root root       966 May 31  2019 metrics.cfg
drwxr-xr-x. 2 root root      4096 Dec 10 05:19 Platform-8.3.2

On this example CSN the support tools bundle is installed under /root/dist.

Use the csn-install-from-zip.sh script there to install the probe build.

[root@swarmservicenode ~]# cd dist/
You have new mail in /var/spool/mail/root
[root@swarmservicenode dist]# ls
add-bashrcforcustomers.sh               csn-check-backups.sh           indexerConfig171.sh                     platform-send-health-reports.sh
bashrcforcustomers                      csn-create-nodeconfigs.sh      indexerConfig233.sh                     platform-update-mibs.sh
CARINGO-CASTOR-MIB.txt                  csn-install-from-zip.sh        indexer-enumerator.sh                   proxy-set-fw-nat.sh
caringo-content-gateway-audit           csn-modify-saveset.sh          indexer-grab.sh                         README.TXT
caringo-content-gateway-server          csn-ntpd-watch.sh              legacy-tools                            REVISION.txt
CARINGO-MIB.txt                         csn-patch-sosreport702.sh      logging.yaml.syslog                     sendretrieve.sh
CentOS-Base-68.repo                     csn-read-pss.sh                logging.yml                             settings_checker.py
CentOS-Base-6.repo                      csn-rename-cluster.sh          logging.yml-2017_0823                   SimpleCAS.zip
CHANGELOG                               csn-send-health-reports.sh     logrotate-elasticsearch                 snmp-castor-tool.sh
check-siblings.sh                       csn_settings_checker           logrotate-elasticsearch-2017_0823       swarmctl
checkuuidsAllNodes.sh                   csn-update-mibs.sh             make-immutable-streams-mutable.sh       swarmrestart
cipperf.py                              csn-yum-68-upgrade.sh          mcast-tester.py                         techsupport-bundle-grab.sh
cns-dig                                 DD_DISKIO.sh                   ntpd-instantaneous-collect.sh           Tech-Support-Scripts-Bundle.pdf
collect_health_reports.sh               demoParallelWriteToSwarm.sh    parseJsonBuckets.py                     test-volumes-network.sh
copy-streams-to-new-cluster.sh          generateMutableTestStreams.sh  Pcurl.sh                                tmpwatch
createNODEScsv.sh                       generateTestData.sh            performance-profiler.sh                 updateBundle.sh
csn-add-sncfg-to-ip-assignments.sh      getretrieve.sh                 platform-delete-csn-backups.sh          updateTestData.sh
csn-add-to-ip-assignments-from-file.sh  gw-change-log-level.sh         platform-enumerate-vols.sh              uploaddir.sh
csn-allow-68.sh                         hwinfo-dmesg-grab.sh           platform-get-default-search-feed-ip.py  uploader.html
csn-assign-ips.sh                       igmp-querier.pl                platform-read-pss.sh
[root@swarmservicenode dist]# ./csn-install-from-zip.sh ../castor-12.0.0-x86_64-probe.zip

The script unzips the probe and places the kernel and fsimage in the correct location, in a folder whose name identifies the probe build.
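The net result on a CSN is a new probe folder under the netboot content directory, alongside the regular Swarm images (the exact path can vary by CSN version, so verify on your system):

ls /var/opt/caringo/netboot/content/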

Select the probe from your netboot configuration and then select Update.

Any node rebooted now loads that image.

Get iperf3

A build of iperf3 is needed; download it from:

https://iperf.fr/iperf-download.php

The version for this example is:

iPerf 3.1.3 - DEB package (8 jun 2016 - 8.6 KiB) + libiperf0 3.1.3 - DEB package (53.9 KiB)

Newer versions should work as well. Choose the “Ubuntu 64 bits / Debian 64 bits / Mint 64 bits (AMD64) by Raoul Gunnar Borenius and Roberto LuMiBreras (sha256)” packages, as these closely match the Debian release Swarm uses.
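The download page publishes sha256 checksums for each package; comparing them after downloading is a quick integrity check:

sha256sum iperf3_3.1.3-1_amd64.deb libiperf0_3.1.3-1_amd64.deb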

The deb packages are meant to be used with a package manager, but here we just want the precompiled binaries. To extract them, do the following.

Unpack the deb files

First transfer the files to an Ubuntu/Debian-based machine.

Make sure dpkg-deb is available. It ships with the dpkg package, which is preinstalled on Debian-based systems; if it is somehow missing:

sudo apt-get install dpkg

Then run dpkg-deb -xv for each file you have:

dpkg-deb -xv iperf3_3.1.3-1_amd64.deb .
dpkg-deb -xv libiperf0_3.1.3-1_amd64.deb .

You’ll see a directory structure like this:

tony@tony-NUC8i7HVK:~/iperf$ tree
.
├── iperf3_3.1.3-1_amd64.deb
├── iperf.tar
├── libiperf0_3.1.3-1_amd64.deb
└── usr
    ├── bin
    │   └── iperf3
    ├── lib
    │   └── x86_64-linux-gnu
    │       ├── libiperf.so.0 -> libiperf.so.0.0.0
    │       └── libiperf.so.0.0.0
    └── share
        ├── doc
        │   ├── iperf3
        │   │   ├── changelog.Debian.gz -> ../libiperf0/changelog.Debian.gz
        │   │   ├── copyright
        │   │   └── README.md.gz -> ../libiperf0/README.md.gz
        │   └── libiperf0
        │       ├── changelog.Debian.gz
        │       ├── copyright
        │       └── README.md.gz
        ├── lintian
        │   └── overrides
        │       └── libiperf0
        └── man
            └── man1
                └── iperf3.1.gz

12 directories, 14 files
tony@tony-NUC8i7HVK:~/iperf$ 

What we need is the iperf3 binary and the library files.

To make the transfer easy, bundle them into a tar archive:

tar -cvf iperf.tar usr/*

Then transfer it to the CSN, which in this example is 10.0.1.26:

scp iperf.tar root@10.0.1.26:/root/

Once the bundle is on the CSN, SSH in and transfer it once more, this time to a storage node. In this example the username is root and the password is caringo.

scp iperf.tar root@172.29.3.0:/dev/shm/ 

Here we are transferring the iperf tarball to the storage node’s /dev/shm (“shared memory”) directory; for background, see:

https://www.cyberciti.biz/tips/what-is-devshm-and-its-practical-usage.html
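Because /dev/shm is RAM-backed, it is worth confirming there is room before copying larger files there (the small iperf bundle will easily fit):

df -h /dev/shm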

Running iperf3

Now that all of the pieces are in place, we can SSH to a storage node and run the tool:

ssh root@<storagenodeip>

[root@swarmservicenode ~]# ssh root@172.29.3.0
root@172.29.3.0's password: 
Linux bb089f85f3ed2a8cb7a054d984418b77 5.4.61 #1 SMP Fri Oct 30 20:39:52 UTC 2020 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Feb 22 16:13:48 2021 from 172.29.0.5
root@bb089f85f3ed2a8cb7a054d984418b77:~# 

cd to /dev/shm and untar the iperf bundle:

root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm# tar -xvf iperf.tar
usr/bin/
usr/bin/iperf3
usr/lib/
usr/lib/x86_64-linux-gnu/
usr/lib/x86_64-linux-gnu/libiperf.so.0
usr/lib/x86_64-linux-gnu/libiperf.so.0.0.0
usr/share/
usr/share/lintian/
usr/share/lintian/overrides/
usr/share/lintian/overrides/libiperf0
usr/share/man/
usr/share/man/man1/
usr/share/man/man1/iperf3.1.gz
usr/share/doc/
usr/share/doc/iperf3/
usr/share/doc/iperf3/changelog.Debian.gz
usr/share/doc/iperf3/copyright
usr/share/doc/iperf3/README.md.gz
usr/share/doc/libiperf0/
usr/share/doc/libiperf0/changelog.Debian.gz
usr/share/doc/libiperf0/copyright
usr/share/doc/libiperf0/README.md.gz
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm# 

Then we can go to usr/bin and try to run iperf3:

root@bb089f85f3ed2a8cb7a054d984418b77:/usr/bin# cd /dev/shm/usr/bin
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# ./iperf3 -s
./iperf3: error while loading shared libraries: libiperf.so.0: cannot open shared object file: No such file or directory
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# 
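If ldd is available on the probe image, it confirms exactly which shared objects are unresolved; libiperf.so.0 will show as “not found”:

ldd ./iperf3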

It is missing the libiperf library, but that is fine because we brought our own:

root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# export LD_LIBRARY_PATH=/dev/shm/usr/lib/x86_64-linux-gnu

This setting lasts only for the current shell session, but it tells the dynamic linker where to find the library iperf3 needs.
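Alternatively, for a single run, the variable can be set inline without exporting it:

LD_LIBRARY_PATH=/dev/shm/usr/lib/x86_64-linux-gnu ./iperf3 -s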

Now if we go back:

root@bb089f85f3ed2a8cb7a054d984418b77:~# cd /dev/shm/usr/bin
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# ./iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

We have our iperf3 server running on port 5201.
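Two server flags are handy for unattended testing: -D runs the server in the background as a daemon, and -1 accepts one client test and then exits.

./iperf3 -s -D
./iperf3 -s -1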

To test that it works, run iperf3 in client mode from another host. Here -f K reports throughput in KBytes/sec, -w 500K sets the TCP window size, and -P 6 runs six parallel streams.

[root@swarmservicenode dist]# iperf3 -c 172.29.3.0 -f K -w 500K -P 6
Connecting to host 172.29.3.0, port 5201
[  4] local 172.29.0.5 port 41988 connected to 172.29.3.0 port 5201
[  6] local 172.29.0.5 port 41990 connected to 172.29.3.0 port 5201
[  8] local 172.29.0.5 port 41992 connected to 172.29.3.0 port 5201
[ 10] local 172.29.0.5 port 41994 connected to 172.29.3.0 port 5201
[ 12] local 172.29.0.5 port 41996 connected to 172.29.3.0 port 5201
[ 14] local 172.29.0.5 port 41998 connected to 172.29.3.0 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   243 MBytes  248140 KBytes/sec    6    157 KBytes       
[  6]   0.00-1.00   sec   249 MBytes  254353 KBytes/sec    0    322 KBytes       
[  8]   0.00-1.00   sec   229 MBytes  233829 KBytes/sec    2    160 KBytes       
[ 10]   0.00-1.00   sec   219 MBytes  224163 KBytes/sec    0    321 KBytes       
[ 12]   0.00-1.00   sec   220 MBytes  225166 KBytes/sec    2    148 KBytes       
[ 14]   0.00-1.00   sec  95.0 MBytes  97176 KBytes/sec   39   46.7 KBytes       
[SUM]   0.00-1.00   sec  1.22 GBytes  1282865 KBytes/sec   49             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   1.00-2.00   sec   270 MBytes  276357 KBytes/sec    0    171 KBytes       
[  6]   1.00-2.00   sec   269 MBytes  275670 KBytes/sec    0    322 KBytes       
[  8]   1.00-2.00   sec   259 MBytes  265622 KBytes/sec    0    180 KBytes       
[ 10]   1.00-2.00   sec   246 MBytes  251707 KBytes/sec    0    322 KBytes       
[ 12]   1.00-2.00   sec   239 MBytes  244482 KBytes/sec    0    170 KBytes       
[ 14]   1.00-2.00   sec  99.2 MBytes  101694 KBytes/sec   23   69.3 KBytes       
[SUM]   1.00-2.00   sec  1.35 GBytes  1415501 KBytes/sec   23             
- - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  2.26 GBytes  236932 KBytes/sec   12             sender
[  4]   0.00-10.00  sec  2.26 GBytes  236926 KBytes/sec                  receiver
[  6]   0.00-10.00  sec  2.88 GBytes  301715 KBytes/sec    0             sender
[  6]   0.00-10.00  sec  2.88 GBytes  301715 KBytes/sec                  receiver
[  8]   0.00-10.00  sec  2.16 GBytes  226466 KBytes/sec    6             sender
[  8]   0.00-10.00  sec  2.16 GBytes  226454 KBytes/sec                  receiver
[ 10]   0.00-10.00  sec  2.58 GBytes  270994 KBytes/sec    0             sender
[ 10]   0.00-10.00  sec  2.58 GBytes  270994 KBytes/sec                  receiver
[ 12]   0.00-10.00  sec  2.08 GBytes  218403 KBytes/sec    6             sender
[ 12]   0.00-10.00  sec  2.08 GBytes  218390 KBytes/sec                  receiver
[ 14]   0.00-10.00  sec  1.54 GBytes  161421 KBytes/sec   76             sender
[ 14]   0.00-10.00  sec  1.54 GBytes  161421 KBytes/sec                  receiver
[SUM]   0.00-10.00  sec  13.5 GBytes  1415930 KBytes/sec  100             sender
[SUM]   0.00-10.00  sec  13.5 GBytes  1415899 KBytes/sec                  receiver

This is what it looks like on the storage node side:

root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# ./iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.29.0.5, port 41986
[  5] local 172.29.3.0 port 5201 connected to 172.29.0.5 port 41988
[  7] local 172.29.3.0 port 5201 connected to 172.29.0.5 port 41990
[  9] local 172.29.3.0 port 5201 connected to 172.29.0.5 port 41992
[ 11] local 172.29.3.0 port 5201 connected to 172.29.0.5 port 41994
[ 13] local 172.29.3.0 port 5201 connected to 172.29.0.5 port 41996
[ 15] local 172.29.3.0 port 5201 connected to 172.29.0.5 port 41998
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   239 MBytes  2.01 Gbits/sec                  
[  7]   0.00-1.00   sec   246 MBytes  2.06 Gbits/sec                  
[  9]   0.00-1.00   sec   226 MBytes  1.89 Gbits/sec                  
[ 11]   0.00-1.00   sec   217 MBytes  1.82 Gbits/sec                  
[ 13]   0.00-1.00   sec   217 MBytes  1.82 Gbits/sec                  
[ 15]   0.00-1.00   sec  93.4 MBytes   783 Mbits/sec                  
[SUM]   0.00-1.00   sec  1.21 GBytes  10.4 Gbits/sec              

You can then use iperf3 to validate that the network is giving the expected throughput in both directions.
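To measure the reverse direction without swapping roles, add the -R flag on the client so the server sends and the client receives:

iperf3 -c 172.29.3.0 -f K -w 500K -P 6 -R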
