
The Swarm probe build is the same kernel and fsimage we use to run Swarm storage nodes, but without the nifty storage part.

It is a very useful tool when troubleshooting driver or other hardware issues, as it includes SSH access.

The support team uses this tool to troubleshoot, and a client is typically provided a download link to use it on their CSN or other PXE server.

Unlike a usual Swarm upgrade image, it exists in zip format; the zip contains both the fsimage and kernel for the version of Swarm it is built for.

Swarm images load into memory, so the base OS is not editable once started. Navigating to ‘/etc/’ and making lasting changes to a config file is not possible. It is also not possible to add packages to the probe build using tools like yum or apt, or even dpkg/rpm.

This article shows how to drop in a precompiled binary with its associated library.

Setup

Install

...

probe-build

This example probe build is for version 12, and the CSN has the IP 10.0.1.26.

Code Block
scp castor-12.0.0-x86_64-probe.zip root@10.0.1.26:/root/

Transfer using WinSCP, FTP, or another tool altogether, but scp is by far the easiest method when running on a Linux/UNIX client.

There is no need to unzip the file when it is transferred over; just move it to /root/.

Code Block
[root@swarmservicenode ~]# ll
total 234776
-rw-r--r--  1 root root 118058980 Feb 22 10:06 caringo-castor-12.0.0-1-x86_64(1).rpm
-rw-r--r--. 1 root root        71 Aug 20  2018 caringo_csn_backup.disabled
-rw-r--r--  1 root root 122143502 Feb 22 10:10 castor-12.0.0-x86_64-probe.zip
drwxr-xr-x  4 root root      4096 Dec 10 05:22 dist
-rw-r--r--  1 root root    184320 Feb 22 10:58 iperf.tar
-rw-r--r--  1 root root       966 May 31  2019 metrics.cfg
drwxr-xr-x. 2 root root      4096 Dec 10 05:19 Platform-8.3.2

...

On this example CSN the support tools are installed.

You can use Use a handy script there to add the probe build.

Code Block
[root@swarmservicenode ~]# cd dist/
[root@swarmservicenode dist]# ls
add-bashrcforcustomers.sh               csn-check-backups.sh           indexerConfig171.sh                     platform-send-health-reports.sh
bashrcforcustomers                      csn-create-nodeconfigs.sh      indexerConfig233.sh                     platform-update-mibs.sh
CARINGO-CASTOR-MIB.txt                  csn-install-from-zip.sh        indexer-enumerator.sh                   proxy-set-fw-nat.sh
caringo-content-gateway-audit           csn-modify-saveset.sh          indexer-grab.sh                         README.TXT
caringo-content-gateway-server          csn-ntpd-watch.sh              legacy-tools                            REVISION.txt
CARINGO-MIB.txt                         csn-patch-sosreport702.sh      logging.yaml.syslog                     sendretrieve.sh
CentOS-Base-68.repo                     csn-read-pss.sh                logging.yml                             settings_checker.py
CentOS-Base-6.repo                      csn-rename-cluster.sh          logging.yml-2017_0823                   SimpleCAS.zip
CHANGELOG                               csn-send-health-reports.sh     logrotate-elasticsearch                 snmp-castor-tool.sh
check-siblings.sh                       csn_settings_checker           logrotate-elasticsearch-2017_0823       swarmctl
checkuuidsAllNodes.sh                   csn-update-mibs.sh             make-immutable-streams-mutable.sh       swarmrestart
cipperf.py                              csn-yum-68-upgrade.sh          mcast-tester.py                         techsupport-bundle-grab.sh
cns-dig                                 DD_DISKIO.sh                   ntpd-instantaneous-collect.sh           Tech-Support-Scripts-Bundle.pdf
collect_health_reports.sh               demoParallelWriteToSwarm.sh    parseJsonBuckets.py                     test-volumes-network.sh
copy-streams-to-new-cluster.sh          generateMutableTestStreams.sh  Pcurl.sh                                tmpwatch
createNODEScsv.sh                       generateTestData.sh            performance-profiler.sh                 updateBundle.sh
csn-add-sncfg-to-ip-assignments.sh      getretrieve.sh                 platform-delete-csn-backups.sh          updateTestData.sh
csn-add-to-ip-assignments-from-file.sh  gw-change-log-level.sh         platform-enumerate-vols.sh              uploaddir.sh
csn-allow-68.sh                         hwinfo-dmesg-grab.sh           platform-get-default-search-feed-ip.py  uploader.html
csn-assign-ips.sh                       igmp-querier.pl                platform-read-pss.sh
Code Block
[root@swarmservicenode dist]# ./csn-install-from-zip.sh ../castor-12.0.0-x86_64-probe.zip 

The script unzips the probe and puts the kernel and fsimage in the correct location, creating a named folder indicating what it is.

The net result is:

...

Select the probe from your netboot config and then select update.

Any node you reboot now loads that image.

Get Iperf3

A version of iperf3 is needed from here:

https://iperf.fr/iperf-download.php

The version used for this example is:

iPerf 3.1.3 - DEB package (8 jun 2016 - 8.6 KiB) + libiperf0 3.1.3 - DEB package (53.9 KiB)

Newer versions should work as well. Use the ‘Ubuntu 64 bits / Debian 64 bits / Mint 64 bits (AMD64) by Raoul Gunnar Borenius and Roberto LuMiBreras (sha256)’ packages, as this closely matches the Debian release Swarm uses.

The deb packages are meant to be used with a package manager, but here we just want the precompiled binaries. To get them we can do the following:
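One way to pull the binaries out on the client, assuming the two downloaded packages sit in the current directory: a .deb is an ar(1) archive whose installable files live in an inner data.tar.xz (older packages use data.tar.gz), so no package manager is needed.

```shell
# Pull the payload out of each .deb without any package manager.
# A .deb is an ar archive; data.tar.xz contains the files that would
# normally be installed (usr/bin/iperf3 and the libiperf.so.0* libraries).
for pkg in ./*.deb; do
  [ -e "$pkg" ] || continue      # nothing to do if no packages are present
  ar x "$pkg" data.tar.xz        # extract the payload archive from the .deb
  tar -xf data.tar.xz            # unpack usr/ into the current directory
done
```

After this, a usr/ tree containing the binary and libraries exists alongside the packages.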

...

Code Block
tony@tony-NUC8i7HVK:~/iperf$ tree
.
├── iperf3_3.1.3-1_amd64.deb
├── iperf.tar
├── libiperf0_3.1.3-1_amd64.deb
└── usr
    ├── bin
    │   └── iperf3
    ├── lib
    │   └── x86_64-linux-gnu
    │       ├── libiperf.so.0 -> libiperf.so.0.0.0
    │       └── libiperf.so.0.0.0
    └── share
        ├── doc
        │   ├── iperf3
        │   │   ├── changelog.Debian.gz -> ../libiperf0/changelog.Debian.gz
        │   │   ├── copyright
        │   │   └── README.md.gz -> ../libiperf0/README.md.gz
        │   └── libiperf0
        │       ├── changelog.Debian.gz
        │       ├── copyright
        │       └── README.md.gz
        ├── lintian
        │   └── overrides
        │       └── libiperf0
        └── man
            └── man1
                └── iperf3.1.gz

12 directories, 14 files
tony@tony-NUC8i7HVK:~/iperf$ 

What we need is the iperf3 binary and the lib files.

...

Code Block
tar -cvf iperf.tar usr/*

Then transfer it to the CSN; in this case that is 10.0.1.26:

Code Block
scp iperf.tar root@10.0.1.26:/root/

Once the packages are on the CSN, we can SSH there and do another transfer.

...

Here we are transferring the iperf tarball to a storage node’s /dev/shm (‘shared memory’):

...

Code Block
ssh root@<storagenodeip>

[root@swarmservicenode ~]# ssh root@172.29.3.0
root@172.29.3.0's password: 
Linux bb089f85f3ed2a8cb7a054d984418b77 5.4.61 #1 SMP Fri Oct 30 20:39:52 UTC 2020 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Feb 22 16:13:48 2021 from 172.29.0.5
root@bb089f85f3ed2a8cb7a054d984418b77:~# 

cd to /dev/shm and untar the iperf tarball:

Code Block
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm# tar -xvf iperf.tar
usr/bin/
usr/bin/iperf3
usr/lib/
usr/lib/x86_64-linux-gnu/
usr/lib/x86_64-linux-gnu/libiperf.so.0
usr/lib/x86_64-linux-gnu/libiperf.so.0.0.0
usr/share/
usr/share/lintian/
usr/share/lintian/overrides/
usr/share/lintian/overrides/libiperf0
usr/share/man/
usr/share/man/man1/
usr/share/man/man1/iperf3.1.gz
usr/share/doc/
usr/share/doc/iperf3/
usr/share/doc/iperf3/changelog.Debian.gz
usr/share/doc/iperf3/copyright
usr/share/doc/iperf3/README.md.gz
usr/share/doc/libiperf0/
usr/share/doc/libiperf0/changelog.Debian.gz
usr/share/doc/libiperf0/copyright
usr/share/doc/libiperf0/README.md.gz
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm# 

Then we can go to usr/bin and try to run iperf3:

Code Block
root@bb089f85f3ed2a8cb7a054d984418b77:/usr/bin# cd /dev/shm/usr/bin
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# ./iperf3 -s
./iperf3: error while loading shared libraries: libiperf.so.0: cannot open shared object file: No such file or directory
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# 

It tells us that it is missing the libiperf library. That is fine, because we brought our own:

Code Block
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# export LD_LIBRARY_PATH=/dev/shm/usr/lib/x86_64-linux-gnu

This export is only valid for this shell session, but it tells the dynamic loader where to find the library iperf needs.
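As an aside, the variable can also be scoped to a single invocation with the `VAR=value command` form instead of being exported; for this example that would be `LD_LIBRARY_PATH=/dev/shm/usr/lib/x86_64-linux-gnu /dev/shm/usr/bin/iperf3 -s`. A minimal, self-contained demonstration of the scoping (DEMO stands in for LD_LIBRARY_PATH):

```shell
# VAR=value command sets the variable only for that one child process;
# the surrounding shell session is untouched.
DEMO=/dev/shm/usr/lib/x86_64-linux-gnu sh -c 'echo "inside: $DEMO"'
echo "outside: ${DEMO:-unset}"   # the variable does not leak into this shell
```

This is handy when you do not want the library path affecting every command run later in the session.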

Now if we go back:

Code Block
root@bb089f85f3ed2a8cb7a054d984418b77:~# cd /dev/shm/usr/bin
root@bb089f85f3ed2a8cb7a054d984418b77:/dev/shm/usr/bin# ./iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

We have our iperf server running on port 5201.

...

Code Block
[root@swarmservicenode dist]# iperf3 -c 172.29.3.0 -f K -w 500K -P 6
Connecting to host 172.29.3.0, port 5201
[  4] local 172.29.0.5 port 41988 connected to 172.29.3.0 port 5201
[  6] local 172.29.0.5 port 41990 connected to 172.29.3.0 port 5201
[  8] local 172.29.0.5 port 41992 connected to 172.29.3.0 port 5201
[ 10] local 172.29.0.5 port 41994 connected to 172.29.3.0 port 5201
[ 12] local 172.29.0.5 port 41996 connected to 172.29.3.0 port 5201
[ 14] local 172.29.0.5 port 41998 connected to 172.29.3.0 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   243 MBytes  248140 KBytes/sec    6    157 KBytes       
[  6]   0.00-1.00   sec   249 MBytes  254353 KBytes/sec    0    322 KBytes       
[  8]   0.00-1.00   sec   229 MBytes  233829 KBytes/sec    2    160 KBytes       
[ 10]   0.00-1.00   sec   219 MBytes  224163 KBytes/sec    0    321 KBytes       
[ 12]   0.00-1.00   sec   220 MBytes  225166 KBytes/sec    2    148 KBytes       
[ 14]   0.00-1.00   sec  95.0 MBytes  97176 KBytes/sec   39   46.7 KBytes       
[SUM]   0.00-1.00   sec  1.22 GBytes  1282865 KBytes/sec   49             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   1.00-2.00   sec   270 MBytes  276357 KBytes/sec    0    171 KBytes       
[  6]   1.00-2.00   sec   269 MBytes  275670 KBytes/sec    0    322 KBytes       
[  8]   1.00-2.00   sec   259 MBytes  265622 KBytes/sec    0    180 KBytes       
[ 10]   1.00-2.00   sec   246 MBytes  251707 KBytes/sec    0    322 KBytes       
[ 12]   1.00-2.00   sec   239 MBytes  244482 KBytes/sec    0    170 KBytes       
[ 14]   1.00-2.00   sec  99.2 MBytes  101694 KBytes/sec   23   69.3 KBytes       
[SUM]   1.00-2.00   sec  1.35 GBytes  1415501 KBytes/sec   23             
- - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  2.26 GBytes  236932 KBytes/sec   12             sender
[  4]   0.00-10.00  sec  2.26 GBytes  236926 KBytes/sec                  receiver
[  6]   0.00-10.00  sec  2.88 GBytes  301715 KBytes/sec    0             sender
[  6]   0.00-10.00  sec  2.88 GBytes  301715 KBytes/sec                  receiver
[  8]   0.00-10.00  sec  2.16 GBytes  226466 KBytes/sec    6             sender
[  8]   0.00-10.00  sec  2.16 GBytes  226454 KBytes/sec                  receiver
[ 10]   0.00-10.00  sec  2.58 GBytes  270994 KBytes/sec    0             sender
[ 10]   0.00-10.00  sec  2.58 GBytes  270994 KBytes/sec                  receiver
[ 12]   0.00-10.00  sec  2.08 GBytes  218403 KBytes/sec    6             sender
[ 12]   0.00-10.00  sec  2.08 GBytes  218390 KBytes/sec                  receiver
[ 14]   0.00-10.00  sec  1.54 GBytes  161421 KBytes/sec   76             sender
[ 14]   0.00-10.00  sec  1.54 GBytes  161421 KBytes/sec                  receiver
[SUM]   0.00-10.00  sec  13.5 GBytes  1415930 KBytes/sec  100             sender
[SUM]   0.00-10.00  sec  13.5 GBytes  1415899 KBytes/sec                  receiver

And this is what it will look like on the storage node side.

...

You can then use iperf3 to validate that the network is giving the expected throughput in both directions.
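For the reverse direction there is no need to swap the server and client roles: iperf3’s -R flag makes the server transmit and the client receive. A sketch using this example’s storage node IP and the same options as above:

```shell
# Same client options as before, plus -R to reverse the data direction
# (the server at 172.29.3.0 sends, this client receives).
iperf3 -c 172.29.3.0 -f K -w 500K -P 6 -R
```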

 

...
