...

  • A mini DataCore Swarm environment is now running. Use the Content Portal and Storage UI as you would with a production environment.

  • Exec into this container, which has a few S3 clients installed and configured:

    Code Block
    $ docker exec -it caringo42_s3ql_1 bash
    # showconfigs
    # s3cmd ls
    # fallocate -l 1G 1G
    # rclone -v copy 1G caringo:mybucket
  • Configure an external S3 client to use this environment. An /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) entry is needed on the S3 client machine to map the domain backup42 to the IP of the machine running docker. Use 127.0.0.1 if using Docker for Desktop and the S3 client is on the local machine.

    Create a different domain using the Portal, or use docker run ... -e DOMAIN=mylaptop.example.com ... init.sh to change the name of the domain the init script creates.
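
    For example, with Docker for Desktop and the S3 client on the same machine, the hosts entry is:

    Code Block
    127.0.0.1   backup42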

  • All logs are visible in the syslog container, where support tools can also be run: use swarmctl to see or change swarm settings, or indexer-enumerator.sh to list all objects.

    Code Block
    $ docker exec -it caringo42_syslog_1 bash
    # tail -F cloudgateway_audit.log &
    # swarmctl -d swarm -a
    # indexer-enumerator.sh
  • Bring up an existing environment (after a reboot, or after stopping it with docker run … stop.sh) by running docker run … up.sh. Use the setting docker run -e PROJECT_RESTART=always … init.sh to start it automatically on reboot.
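
    For example, a sketch following the same docker run pattern used elsewhere on this page (assuming REGISTRY_URL, REGISTRY_USER and REGISTRY_PASSWORD are already exported):

    Code Block
    docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD ${REGISTRY_URL}caringo:demo stop.sh
    docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD ${REGISTRY_URL}caringo:demo up.sh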

  • If the docker server has a service already using ports 80 and 443, resulting in “ERROR: for caringo42_https_1 Cannot start service … 0.0.0.0:443: bind: address already in use”, change those published ports by adding:

    Code Block
    docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -e HTTPS_HTTP_PORT=4280 -e HTTPS_HTTPS_PORT=4243 -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD ${REGISTRY_URL}caringo:demo init.sh
  • Put any configuration to reuse into a text file and use --env-file my.env to simplify the docker run commands.

    Code Block
    cat > my.env <<EOF
    DOCKER_INTERFACE=mylaptop.example.com
    PROJECT_RESTART=always
    SWARM_CLUSTER_NAME=swarmtest.example.com
    SWARM_DISK_SIZE=10g
    DOMAIN=mylaptop.example.com
    GATEWAY_ADMIN_PASSWORD=datacore
    EOF

    Code Block
    docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock --env-file my.env -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD ${REGISTRY_URL}caringo:demo init.sh

    WARNING: changing the Swarm cluster.name loses the “persistent settings UUID”, including the Search Feed, which must be recreated by re-running init.sh.

  • The default 2TB license is sufficient. To use a license file devlicense.txt instead, add SWARM_CFG_1=license.url = file:///license/devlicense.txt to my.env and copy the license into a volume shared with the syslog and swarm containers.

    • Bring up the "syslog" service, which has the new license volume, using "--pull always" to download the latest caringo:demo image.

      Code Block
      docker run -ti --pull always --rm -v /var/run/docker.sock:/var/run/docker.sock -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD --env-file my.env ${REGISTRY_URL}caringo:demo up.sh syslog
    • Copy the license file into the volume in the syslog container.

      Code Block
      docker cp /tmp/devlicense.txt caringo42_syslog_1:/var/www/html/license/
    • Now rerun "up.sh" so swarm comes up with the new "license.url" setting and license volume.

      Code Block
      docker run -ti --pull always --rm -v /var/run/docker.sock:/var/run/docker.sock -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD --env-file my.env ${REGISTRY_URL}caringo:demo up.sh
  • Add this to the my.env to allow anonymous read and write access to Gateway, e.g. to test an application that makes requests directly to Swarm. This assumes the docker environment is only accessible by trusted clients.

    Code Block
    EXTRA_ROOT_POLICY_STATEMENTS={"Effect": "Allow", "Sid": "Anonymous Full Access", "Action": ["*"], "Resource": "*", "Principal": {"anonymous": ["*"]}}
  • To see all variables used to configure this environment, or to run docker-compose directly, exec into the test container:

    Code Block
    % docker exec -it caringo42_test_1 bash
    # ./diff-env.sh

    ...shows non-default container config...

    Code Block
    # docker-compose ps
    # less config.env
  • Use the images without creating Swarm, e.g. to run a support tool:

    Code Block
    docker run -v /tmp/for-support:/tmp ${REGISTRY_URL}caringo-syslog:stable /root/dist/indexer-grab.sh -t elasticsearch1.example.com:9200
    ls /tmp/for-support
    indexgrab-175f84275a6d-07234012172020.tar.gz
  • Add -e ADD_COMPOSE_FILE=:docker-compose-systemd.yml (the colon prefix is required) to make the gateway and elasticsearch containers use systemd, more closely matching a regular environment.
    On Docker Desktop for Mac, this currently requires "deprecatedCgroupv1": true in ~/Library/Group Containers/group.com.docker/settings.json.
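
    For example, a sketch reusing the init.sh invocation shown above:

    Code Block
    docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock --env-file my.env -e ADD_COMPOSE_FILE=:docker-compose-systemd.yml -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD ${REGISTRY_URL}caringo:demo init.sh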

  • Run multiple gateways behind the haproxy load balancer by adding -e GATEWAY_SCALE=3.
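
    This setting can also go in the my.env file from above:

    Code Block
    GATEWAY_SCALE=3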

  • Add these environment variables to bring up an environment that does not use elasticsearch. This means no Search Feed is created, and object listings, Swarm metrics, and Gateway metering and quotas are disabled.

    Code Block
    -e ELASTICSEARCH_SCALE=0 -e ESHOST= -e INDEXER_HOSTS= -e SKIP_VERIFY_ELASTICSEARCH=true -e GATEWAY_METERING=false
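
    The same settings in --env-file form, for use with --env-file my.env:

    Code Block
    ELASTICSEARCH_SCALE=0
    ESHOST=
    INDEXER_HOSTS=
    SKIP_VERIFY_ELASTICSEARCH=true
    GATEWAY_METERING=false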

  • Use tcpdump to monitor the http traffic to/from the Gateway S3 port:

    Code Block
    docker run --net=container:caringo42_cloudgateway_1 fish/tcpdump-docker -i eth0 -vv -s 0 -A port 8085

    Code Block
    caringo42_s3ql_1.caringo42_default.37592 > 914f010e83e6.8085: Flags [P.], cksum 0x5b00 (incorrect -> 0xc540), seq 1305:1957, ack 1051, win 501, options [nop,nop,TS val 1815248495 ecr 3455121625], length 652
    E...g.@.@.x.............@...*g......[......
    l2~o....PUT /locker/hello.txt?legal-hold HTTP/1.1
    Host: backup42:8085
    Accept-Encoding: identity
    Content-MD5: 1B2M2Y8AsgTpgAmY7PhCfg==
    User-Agent: aws-cli/2.2.22 Python/3.8.8 Linux/5.10.25-linuxkit exe/x86_64.centos.7 prompt/off command/s3api.put-object-legal-hold
    X-Amz-Date: 20210723T030338Z
    X-Amz-Content-SHA256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    Authorization: AWS4-HMAC-SHA256 Credential=4ed7e53e89b25a911a5c62557dd5fdc4/20210723/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=42496864845c0ddf8b450cd68756dd1ffe4ac1d01fff51dada3d0127a251a35d

  • Remove the environment, deleting all containers and volumes and reclaiming any space used, with clean.sh:

    Code Block
    docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_INTERFACE=localhost -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD -e GATEWAY_ADMIN_PASSWORD=caringo ${REGISTRY_URL}caringo:demo clean.sh

...