Occasional shuffling of containers in swarm deployment.
Occasionally Docker Swarm will shuffle which containers are running on which nodes, and I'm still trying to figure out why. For example, earlier this morning:
The containers were running stably on nodes a, b, c, but then the gracedb container on b switched to a different node. The logs for gracedb/traefik/haproxy showed these warnings coming from HAProxy (webgateway_dockersocket):
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: Stopping backend dockerbackend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: Stopping frontend dockerfrontend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: Proxy dockerbackend stopped (FE: 0 conns, BE: 166378 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: Proxy dockerfrontend stopped (FE: 23768 conns, BE: 0 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [WARNING] 110/144524 (1) : Exiting Master process...
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [WARNING] 110/144524 (1) : Exiting Master process...
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [WARNING] 110/144524 (7) : Stopping backend dockerbackend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [WARNING] 110/144524 (7) : Stopping frontend dockerfrontend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [WARNING] 110/144524 (7) : Stopping frontend GLOBAL in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [WARNING] 110/144524 (7) : Proxy dockerbackend stopped (FE: 0 conns, BE: 166378 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [WARNING] 110/144524 (7) : Proxy dockerfrontend stopped (FE: 23768 conns, BE: 0 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [WARNING] 110/144524 (7) : Proxy GLOBAL stopped (FE: 0 conns, BE: 0 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [WARNING] 110/144524 (7) : Stopping backend dockerbackend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [WARNING] 110/144524 (7) : Stopping frontend dockerfrontend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: Stopping backend dockerbackend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: Stopping frontend dockerfrontend in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: Proxy dockerbackend stopped (FE: 0 conns, BE: 63211 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: Proxy dockerfrontend stopped (FE: 9029 conns, BE: 0 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [WARNING] 110/144524 (7) : Stopping frontend GLOBAL in 0 ms.
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [WARNING] 110/144524 (7) : Proxy dockerbackend stopped (FE: 0 conns, BE: 63211 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [WARNING] 110/144524 (7) : Proxy dockerfrontend stopped (FE: 9029 conns, BE: 0 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [WARNING] 110/144524 (7) : Proxy GLOBAL stopped (FE: 0 conns, BE: 0 conns).
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.3.zl5suk39kwwg50fty1u6f3vyt: [ALERT] 110/144524 (1) : Current worker #1 (7) exited with code 0 (Exit)
Apr 21 14:45:24 gracedb_docker_webgateway_dockersocket.1.ntddvqu1j5e3eqms12t8h12d9: [ALERT] 110/144524 (1) : Current worker #1 (7) exited with code 0 (Exit)
followed in the same log file by these errors from traefik (webgateway_webgateway):
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: 131.215.113.226 - - [21/Apr/2023:14:45:24 +0000] "GET /api/events/G989764/log/ HTTP/1.1" 200 1082 "-" "-" 621950 "gracedb@docker" "http://10.0.0.47:80" 95ms
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: time="2023-04-21T14:45:24Z" level=error msg="accept tcp [::]:443: use of closed network connection" entryPointName=websecure
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: time="2023-04-21T14:45:24Z" level=error msg="accept tcp [::]:80: use of closed network connection" entryPointName=web
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: time="2023-04-21T14:45:24Z" level=error msg="Error while starting server: http: Server closed" entryPointName=web
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: time="2023-04-21T14:45:24Z" level=error msg="Error while starting server: http: Server closed" entryPointName=websecure
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: time="2023-04-21T14:45:24Z" level=error msg="Error while starting server: http: Server closed" entryPointName=web
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: time="2023-04-21T14:45:24Z" level=error msg="close tcp [::]:80: use of closed network connection" entryPointName=web
Apr 21 14:45:24 gracedb_docker_webgateway_webgateway.2.rn0rimzkaijqp324b90r9ts09: time="2023-04-21T14:45:24Z" level=error msg="Error while starting server: http: Server closed" entryPointName=websecure
On the gracedb side, there was a string of errors coming from Kafka:
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: 2023-04-21 14:45:24,354 INFO stopped: shibd (exit status 0)
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: internal kafka error: KafkaError{code=_TRANSPORT,val=-195,str="sasl_ssl://kb-2.prod.hop.scimma.org:9092/2: Disconnected (after 1245141ms in state UP)"}
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: internal kafka error: KafkaError{code=_TRANSPORT,val=-195,str="sasl_ssl://kb-2.prod.hop.scimma.org:9092/2: Disconnected (after 710755ms in state UP, 1 identical error(s) suppressed)"}
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: internal kafka error: KafkaError{code=_TRANSPORT,val=-195,str="sasl_ssl://kb-1.prod.hop.scimma.org:9092/1: Disconnected (after 710652ms in state UP)"}
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: internal kafka error: KafkaError{code=_TRANSPORT,val=-195,str="sasl_ssl://kb-1.prod.hop.scimma.org:9092/1: Disconnected (after 11863073ms in state UP, 1 identical error(s) suppressed)"}
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: internal kafka error: KafkaError{code=_TRANSPORT,val=-195,str="sasl_ssl://kb-0.prod.hop.scimma.org:9092/0: Disconnected (after 22878079ms in state UP)"}
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: internal kafka error: KafkaError{code=_TRANSPORT,val=-195,str="sasl_ssl://kb-1.prod.hop.scimma.org:9092/1: Disconnected (after 9059107ms in state UP, 1 identical error(s) suppressed)"}
Apr 21 14:45:24 gracedb-swarm-playground-us-west-2b-docker-mgr-01 gracedb_docker_gracedb_gracedb.3.rn3qo0xic2k63c3j2oflftrbz: internal kafka error: KafkaError{code=_TRANSPORT,val=-195,str="sasl_ssl://kb-2.prod.hop.scimma.org:9092/2: Disconnected (after 22094640ms in state UP, 1 identical error(s) suppressed)"}
But given that they all appear within the same second, it's hard to tell what caused what. If there's any saving grace, it's that the service rolled over to the other nodes automatically... but users connected to node b probably saw disconnection errors on the client side.
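To narrow down why swarm rescheduled the task in the first place, it may help to inspect the task history and the daemon events around the incident. A minimal sketch, assuming the service names match the scale commands below (the time window is just the incident window from the logs above):

# Show the task history, including the node each task ran on and the
# error, if any, that caused swarm to reschedule it
docker service ps --no-trunc gracedb_gracedb

# On the affected node, replay daemon-level container events around the incident
docker events --since "2023-04-21T14:40:00" --until "2023-04-21T14:50:00" --filter type=container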
To recover:
scale the services down to two replicas:
docker service scale webgateway_webgateway=2 webgateway_dockersocket=2 gracedb_memcached=2 gracedb_gracedb=2
then back up to three:
docker service scale webgateway_webgateway=3 webgateway_dockersocket=3 gracedb_memcached=3 gracedb_gracedb=3
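Afterwards, it's worth confirming that the replicas landed one per node again:

# Check current placement of the running tasks
docker service ps gracedb_gracedb --filter desired-state=running

An alternative I haven't tried here would be docker service update --force <service>, which forces swarm to redeploy a service's tasks without changing the replica count.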