502 error with Docker

I have a service listening on port 8080. It is not running in a container.

Then I created an nginx container using the official image:

docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d -p 443:443 -p 80:80 nginx

After that:

# netstat -tupln | grep 443
tcp6       0      0 :::443                  :::*                    LISTEN      3482/docker-proxy
# netstat -tupln | grep 80
tcp6       0      0 :::80                   :::*                    LISTEN      3489/docker-proxy
tcp6       0      0 :::8080                 :::*                    LISTEN      1009/java

Nginx configuration is:

upstream eighty {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name eighty.domain.com;

    location / {
      proxy_pass                        http://eighty;
    }
}

I’ve checked that I’m able to connect to this server with # curl http://127.0.0.1:8080

 <html><head><meta http-equiv='refresh'
 content='1;url=/login?from=%2F'/><script>window.location.replace('/login?from=%2F');</script></head><body
 style='background-color:white; color:white;'>
 ...

It seems to be running well; however, when I try to access it from my browser, nginx returns a 502 Bad Gateway response.

I suspect it may be a visibility problem between a port opened by a non-containerized process and a container. Can a container establish a connection to a port opened by another, non-containerized process?

EDIT

Logs with upstream { server 127.0.0.1:8080; }:

2016/07/13 09:06:53 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:06:53 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"

Logs with upstream { server 0.0.0.0:8080; }:

62.57.217.25 - - [13/Jul/2016:09:00:30 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
2016/07/13 09:00:30 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com"
2016/07/13 09:00:32 [error] 5#5: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:00:32 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"

Any ideas?

Nginx 502 Bad Gateway Docker troubleshooting tips by our experts will help resolve this issue in a jiffy. 

At Bobcares, we offer solutions for every query, big and small, as a part of our Docker Hosting Support Service.

Let’s take a look at how our Support Team is ready to help customers resolve Nginx 502 bad gateway Docker error.

How to resolve Nginx 502 Bad Gateway in Docker

Some of our customers have been coming across the 502 Bad Gateway error message when they attempt to have a docker container with Nginx work as a reverse proxy. Fortunately, our Support Team has come up with several different ways to resolve this specific issue.


Troubleshooting Tips: Nginx 502 Bad Gateway 

Option 1:

  1. First, we have to set the server name. This is done in different server blocks in the Nginx configuration. Our Support Techs would like to point out that we have to use the docker port rather than the host port in this scenario.
    server {
    
      listen 80;
      server_name server2048;
    
      location / {
        proxy_pass "http://server2048:8080";
      }
    
    }
    
    server {
    
      listen 80;
      server_name server1;
    
      location / {
        # We have to refer to docker port and not the host port
         proxy_pass "http://server1:8080";
      }
    
    }
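For the names in these server blocks to resolve, nginx and the upstream services must share a Docker network. A minimal docker-compose sketch (the image names are assumptions; the service names server1 and server2048 match the config above):

```yaml
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./conf:/etc/nginx/conf.d
  server1:
    image: myapp1        # hypothetical image
  server2048:
    image: myapp2        # hypothetical image
# All services join the default compose network, so nginx can reach
# them by service name on their container ports (here 8080).
```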

    Option 2:

    If the above solution did not resolve the issue, our Support Techs recommend tuning proxy_buffer_size as seen below:

    proxy_buffering off;
    proxy_buffer_size 16k;
    proxy_busy_buffers_size 24k;
    proxy_buffers 64 4k;

    proxy_buffer_size defines how much memory Nginx will allocate for each request. This memory is put to use for reading as well as storing the HTTP headers of the response.
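These directives can be set in the http, server, or location context; for example, scoped to the proxy location (a sketch reusing the values above, with the upstream name eighty assumed from the question):

```nginx
location / {
    proxy_pass              http://eighty;
    proxy_buffering         off;
    proxy_buffer_size       16k;
    proxy_busy_buffers_size 24k;
    proxy_buffers           64 4k;
}
```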

    Option 3:

    This solution involves declaring the external network in case the container we are pointing to is defined in a different docker-compose.yml file:

    version: "3"
    
    services:
      webserver:
        image: nginx:1.17.4-alpine
        container_name: ${PROJECT_NAME}-webserver
        depends_on:
          - drupal
        restart: unless-stopped
        ports:
          - 80:80
        volumes:
          - ./docroot:/var/www/html
          - ./nginx-conf:/etc/nginx/conf.d
          - certbot-etc:/etc/letsencrypt
        networks:
          - internal
          - my-passwords
    
    networks:
      my-passwords:
        external: true
        name: my-passwords_default
    
    nginx.conf:
    
    server {
        listen          80;
        server_name     test2.com www.test2.com;
        location / {
            proxy_pass  http://my-passwords:3000/;
        }
    }

    Option 4:

In this scenario, localhost inside the Nginx container points to the container itself. In other words, with an upstream like the one below:

    upstream foo {
      server 127.0.0.1:8080;
    }

    or

    upstream foo {
      server 0.0.0.0:8080;
    }

    This tells Nginx to pass the request to the container’s own localhost, where nothing is listening on port 8080, which is why it fails.

    Alternatively, we can also choose to run Nginx on the same network as the host:

    docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d --net=host nginx

    Furthermore, we do not have to expose any ports in this scenario.
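With --net=host, nginx shares the host’s network namespace, so the original loopback upstream works unchanged (sketch):

```nginx
upstream eighty {
    # 127.0.0.1 now refers to the host itself, not the container
    server 127.0.0.1:8080;
}
```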

    Last but not least, another approach to resolve the issue is to reconfigure the Nginx upstream directive in order to directly connect to the host machine via adding a remote IP address:

    upstream foo {
      # insert your host's IP here
      server 192.168.99.100:8080;
    }

    This makes the container go out through the network stack and reach the host correctly.
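On the default bridge network, the host is typically reachable from a container at the docker0 gateway address. A sketch assuming that default (verify the address on your system first):

```nginx
upstream foo {
    # docker0 gateway address; verify with:
    #   docker network inspect bridge -f '{{(index .IPAM.Config 0).Gateway}}'
    server 172.17.0.1:8080;
}
```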

    Let us know in the comments which troubleshooting tip works for you.

    Conclusion

    To sum up, our skilled Support Engineers at Bobcares demonstrated how to resolve the Nginx 502 Bad Gateway error in Docker.


I have a server running docker-compose. The compose file defines two services: nginx (as a reverse proxy) and back (an API that handles two requests). There is also a database, which is hosted separately rather than on the server (database as a service).

The requests that back handles:
1) get(‘/api’) — the back service simply responds with “the API is working”
2) get(‘/db’) — the back service sends a simple query to the external database (‘SELECT random() as random, current_database() as db’)

Request 1 works fine; on request 2 the back service crashes, nginx keeps running, and the console shows a 502 Bad Gateway error.

  • The nginx service logs show the error: upstream prematurely closed connection while reading response header from upstream.
  • The back service logs show: connection terminated due to connection timeout.

What I have tried:
1) increasing the number of cores and RAM (currently 2 cores and 4 GB of RAM);
2) adding/removing/changing the proxy_read_timeout, proxy_send_timeout, and proxy_connect_timeout parameters;
3) testing the www.test.com/db request via Postman and curl (it fails with the same error);
4) running the code on my local machine without a container or compose and connecting to the same DB through the same pool at the same IP (everything is fine, both requests work and return what they should);
5) changing the worker_processes parameter (tested with the values 1 and auto);
6) adding/removing the proxy_set_header Host $http_host directive, and replacing $http_host with “www.test.com”.

Question:
What else can I try in order to fix the error and make the db request work?

nginx.conf

worker_processes  1;
events {
  worker_connections  1024;
}    
http{
  upstream back-stream {
    server back:8080;
  }
  server {
    listen 80;
    listen [::]:80;
    server_name test.com www.test.com;
    location / {
      root   /usr/share/nginx/html;
      resolver 121.0.0.11;
      proxy_pass http://back-stream;
    }
  }
}

docker-compose.yml

version: '3.9'
services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
       - network
  back:
    image: "mycustomimage"
    container_name: back
    restart: unless-stopped
    ports:
      - '81:8080'
    networks:
       - network
networks:
  network:
    driver: bridge

Creating the database connection pool (TypeScript)

const pool = createPool(`postgres://user:passwor@publicIp:6432/staging?sslmode=disable`, {
  statementTimeout: 'DISABLE_TIMEOUT',
  idleInTransactionSessionTimeout: 'DISABLE_TIMEOUT'
});
// statementTimeout and idleInTransactionSessionTimeout are set to DISABLE_TIMEOUT because
// without these flags the DB cluster returns the errors "unsupported startup parameter:
// statement_timeout" and "unsupported startup parameter: idleInTransactionSessionTimeout".
// This was tested outside the container; on the server it never gets to this stage.

async function exec(q) {
  await pool.query(q);
}

The code of the back service is here.

Traefik is a popular reverse proxy and load balancer for containers. When you are running multiple containers in your Docker environment, you may face the «Bad Gateway» (error 502) issue, which indicates that the reverse proxy is unable to connect to a particular container. There are several reasons why this error may occur and several methods to solve it. Below are some common solutions to resolve the «Bad Gateway» error in Traefik for some containers.

Method 1: Check the container’s health status

If you are encountering the «Bad Gateway» error (502) when using Traefik with some of your containers, it may be due to the fact that the container is not healthy. Traefik relies on the health status of your containers to determine if they are ready to receive traffic. In this tutorial, we will show you how to use container health checks to fix the «Bad Gateway» error in Traefik.

Step 1: Add Health Check to Your Docker Container

To add a health check to your Docker container, you can use the HEALTHCHECK instruction in your Dockerfile. The HEALTHCHECK instruction specifies a command to run to check the health of the container. For example, you can use the following command to check if your container is healthy:

HEALTHCHECK --interval=5m --timeout=3s \
  CMD curl -f http://localhost/ || exit 1

This command will run every 5 minutes to check if the container is healthy. It will use curl to send a request to http://localhost/ and exit with a status code of 1 if the request fails. You can customize this command to fit your application’s needs.
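If the image does not ship curl, a tiny probe script can serve the same role. This is a sketch (the file name healthcheck.py and the probed URL are assumptions), wired in with e.g. HEALTHCHECK CMD python3 /healthcheck.py || exit 1:

```python
# healthcheck.py: exit 0 when the app answers on localhost, 1 otherwise
import sys
import urllib.request


def probe(url="http://localhost/", timeout=3):
    """Return True if `url` responds with a non-5xx status within `timeout` seconds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except Exception:
        # connection refused, DNS failure, timeout, HTTP error, ...
        return False


if __name__ == "__main__":
    sys.exit(0 if probe() else 1)
```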

Step 2: Configure Traefik to Use Health Checks

Once you have added a health check to your Docker container, you need to configure Traefik to use it. To do this, you can add the following labels to your container:

labels:
  - "traefik.enable=true"
  - "traefik.http.services.<service_name>.loadbalancer.server.port=<container_port>"
  - "traefik.http.routers.<router_name>.rule=Host(`<domain_name>`)"
  - "traefik.http.routers.<router_name>.service=<service_name>"
  - "traefik.http.routers.<router_name>.entrypoints=<entrypoint_name>"
  - "traefik.http.services.<service_name>.healthcheck.path=/"
  - "traefik.http.services.<service_name>.healthcheck.interval=5s"
  - "traefik.http.services.<service_name>.healthcheck.timeout=3s"
  - "traefik.http.services.<service_name>.healthcheck.port=<container_port>"

Here’s what each of these labels does:

  • "traefik.enable=true": Enables Traefik for this container
  • "traefik.http.services.<service_name>.loadbalancer.server.port=<container_port>": Specifies the port on which the container is listening
  • "traefik.http.routers.<router_name>.rule=Host(<domain_name>)": Specifies the domain name for the router
  • "traefik.http.routers.<router_name>.service=<service_name>": Specifies the name of the service to use for this router
  • "traefik.http.routers.<router_name>.entrypoints=<entrypoint_name>": Specifies the entrypoint to use for this router
  • "traefik.http.services.<service_name>.healthcheck.path=/": Specifies the path to use for the health check
  • "traefik.http.services.<service_name>.healthcheck.interval=5s": Specifies the interval at which to perform the health check
  • "traefik.http.services.<service_name>.healthcheck.timeout=3s": Specifies the timeout for the health check
  • "traefik.http.services.<service_name>.healthcheck.port=<container_port>": Specifies the port on which the container is listening

Step 3: Restart Your Docker Container

After adding the health check and configuring Traefik to use it, you need to restart your Docker container for the changes to take effect. You can do this by running the following command:

docker restart <container_name>
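As an alternative to baking the check into the Dockerfile, the same health check can be declared in docker-compose (a sketch; the service and image names are assumptions):

```yaml
services:
  myapp:                      # assumed service name
    image: myapp:latest       # assumed image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 5m
      timeout: 3s
      retries: 3
```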

Method 2: Verify the Traefik configuration

To verify the Traefik configuration and fix the «Bad gateway» (error 502) for some containers, follow the steps below:

  1. First, check the Traefik logs to see if there are any errors related to the affected containers. You can use the following command to view the logs:

     docker logs <traefik_container_name>

  2. If there are no errors in the Traefik logs, check the configuration file to ensure that the affected containers are correctly defined. Here is an example configuration for a container:

    [http.routers.myapp]
      rule = "Host(`myapp.example.com`)"
      service = "myapp"
      [http.routers.myapp.tls]
        certResolver = "myresolver"
    
    [http.services.myapp.loadBalancer]
      [[http.services.myapp.loadBalancer.servers]]
        url = "http://myapp:80"

    This configuration defines a router for a container named «myapp», which listens on the domain «myapp.example.com» and uses a TLS certificate resolver named «myresolver». The load balancer for the «myapp» service is defined as a single server with the URL «http://myapp:80».

  3. If the configuration file looks correct, verify that the container is running and listening on the correct port. You can use the following command to check the container status:

     docker ps

    This will show a list of all running containers. Find the container that is causing the «Bad gateway» error and check its status.

  4. If the container is running and listening on the correct port, check the network configuration to ensure that the container is connected to the same network as Traefik. Here is an example command to check the network configuration:

    docker inspect myapp --format '{{json .NetworkSettings.Networks}}'

    This will show the network configuration for the «myapp» container in JSON format. Check that the container is connected to the same network as Traefik.

  5. If all of the above steps look correct, try restarting the affected container and see if that resolves the «Bad gateway» error. Here is an example command to restart a container:

     docker restart myapp

    This will restart the «myapp» container.

By following the above steps, you should be able to verify the Traefik configuration and fix the «Bad gateway» (error 502) for some containers.

Method 3: Ensure the container has the correct network settings

Here are the steps to fix Traefik «Bad gateway» (error 502) for some containers using «Ensure the container has the correct network settings»:

  1. Check the network settings of the container that is having the issue:

     docker inspect <container_name>

  2. Ensure that the container is connected to the same network as Traefik:

     docker network connect <traefik_network> <container_name>

  3. Check the labels of the container that is having the issue:

     docker inspect <container_name> | grep "Labels"

  4. Ensure that the container has the required labels for Traefik:

     docker run -d --name=<container_name> --label "traefik.http.routers.<router_name>.rule=Host(`<domain_name>`)" --label "traefik.http.services.<service_name>.loadbalancer.server.port=<port_number>" <image_name>

  5. Restart the container and Traefik:

     docker restart <container_name>
     docker restart <traefik_container_name>

  6. Verify that the issue has been resolved by checking the Traefik dashboard or accessing the container through the domain name.

That’s it! By following these steps, you should be able to fix Traefik «Bad gateway» (error 502) for some containers using «Ensure the container has the correct network settings».
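The per-command steps above can also be captured declaratively. In docker-compose, attaching the app to Traefik’s network looks roughly like this (the network name traefik-net is an assumption; substitute your actual Traefik network):

```yaml
# sketch: keep the app container on the same network as Traefik
services:
  myapp:
    image: myapp:latest       # assumed image
    networks:
      - traefik-net

networks:
  traefik-net:
    external: true            # the network Traefik is already attached to
```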

Method 4: Restart the affected container

To fix Traefik’s «Bad gateway» error 502 for some containers, you can try restarting the affected container. Here are the steps to do it:

  1. First, identify the container that is causing the error. You can use the following command to list all the running containers:

     docker ps

  2. Once you have identified the container, stop it using the following command:

    docker stop <container_name>
  3. Then, start the container again using the following command:

    docker start <container_name>
  4. Finally, check if the container is running correctly by checking its logs:

    docker logs <container_name>

    If the container is running correctly, you should see a message indicating that it has started successfully.

Here is an example of how to restart a container named «my_container»:

docker stop my_container
docker start my_container
docker logs my_container

This should fix the «Bad gateway» error 502 for the affected container.

Method 5: Increase the timeouts in Traefik configuration

If you are experiencing «Bad gateway» (error 502) for some containers in Traefik, it could be due to the default timeout values being too low. To increase the timeouts in Traefik configuration, follow the steps below:

  1. Open your Traefik configuration file (traefik.toml or traefik.yml).
  2. Locate the [backends] section and find the backend for the container that is experiencing the «Bad gateway» error.
  3. Add the following lines to the backend configuration block:
[backends.backend_name.loadBalancer]
  # Increase the timeout values (in seconds) as needed
  # These values are just examples
  idleTimeout = 180
  responseHeaderTimeout = 60
  # Set the server timeouts to the same values as the load balancer
  serverTimeouts = { "idleTimeout": 180, "responseHeaderTimeout": 60 }
  4. Save the configuration file and restart Traefik.

With these changes, Traefik will now wait longer for the container to respond before timing out, which should help to resolve the «Bad gateway» error.

Note: You may need to adjust the timeout values based on your specific use case and the resources available on your server.

Comments

@benedict-odonovan

  • I have tried with the latest version of my channel (Stable or Edge)
  • I have uploaded Diagnostics
  • Diagnostics ID: 2DFBB5B6-D613-45B4-B759-05D5E0D65EA1/20200401101227

Expected behavior

Docker-compose runs continuously.

Actual behavior

After some seemingly random period of time (normally longer than 10 minutes) all running docker containers fail with «Unexpected API error (HTTP code 502): Bad response from Docker engine».

Attempting to call docker-compose again returns «open \.\pipe\docker_engine_linux: The system cannot find the file specified.»

To run docker-compose again I need to restart docker from the tray.

Information

  • Windows Version: Windows Server 2019 Standard
  • Docker Desktop Version: 2.2.0.4 (43472)
  • Are you running inside a virtualized Windows e.g. on a cloud server or on a Mac VM: Yes, inside a VMware 7.1 VM

This is on a fresh install of the latest stable Docker Desktop on a newly provisioned VM inside VMWare. It happens every time I run docker-compose up, with the problem occurring after different periods of time.

Looking at the logs I’ve noticed that every crash is accompanied by:

[11:03:48.026][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="Multiplexer main loop failed with The network connection was aborted by the local system."
[11:03:48.027][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="Event trace:"
...
...
[11:03:48.253][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="End of state dump"
[11:03:48.253][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="error CloseWrite to: file has already been closed"
[11:03:48.257][GoBackendProcess  ][Warning] time="2020-04-01T11:03:48+01:00" msg="Resyncer ports: while watching docker events: unexpected EOF"
[11:03:48.258][GoBackendProcess  ][Info   ] time="2020-04-01T11:03:48+01:00" msg="disconnected data connection: multiplexer is offline"
[11:03:48.258][GoBackendProcess  ][Error  ] time="2020-04-01T11:03:48+01:00" msg="error accepting subconnection: accept: multiplexer is not running"
[11:03:48.258][GoBackendProcess  ][Info   ] time="2020-04-01T11:03:48+01:00" msg="established connection to vpnkit-forwarder"
[11:03:48.259][GoBackendProcess  ][Info   ] time="2020-04-01T11:03:48+01:00" msg="listening on unix:\\\\.\\pipe\\dockerVpnkitData for data connection"
[11:03:48.259][GoBackendProcess  ][Warning] time="2020-04-01T11:03:48+01:00" msg="Resyncer volumes: while watching docker events: unexpected EOF"
[11:03:48.259][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="error copying: file has already been closed"
[11:03:48.259][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="error CloseWrite to: An existing connection was forcibly closed by the remote host."
[11:03:48.259][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="error copying: file has already been closed"
[11:03:48.259][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="error CloseWrite to: An existing connection was forcibly closed by the remote host."
[11:03:48.260][VpnKit            ][Error  ] vpnkit.exe: Vmnet.Server.listen: read EOF so closing connection
[11:03:48.260][VpnKitBridge      ][Info   ] time="2020-04-01T11:03:48+01:00" msg="error copying: file has already been closed"
[11:03:48.260][VpnKit            ][Info   ] vpnkit.exe: Vmnet.Server.disconnect
[11:03:48.260][VpnKit            ][Info   ] vpnkit.exe: Vmnet.Server.listen returning Ok()
[11:03:48.260][VpnKit            ][Info   ] vpnkit.exe: TCP/IP stack disconnected

For a copy of the logs with the state dump included: log sample.txt

Steps to reproduce the behavior

Docker-compose:

version: "3.3"
services:
  rabbitmq:
    image: rabbitmq:3.8-management
    hostname: rabbitmq
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - somenetwork

networks:
  somenetwork:
    driver: bridge

  1. Run docker-compose up
  2. Wait 10-40 minutes

@vytautas-petrikas

A similar issue occurs on versions 2.2.0.5, 2.3.0.1, 2.3.0.2, 2.3.0.3, and 2.3.0.4 on Windows 10 Enterprise build 17763.

@robodude666

Same issue on 2.2.0.5 (43884) on Windows 10 Enterprise 1809 (Build 17763.1158).

About once an hour Docker will crash, requiring a restart of Docker Engine.

Diagnostics ID: E2DC3A5C-A58E-4CB1-BF50-95F0A93B9D80/20200509174318

λ docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:23:10 2020
 OS/Arch:           windows/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

@robodude666

Seems to have been fixed as part of 2.3.0.2 (45183); looks like there was a leak?

@mat007

@robodude666

@mat007 I thought so… but nope, issue still exists :(. Just much less frequent.

@vytautas-petrikas

Just tested with the latest stable build and this issue is still relevant

@qootec

Same here on:

  • Engine 19.03.8
  • Compose 1.25.5
  • Windows 10.0.17763

Environment:
5 Ubuntu containers, quite busy with network traffic between them and to the cloud.

Observed behavior:
All services suddenly drop:
Unexpected API error for <<>> (HTTP code 502)
Response body:
Bad response from Docker engine

Commands such as docker ps are unable to contact the Docker daemon.
Restarting docker allows me to restart my docker-compose configuration.

I see this error mentioned as far back as 2017, so I guess this is a regression?

Thanks,
Johan

@mlilienthal

Similar problem here:

Environment:

  • tensorflow/tensorflow:latest-py3-jupyter used as a devcontainer in visual studio code.
  • Windows 10 17763
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:43:18 2020
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:49:27 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Behavior:

  • The container stops responding a few minutes after starting a longer-running task (in this case model training).
  • Resources (CPU/memory) do not seem to be an issue here.
  • VSCode loses its connection to the container.
  • Commands are unable to contact the Docker server; a restart of Docker is required.

Logs:

[13:25:36.962][VpnKitBridge      ][Info   ] error copying: file has already been closed
[13:25:36.963][VpnKitBridge      ][Info   ] error copying: file has already been closed
[13:25:36.963][VpnKitBridge      ][Info   ] error CloseWrite to: An existing connection was forcibly closed by the remote host.
[13:25:36.963][VpnKitBridge      ][Info   ] error copying: file has already been closed
[13:25:36.963][VpnKitBridge      ][Info   ] error CloseWrite to: An existing connection was forcibly closed by the remote host.
[13:25:36.963][VpnKitBridge      ][Info   ] error dialing remote endpoint filesystem-event: connection refused
[13:25:36.963][VpnKitBridge      ][Info   ] error CloseWrite to: An existing connection was forcibly closed by the remote host.
[13:25:36.963][ApiProxy          ][Info   ] proxy >> GET /v1.40/exec/a46ae14d73698aa255d61ec300bd98b0ff1b8cbb13001f2e01fa9130752de06f/json\n
[13:25:36.963][ApiProxy          ][Info   ] proxy >> GET /v1.40/exec/946927957a2896112735619629659018ba60e9465c11745a8f5eb7c0f52f6e50/json\n
[13:25:36.963][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[13:25:36.963][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[13:25:36.964][ApiProxy          ][Info   ] proxy >> GET /v1.40/exec/b30a2b4023d93e7dd0b7ab927298894590690f7011d83b0c29fcbe3a7e4be207/json\n
[13:25:36.964][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[13:25:36.964][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[13:25:36.965][ApiProxy          ][Info   ] proxy >> GET /v1.40/exec/d7830556aa49d70d164457174d69b6d4829bfd5ea316456d2a56ab2d46fdd1e6/json\n
[13:25:36.965][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[13:25:36.965][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[13:25:36.967][ApiProxy          ][Info   ] proxy << POST /v1.40/exec/a46ae14d73698aa255d61ec300bd98b0ff1b8cbb13001f2e01fa9130752de06f/start (3h19m15.7503046s)\n
[13:25:36.959][GoBackendProcess  ][Info   ] disconnected data connection: multiplexer is offline
[13:25:36.968][GoBackendProcess  ][Warning] Resyncer volumes: while watching docker events: unexpected EOF
[13:25:36.968][GoBackendProcess  ][Error  ] error accepting subconnection: accept: multiplexer is not running
[13:25:36.968][ApiProxy          ][Info   ] proxy << POST /v1.40/exec/b30a2b4023d93e7dd0b7ab927298894590690f7011d83b0c29fcbe3a7e4be207/start (3h19m16.7793677s)\n
[13:25:36.968][GoBackendProcess  ][Info   ] established connection to vpnkit-forwarder
[13:25:36.968][GoBackendProcess  ][Info   ] listening on unix:\\\\.\\pipe\\dockerVpnkitData for data connection
[13:25:36.968][GoBackendProcess  ][Warning] Resyncer ports: while watching docker events: unexpected EOF
[13:25:36.969][ApiProxy          ][Info   ] proxy << POST /v1.40/exec/d7830556aa49d70d164457174d69b6d4829bfd5ea316456d2a56ab2d46fdd1e6/start (3h19m17.2523595s)\n
[13:25:36.973][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[13:25:36.973][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[13:25:36.974][ApiProxy          ][Info   ] proxy << POST /v1.40/exec/946927957a2896112735619629659018ba60e9465c11745a8f5eb7c0f52f6e50/start (3h19m15.7572911s)\n
[13:25:37.302][VpnKitBridge      ][Info   ] Proxy wsl2-bootstrap-expose-ports: context is done before proxy is established
[13:25:37.303][VpnKitBridge      ][Info   ] Error: accept: multiplexer is not running
[13:25:37.303][VpnKitBridge      ][Fatal  ] accept: multiplexer is not running
[13:25:37.341][VpnKitBridge      ][Error  ] Process died

Thanks,

Martin

@ScottGuymer

I’m seeing the same issue here. I often see it during docker build tasks in addition to running containers via docker-compose.

I have captured the logs of it running and then crashing. I see a lot of VpnKitBridge traffic in the logs. I am using a VPN but the connection seems to be stable enough.

These seem to be the first lines that indicate an issue after everything has been going OK:

[07:50:02.439][VpnKitBridge      ][Info   ] Multiplexer main loop failed with The network connection was aborted by the local system.
[07:50:02.440][VpnKitBridge      ][Info   ] Event trace:
[07:50:02.440][VpnKitBridge      ][Info   ] send  366 Window 179256422
[07:50:02.440][VpnKitBridge      ][Info   ] send  3384 Window 178392613
[07:50:02.440][VpnKitBridge      ][Info   ] recv  37 Data length 250
[07:50:02.440][VpnKitBridge      ][Info   ] recv  37 Data length 214
[07:50:02.440][VpnKitBridge      ][Info   ] send  264729 Open Multiplexed Unix:docker
[07:50:02.440][VpnKitBridge      ][Info   ] send  264729 Window 65536
[07:50:02.440][VpnKitBridge      ][Info   ] recv  264729 Window 65536
[07:50:02.440][VpnKitBridge      ][Info   ] send  264729 Data length 4
[07:50:02.440][VpnKitBridge      ][Info   ] send  264729 Data length 252
[07:50:02.440][VpnKitBridge      ][Info   ] recv  37 Data length 185
[07:50:02.440][VpnKitBridge      ][Info   ] recv  37 Data length 177
[07:50:02.440][VpnKitBridge      ][Info   ] recv  37 Data length 151
[07:50:02.440][VpnKitBridge      ][Info   ] recv  264729 Data length 7151
[07:50:02.440][VpnKitBridge      ][Info   ] send  264729 Shutdown
[07:50:02.440][VpnKitBridge      ][Info   ] recv  264729 Shutdown
[07:50:02.440][VpnKitBridge      ][Info   ] recv  264729 Close
[07:50:02.440][VpnKitBridge      ][Info   ] send  264729 Close
[07:50:02.440][VpnKitBridge      ][Info   ] close 264729 -> Unix:docker

Then, after a while of that, I get this:

[07:50:02.459][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.459][ApiProxy          ][Info   ] error streaming response body from Docker:  unexpected EOF
[07:50:02.459][VpnKitBridge      ][Info   ] recv  264762 Data length 8652
[07:50:02.459][VpnKitBridge      ][Info   ] send  264762 Shutdown
[07:50:02.459][VpnKit            ][Error  ] vpnkit.exe: Vmnet.Server.listen: read EOF so closing connection
[07:50:02.460][VpnKit            ][Info   ] vpnkit.exe: Vmnet.Server.disconnect
[07:50:02.461][VpnKit            ][Info   ] vpnkit.exe: Vmnet.Server.listen returning Ok()
[07:50:02.461][VpnKit            ][Info   ] vpnkit.exe: TCP/IP stack disconnected
[07:50:02.460][ApiProxy          ][Info   ] error closing response body from Docker:  unexpected EOF
[07:50:02.459][VpnKitBridge      ][Info   ] recv  264762 Shutdown
[07:50:02.461][ApiProxy          ][Info   ] proxy << GET /v1.38/events?filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Dincharge%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D (13h50m26.8338805s)\n
[07:50:02.461][VpnKitBridge      ][Info   ] recv  264762 Close
[07:50:02.461][ApiProxy          ][Info   ] Cancel connection...
[07:50:02.461][VpnKitBridge      ][Info   ] send  264762 Close
[07:50:02.461][ApiProxy          ][Warning] ignored error: The handle is invalid.
[07:50:02.461][VpnKitBridge      ][Info   ] close 264762 -> Unix:docker
[07:50:02.460][GoBackendProcess  ][Warning] Resyncer volumes: while watching docker events: unexpected EOF
[07:50:02.461][VpnKitBridge      ][Info   ] send  264763 Open Multiplexed Unix:docker
[07:50:02.461][VpnKitBridge      ][Info   ] send  264763 Window 65536
[07:50:02.461][VpnKitBridge      ][Info   ] recv  37 Data length 162
[07:50:02.461][VpnKitBridge      ][Info   ] recv  37 Data length 143
[07:50:02.461][GoBackendProcess  ][Info   ] disconnected data connection: multiplexer is offline
[07:50:02.461][VpnKitBridge      ][Info   ] recv  37 Data length 502
[07:50:02.461][GoBackendProcess  ][Error  ] error accepting subconnection: accept: multiplexer is not running

Then it progresses to this:

[07:50:02.462][VpnKitBridge      ][Info   ] error copying: file has already been closed
[07:50:02.462][VpnKitBridge      ][Info   ] error CloseWrite to: An existing connection was forcibly closed by the remote host.
[07:50:02.462][VpnKitBridge      ][Info   ] error copying: file has already been closed
[07:50:02.462][VpnKitBridge      ][Info   ] error CloseWrite to: An existing connection was forcibly closed by the remote host.
[07:50:02.462][VpnKitBridge      ][Info   ] error copying: file has already been closed
[07:50:02.462][VpnKitBridge      ][Info   ] error CloseWrite to: An existing connection was forcibly closed by the remote host.
[07:50:02.462][VpnKitBridge      ][Info   ] error dialing remote endpoint filesystem-event: connection refused
[07:50:02.490][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/0cfadb0cd911de0652c233ebefa03cae77d557b4a9a81c327810c8228cda9ad1/wait\n
[07:50:02.490][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.490][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.491][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/c636333bfd9019f2700f9d05e0209855872c10fb92a486aa68a989a87b4ca00d/wait\n
[07:50:02.491][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.491][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.491][ApiProxy          ][Info   ] Cancel connection...
[07:50:02.491][ApiProxy          ][Warning] ignored error: The handle is invalid.
[07:50:02.492][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/94a2c5b6f69393d9a11e9968053e9861fd784e60988db189dfa477227641c4d6/wait\n
[07:50:02.492][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.492][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.494][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/0e228895cd888dfc6e84dfaa64fd0b9d4d72596d3166c6a0daa836c8b4d7ba5c/wait\n
[07:50:02.494][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.495][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.497][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/92836eff091e2d047ee107130b29fa5d77b998bda4d761976856dda2705193ab/wait\n
[07:50:02.497][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/c0f2340e6bb319ccd5cc918de58ce4b3f6388169da83acbf04c8c61bab8627e9/wait\n
[07:50:02.497][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.498][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.498][ApiProxy          ][Info   ] Cancel connection...
[07:50:02.498][ApiProxy          ][Warning] ignored error: The handle is invalid.
[07:50:02.498][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/f43fe061ddf2aebce4e0eaab3194c65ac6564dae553418061a7689795803213a/wait\n
[07:50:02.498][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.498][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.498][ApiProxy          ][Info   ] Cancel connection...
[07:50:02.498][ApiProxy          ][Warning] ignored error: The handle is invalid.
[07:50:02.499][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/5f6f47ca992055325c42dfd642179953ecdbd3f8b041b3e20ab753acb5601f35/wait\n
[07:50:02.499][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.499][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.501][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/83c72979f4e0d2ca5397d90c9584c1f736b77f063592cd6f8452e2ab7d3c42bf/wait\n
[07:50:02.501][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.501][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.501][ApiProxy          ][Info   ] Cancel connection...
[07:50:02.501][ApiProxy          ][Warning] ignored error: The handle is invalid.
[07:50:02.503][ApiProxy          ][Info   ] proxy >> POST /v1.38/containers/0f9bc65f3bd8d9ea57af60da5153fa421a4225990987ad3aa3a7d8d490116a23/wait\n
[07:50:02.503][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.503][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.507][VpnKitBridge      ][Info   ] error dialing remote endpoint docker: connection refused
[07:50:02.507][ApiProxy          ][Info   ] error reading response from Docker:  unexpected EOF
[07:50:02.599][VpnKitBridge      ][Info   ] Proxy wsl2-bootstrap-expose-ports: context is done before proxy is established
[07:50:02.599][VpnKitBridge      ][Info   ] Error: accept: multiplexer is not running
[07:50:02.599][VpnKitBridge      ][Fatal  ] accept: multiplexer is not running
[07:50:02.610][VpnKitBridge      ][Error  ] Process died
[07:50:02.690][ApiProxy          ][Error  ] Error closing raw stream to container: The pipe is being closed.
[07:50:02.690][ApiProxy          ][Error  ] Error closing raw stream to container: The pipe is being closed.
[07:50:02.690][ApiProxy          ][Error  ] Error closing raw stream to container: The pipe is being closed.

System info

❯ docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 200
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.76-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 5
 Total Memory: 7.757GiB
 Name: docker-desktop
 ID: 3EKN:AJSM:2AVQ:QOLT:4HUI:VWC2:45GG:3WGB:526P:6B6B:7YAM:MOGU
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 40
  Goroutines: 53
  System Time: 2020-08-31T07:28:07.672102247Z
  EventsListeners: 4
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
❯ [System.Environment]::OSVersion.Version

Major  Minor  Build  Revision
-----  -----  -----  --------
10     0      17763  0

❯ Get-ComputerInfo | select WindowsProductName, WindowsVersion, OsHardwareAbstractionLayer

WindowsProductName    WindowsVersion OsHardwareAbstractionLayer
------------------    -------------- --------------------------
Windows 10 Enterprise 1803           10.0.17763.1369

@tijntje

Any feedback on this? Did you manage to resolve it? We are having the same issue on a production system.

@vytautas-petrikas

@qootec

@tijntje: Same here… We stayed with Windows (mandatory in this specific setup), enabled Hyper-V, and put an Ubuntu 18.04.3 LTS Server VM in it. That Ubuntu runs Docker, and we never saw the problem again.

@kevindump

@qootec: Me too…
I run 2.4.0.0 (48506), and it still happens…

@ievgennaida

Same issue on Windows.

Unexpected API error for {container_name} (HTTP code 502)
Response body:
Bad response from Docker engine

Docker is configured to use 4 GB of RAM.

@sedovserge

Same issue on Windows Server 2019,
Docker Desktop 2.4.0.0.
[screenshot]

@igorbljahhin

I experienced the same issues with Docker Desktop during the last few months. They all disappeared after migrating to a new laptop. I still use Windows 10 and the same version of Docker Desktop, but now it is stable.

@cecini

On WSL 2 + Docker Desktop for Windows, using Remote-Containers' "Clone repository in container volume" with a devcontainer configured to build from a Dockerfile, this error appears when the container builds.
Watching the build, the WSL distribution (e.g. Ubuntu) runs out of memory. On WSL 2 with Docker Desktop, the image build runs inside a helper container, and that container can end up consuming all the resources of the default WSL distribution.
As far as I know, Remote-Containers has no way to set build options (note: options, not build args) for the image build (unfortunately not even for an image built inside a container), so the build will always fail.

As a workaround, does anyone know how to set the helper container's run args? If those could be set, the final image build should succeed.

For now I just use "Clone repository in container volume" to create a simple image and do some of the work inside the running container, moving the earlier build steps into it. (Note: on WSL 2, setting runArgs for a container does not help; you need to edit the wslconfig to expand memory and swap.)
So setting the wslconfig memory and swap large enough for the build's needs solves this issue.
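For reference, the wslconfig tweak described above is a `.wslconfig` file in the Windows user profile. The values below are illustrative assumptions, not the ones from the original comment; pick limits that fit your machine:

```ini
# %UserProfile%\.wslconfig — global settings for all WSL 2 distributions.
# Apply with `wsl --shutdown`, then restart Docker Desktop.
[wsl2]
memory=8GB   # cap on RAM available to the WSL 2 VM
swap=16GB    # enlarge swap so heavy image builds don't OOM the distribution
```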

@kevindump

I still have the problem on 2.5 with Windows Server 2019…
But I set up another server running an older Docker, and it has run fine for more than 7 days.
Maybe the problem is specific to Windows Server 2019?

@dwpramik

I’ve also seen similar behavior.

Windows Server 2019
Docker Desktop 3.0 (also saw this on 2.5)
Docker resources: 8 GB memory, 4 GB swap

Containers will be up for anywhere from 1 hour to 24 hours, and then they just go down. Any docker command results in:

docker: error response from daemon: open \\.\pipe\docker_engine_linux: the system cannot find the file specified

I have to fully restart Docker Desktop to get it working again.

@ScottGuymer

I can’t be 100% sure this fixed it, but I was advised to re-seat all the memory modules in my laptop.

I did this and have not seen this issue recently.

@aeweiwi

I am having the same problem on Windows Server 2016 as well. Is there any solution?

@docker-robott

Issues go stale after 90 days of inactivity.
Mark the issue as fresh with /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30 days of inactivity.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@ievgennaida

The problem went away after migrating to WSL 2.

@jankap

/remove-lifecycle stale

But the problem is still there with Linux on Hyper-V. We have no way to update to WSL 2, since we are on Windows Server 2019 (1809) and company rules do not permit any updates.

Newest Docker version.

What can we do?

Can this issue be reopened, please? It’s certainly not solved.

@ievgennaida

@jankap I guess the only way is to open a new issue and link to this one.

@docker-robott

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker locked and limited conversation to collaborators on Jun 3, 2021.
