
I am trying to make a multi-container docker app using docker-compose.

Here's what I am trying to accomplish: I have a Python 3 app that takes a list of lists of numbers as input from an API call (FastAPI with a Gunicorn server), passes the numbers to a function (an ML model, actually) that returns a number, and sends that number back (in JSON, of course) as the result of the API call. That part works absolutely fine. The problem started when I introduced a Postgres container to store the inputs I receive in a Postgres table; I have yet to add the part where I access this database from my local pgAdmin 4 app.

Here's what I have done so far: I am using a "docker-compose.yml" file to set up both of these containers, and here it is:

version: '3.8'

services: 
    postgres:
        image: postgres:12.4
        restart: always
        environment:
            - POSTGRES_USER=postgres
            - POSTGRES_PASSWORD=postgres_password
            - POSTGRES_DATABASE=postgres

    docker_fastapi:
        # use the Dockerfile in the current directory.
        build: .
        ports:
            # 3000 is what I send API calls to
            - "3000:3000"
            # this is postgres's port
            - "5432:5432"
        environment: 
            # these are the environment variables that I am using inside psycopg2 to make the connection.
            - POSTGRES_HOST=postgres
            - POSTGRES_PORT=5432
            - POSTGRES_USER=postgres
            - POSTGRES_PASSWORD=postgres_password
            - POSTGRES_DATABASE=postgres

Here's how I am using those environment variables in psycopg2:

import os
from psycopg2 import connect

# making the database connection using environment variables
connection = connect(host=os.environ['POSTGRES_HOST'], port=os.environ['POSTGRES_PORT'],
                     user=os.environ['POSTGRES_USER'], password=os.environ['POSTGRES_PASSWORD'],
                     database=os.environ['POSTGRES_DATABASE']
                     )

Here's the Dockerfile:

FROM tiangolo/uvicorn-gunicorn:python3.8-slim
# slim = debian-based. Not using alpine because it has poor python3 support.
LABEL maintainer="Sebastian Ramirez <[email protected]>"


RUN apt-get update
RUN apt-get install -y libpq-dev gcc
# copy and install from requirements.txt file
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
# remove all the dependency files to reduce the final image size
RUN apt-get autoremove -y gcc

# copying all the code files to the container's file system
COPY ./api /app/api

WORKDIR /app/api

EXPOSE 3000

ENTRYPOINT ["uvicorn"]

CMD ["api.main:app", "--host", "0.0.0.0", "--port", "3000"]

And here's the error it generates for an API call I send:

root@naveen-hp:/home/naveen/Videos/ML-Model-serving-with-fastapi-and-Docker# docker-compose up
Starting ml-model-serving-with-fastapi-and-docker_docker_fastapi_1 ... done
Starting ml-model-serving-with-fastapi-and-docker_postgres_1       ... done
Attaching to ml-model-serving-with-fastapi-and-docker_postgres_1, ml-model-serving-with-fastapi-and-docker_docker_fastapi_1
postgres_1        | 
postgres_1        | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1        | 
postgres_1        | 2020-10-22 13:17:14.080 UTC [1] LOG:  starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1        | 2020-10-22 13:17:14.080 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres_1        | 2020-10-22 13:17:14.080 UTC [1] LOG:  listening on IPv6 address "::", port 5432
postgres_1        | 2020-10-22 13:17:14.092 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1        | 2020-10-22 13:17:14.120 UTC [24] LOG:  database system was shut down at 2020-10-22 12:48:50 UTC
postgres_1        | 2020-10-22 13:17:14.130 UTC [1] LOG:  database system is ready to accept connections
docker_fastapi_1  | INFO:     Started server process [1]
docker_fastapi_1  | INFO:     Waiting for application startup.
docker_fastapi_1  | INFO:     Application startup complete.
docker_fastapi_1  | INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
docker_fastapi_1  | INFO:     172.18.0.1:56094 - "POST /predict HTTP/1.1" 500 Internal Server Error
docker_fastapi_1  | ERROR:    Exception in ASGI application
docker_fastapi_1  | Traceback (most recent call last):
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 391, in run_asgi
docker_fastapi_1  |     result = await app(self.scope, self.receive, self.send)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
docker_fastapi_1  |     return await self.app(scope, receive, send)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 179, in __call__
docker_fastapi_1  |     await super().__call__(scope, receive, send)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
docker_fastapi_1  |     await self.middleware_stack(scope, receive, send)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
docker_fastapi_1  |     raise exc from None
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
docker_fastapi_1  |     await self.app(scope, receive, _send)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
docker_fastapi_1  |     raise exc from None
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
docker_fastapi_1  |     await self.app(scope, receive, sender)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
docker_fastapi_1  |     await route.handle(scope, receive, send)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
docker_fastapi_1  |     await self.app(scope, receive, send)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
docker_fastapi_1  |     response = await func(request)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 182, in app
docker_fastapi_1  |     raw_response = await run_endpoint_function(
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 135, in run_endpoint_function
docker_fastapi_1  |     return await run_in_threadpool(dependant.call, **values)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
docker_fastapi_1  |     return await loop.run_in_executor(None, func, *args)
docker_fastapi_1  |   File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
docker_fastapi_1  |     result = self.fn(*self.args, **self.kwargs)
docker_fastapi_1  |   File "/app/api/main.py", line 83, in predict
docker_fastapi_1  |     insert_into_db(X)
docker_fastapi_1  |   File "/app/api/main.py", line 38, in insert_into_db
docker_fastapi_1  |     cursor.execute(f"INSERT INTO public.\"API_Test\""
docker_fastapi_1  | IndexError: index 1 is out of bounds for axis 0 with size 1


Here's how I am sending API calls:

curl -X POST "http://0.0.0.0:3000/predict" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"input_data\":[[
       1.354e+01, 1.436e+01, 8.746e+01, 5.663e+02, 9.779e-02, 8.129e-02,
       6.664e-02, 4.781e-02, 1.885e-01, 5.766e-02, 2.699e-01, 7.886e-01,
       2.058e+00, 2.356e+01, 8.462e-03, 1.460e-02, 2.387e-02, 1.315e-02,
       1.980e-02, 2.300e-03, 1.511e+01, 1.926e+01, 9.970e+01, 7.112e+02,
       1.440e-01, 1.773e-01, 2.390e-01, 1.288e-01, 2.977e-01, 7.259e-02]]}"
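For reference, the same call can be built from Python instead of curl; a minimal sketch (the `build_predict_request` helper is hypothetical, and the actual `requests.post` is commented out since it needs the running container):

```python
import json

def build_predict_request(rows):
    """Serialize a list of feature rows into the JSON body /predict expects."""
    return json.dumps({"input_data": rows})

body = build_predict_request([[1.354e+01, 1.436e+01] + [0.0] * 28])  # placeholder row
# import requests
# requests.post("http://0.0.0.0:3000/predict", data=body,
#               headers={"Content-Type": "application/json"})
```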

This works just as expected when I build it without the second Postgres container, pointing instead at a Postgres instance on AWS RDS, with the credentials specified directly inside psycopg2.connect() (no environment variables or docker-compose; built directly from the Dockerfile shown above). So my code for inserting the received data into Postgres is presumably fine, and the problems started when I introduced the second container. What causes errors like these, and how do I fix them?


2 Answers


You have to add the network and depends_on flags. Try this:

version: '3.8'

services: 
    postgres:
        image: postgres:12.4
        restart: always
        environment:
            - POSTGRES_USER=postgres
            - POSTGRES_PASSWORD=postgres_password
            - POSTGRES_DB=postgres
        networks:
            - default

    docker_fastapi:
        # use the Dockerfile in the current directory.
        build: .
        ports:
            # 3000 is what I send API calls to
            - "3000:3000"
            # this is postgres's port
            # no need for this
            # - "5432:5432"
        networks:
            - default
        depends_on:
            - postgres
        # no need for this
        # environment: 
            # these are the environment variables that I am using inside psycopg2 to make the connection.
            # - POSTGRES_HOST=postgres
            # - POSTGRES_PORT=5432
            # - POSTGRES_USER=postgres
            # - POSTGRES_PASSWORD=postgres_password
            # - POSTGRES_DATABASE=postgres
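One caveat worth noting: depends_on only controls start order; it does not wait for Postgres to actually accept connections, so the app may still need to retry on startup. A minimal retry sketch (the `connect_with_retry` wrapper is hypothetical, standing in front of the question's psycopg2.connect call):

```python
import time

def connect_with_retry(connect_fn, attempts=10, delay=2.0):
    """Call connect_fn until it succeeds; psycopg2 raises OperationalError
    while Postgres is still starting up, so retry a few times before giving up."""
    last_exc = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except Exception as exc:  # psycopg2.OperationalError in practice
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# usage, mirroring the question's call:
# connection = connect_with_retry(
#     lambda: connect(host=os.environ['POSTGRES_HOST'], ...))
```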

9 Comments

This is the error: root@naveen-hp:/home/naveen/Videos/ML-Model-serving-with-fastapi-and-Docker# docker-compose up ERROR: Service 'postgres' depends on service 'db' which is undefined.
There's another problem, which arose because you suggested removing the environment variables POSTGRES_HOST and POSTGRES_PORT. So how do I specify them in psycopg2.connect()?
It is better to handle them in code; that's what I used to do, but there's no problem keeping them. Take a look at python-dotenv.
Yes, but you have to define those variables in .env; take a look at python-dotenv.
Yes, but python-dotenv loads variables from a .env file instead of writing them explicitly in the Dockerfile or config files. .env files make loading environment variables easier and more flexible between environments. After defining those variables in the .env file, you can use getenv or another function to read them.
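To make the python-dotenv suggestion concrete: what its load_dotenv() does is roughly the following (a hand-rolled stand-in sketch, not the library itself):

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv():
    parse KEY=VALUE lines and put them into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # like load_dotenv's default, don't clobber variables already set
            os.environ.setdefault(key.strip(), value.strip())
```

With the real library this is simply `from dotenv import load_dotenv; load_dotenv()`, after which `os.getenv("POSTGRES_USER")` works as usual.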

The problem originated because the word postgres was used in too many places inside the "docker-compose.yml" file.

With the help of alim91's answer, and my own realisation, here's what's working, if anyone needs it:

version: '3.8'

services: 
    postgres_instance:
        image: postgres:12.4
        # to expose postgres to the local machine and monitor it in something like pgAdmin
        ports: 
            - "5432:5432"
        restart: unless-stopped
        # to persist data if containers are stopped and resumed
        volumes:
            - ./postgres-data:/var/lib/postgresql/data
        environment:
            - POSTGRES_USER=postgres
            - POSTGRES_PASSWORD=postgres_password
            - POSTGRES_DB=postgres
        networks:
            - default

    docker_fastapi:
        # using Dockerfile in current directory
        build: .
        # port I send API calls to
        ports:
            - "3000:3000"
        restart: always
        depends_on: 
            - postgres_instance
        networks:
            - default
        # these environment variables must be specified here, to be usable from the .py file inside this container
        environment: 
            - POSTGRES_HOST=postgres_instance
            - POSTGRES_PORT=5432
            - POSTGRES_USER=postgres
            - POSTGRES_PASSWORD=postgres_password
            - POSTGRES_DB=postgres

As can be seen, changing the service name from postgres to postgres_instance fixed everything. Presumably, the name postgres was ambiguous: it actually referred to where the database was hosted (the service's DNS name on the compose network), but the same tag was also being used as the user name and the database name.
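One detail to keep in mind: since the compose file above uses POSTGRES_DB rather than POSTGRES_DATABASE, the Python side has to read the same variable name. A small sketch of how the connection settings could be collected (the `pg_settings` helper is hypothetical):

```python
import os

def pg_settings():
    """Collect the connection settings injected by the compose file above.
    Note POSTGRES_DB here, matching the renamed compose variable."""
    return {
        "host": os.environ["POSTGRES_HOST"],
        "port": int(os.environ["POSTGRES_PORT"]),
        "user": os.environ["POSTGRES_USER"],
        "password": os.environ["POSTGRES_PASSWORD"],
        "database": os.environ["POSTGRES_DB"],
    }

# connection = connect(**pg_settings())  # psycopg2.connect, as in the question
```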

