Testing aiohttp with Tortoise and PostgreSQL

Problem description

I run my project with a docker-compose file. The project itself is built on aiohttp, and the database is PostgreSQL. I want to test my application's endpoints against the database, and that is where I run into problems with testing.

version: '3'

services:

  # RabbitMQ
  rabbit:
    image: bitnami/rabbitmq:latest
    environment:
      RABBITMQ_USERNAME: $RABBITMQ_USERNAME
      RABBITMQ_PASSWORD: $RABBITMQ_PASSWORD
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - backend

  db:
    image: postgres
    environment:
      POSTGRES_USER: $NLAB_SOVA_SERVICE_DB_USER
      POSTGRES_PASSWORD: $NLAB_SOVA_SERVICE_DB_PASSWORD
      POSTGRES_DB: $NLAB_SOVA_SERVICE_DB_NAME
    ports:
      - 5432:5432
    networks:
      - backend


  web:
    build: .
    command: ['./init.sh', 'dev']
    depends_on:
      - rabbit
    volumes:
      - .:/code
    ports:
      - 8080:8080
    environment:
      NLAB_SOVA_SERVICE_DB_USER: $NLAB_SOVA_SERVICE_DB_USER
      NLAB_SOVA_SERVICE_DB_PASSWORD: $NLAB_SOVA_SERVICE_DB_PASSWORD
      NLAB_SOVA_SERVICE_DB_NAME: $NLAB_SOVA_SERVICE_DB_NAME
      NLAB_SOVA_SERVICE_DB_HOST: $NLAB_SOVA_SERVICE_DB_HOST
      NLAB_SOVA_SERVICE_DB_PORT: $NLAB_SOVA_SERVICE_DB_PORT
      RABBITMQ_USERNAME: $RABBITMQ_USERNAME
      RABBITMQ_PASSWORD: $RABBITMQ_PASSWORD
      NLAB_SOVA_SENTRY_DSN: $NLAB_SOVA_SENTRY_DSN
      NLAB_SOVA_VERSION: $NLAB_SOVA_VERSION
    networks:
      - backend

networks:
  backend: {}
  frontend: {}

I want to test my application's endpoints against the database, and this is where testing breaks down: as far as I can tell, the application fails to initialise the database during the test phase.

I also can't use sqlite in the tests, as the example in the official documentation does, because that database doesn't support some of the features I need.

To run the tests I use pytest.

The test itself looks like this:

from aiohttp import web
import nest_asyncio
from aiohttp.test_utils import AioHTTPTestCase, unittest_run_loop
from tortoise.contrib.test import initializer, finalizer
import config
from routes import routes


class TestInitEndpoint(AioHTTPTestCase):
    """Endpoint tests that need a real PostgreSQL test database."""
    async def setUpAsync(self) -> None:
        initializer(["models"], db_url=config.DB_TEST_URL, loop=self.loop)

    async def tearDownAsync(self) -> None:
        finalizer()

    async def get_application(self):
        nest_asyncio.apply()
        app = web.Application()
        app.add_routes(routes)
        return app

    @unittest_run_loop
    async def test_init_withoid_ciud(self):
        self.assertEqual(1, 1)
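
As an aside, one way to let the application itself bring the database up on startup is Tortoise's aiohttp helper. A minimal sketch, assuming the same models module and config.DB_TEST_URL as in the test above:

from aiohttp import web
from tortoise.contrib.aiohttp import register_tortoise

import config
from routes import routes


def create_app() -> web.Application:
    app = web.Application()
    app.add_routes(routes)
    # Opens the Tortoise connections on app startup and closes them on cleanup.
    register_tortoise(
        app,
        db_url=config.DB_TEST_URL,
        modules={"models": ["models"]},
        generate_schemas=True,
    )
    return app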

The DB_TEST_URL string is built as follows:

DB_TEST_URL = f"postgres://{NLAB_SOVA_SERVICE_DB_USER}:{NLAB_SOVA_SERVICE_DB_PASSWORD}@{NLAB_SOVA_SERVICE_DB_HOST}"\
    f":{NLAB_SOVA_SERVICE_DB_PORT}/{NLAB_SOVA_SERVICE_DB_NAME}"

I also tried naming the database with the test_ prefix used in the official documentation, like this:

DB_TEST_URL = f"postgres://{NLAB_SOVA_SERVICE_DB_USER}:{NLAB_SOVA_SERVICE_DB_PASSWORD}@{NLAB_SOVA_SERVICE_DB_HOST}"\
    f":{NLAB_SOVA_SERVICE_DB_PORT}/test_{NLAB_SOVA_SERVICE_DB_NAME}"

With either of the two URLs above, I get the same error:

root@cb20c8fb0bac:/code# pytest tests
============================================================================================================ test session starts =============================================================================================================
platform linux -- Python 3.8.3, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /code
plugins: aiohttp-0.3.0
collected 1 item                                                                                                                                                                                                                             

tests/test_init.py F                                                                                                                                                                                                                   [100%]

================================================================================================================== FAILURES ==================================================================================================================
__________________________________________________________________________________________________ TestInitEndpoint.test_init_withoid_ciud ___________________________________________________________________________________________________

self = <tortoise.backends.asyncpg.client.AsyncpgDBClient object at 0x7fa59e7997c0>, with_db = False

    async def create_connection(self, with_db: bool) -> None:
        self._template = {
            "host": self.host,
            "port": self.port,
            "user": self.user,
            "database": self.database if with_db else None,
            "min_size": self.pool_minsize,
            "max_size": self.pool_maxsize,
            **self.extra,
        }
        if self.schema:
            self._template["server_settings"] = {"search_path": self.schema}
        try:
>           self._pool = await asyncpg.create_pool(None, password=self.password, **self._template)

/usr/local/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py:90: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <asyncpg.pool.Pool object at 0x7fa59e152040>

    async def _async__init__(self):
        if self._initialized:
            return
        if self._initializing:
            raise exceptions.InterfaceError(
                'pool is being initialized in another task')
        if self._closed:
            raise exceptions.InterfaceError('pool is closed')
        self._initializing = True
        try:
>           await self._initialize()

/usr/local/lib/python3.8/site-packages/asyncpg/pool.py:398: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <asyncpg.pool.Pool object at 0x7fa59e152040>

    async def _initialize(self):
        self._queue = asyncio.LifoQueue(maxsize=self._maxsize)
        for _ in range(self._maxsize):
            ch = PoolConnectionHolder(
                self,
                max_queries=self._max_queries,
                max_inactive_time=self._max_inactive_connection_lifetime,
                setup=self._setup)

            self._holders.append(ch)
            self._queue.put_nowait(ch)

        if self._minsize:
            # Since we use a LIFO queue, the first items in the queue will be
            # the last ones in `self._holders`.  We want to pre-connect the
            # first few connections in the queue, therefore we want to walk
            # `self._holders` in reverse.

            # Connect the first connection holder in the queue so that it
            # can record `_working_addr` and `_working_opts`, which will
            # speed up successive connection attempts.
            first_ch = self._holders[-1]  # type: PoolConnectionHolder
>           await first_ch.connect()

/usr/local/lib/python3.8/site-packages/asyncpg/pool.py:426: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <asyncpg.pool.PoolConnectionHolder object at 0x7fa59e1362c0>

    async def connect(self):
        if self._con is not None:
            raise exceptions.InternalClientError(
                'PoolConnectionHolder.connect() called while another '
                'connection already exists')

>       self._con = await self._pool._get_new_connection()

/usr/local/lib/python3.8/site-packages/asyncpg/pool.py:125: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <asyncpg.pool.Pool object at 0x7fa59e152040>

    async def _get_new_connection(self):
        if self._working_addr is None:
            # First connection attempt on this pool.
>           con = await connection.connect(
                *self._connect_args,
                loop=self._loop,
                connection_class=self._connection_class,
                **self._connect_kwargs)

/usr/local/lib/python3.8/site-packages/asyncpg/pool.py:468: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

dsn = None

    async def connect(dsn=None, *,
                      host=None, port=None,
                      user=None, password=None, passfile=None,
                      database=None,
                      loop=None,
                      timeout=60,
                      statement_cache_size=100,
                      max_cached_statement_lifetime=300,
                      max_cacheable_statement_size=1024 * 15,
                      command_timeout=None,
                      ssl=None,
                      connection_class=Connection,
                      server_settings=None):
        r"""A coroutine to establish a connection to a PostgreSQL server.

        The connection parameters may be specified either as a connection
        URI in *dsn*, or as specific keyword arguments, or both.
        If both *dsn* and keyword arguments are specified, the latter
        override the corresponding values parsed from the connection URI.
        The default values for the majority of arguments can be specified
        using `environment variables <postgres envvars>`_.

        Returns a new :class:`~asyncpg.connection.Connection` object.

        :param dsn:
            Connection arguments specified using as a single string in the
            `libpq connection URI format`_:
            ``postgres://user:password@host:port/database?option=value``.
            The following options are recognized by asyncpg: host, port,
            user, database (or dbname), password, passfile, sslmode.
            Unlike libpq, asyncpg will treat unrecognized options
            as `server settings`_ to be used for the connection.

        :param host:
            Database host address as one of the following:

            - an IP address or a domain name;
            - an absolute path to the directory containing the database
              server Unix-domain socket (not supported on Windows);
            - a sequence of any of the above, in which case the addresses
              will be tried in order, and the first successful connection
              will be returned.

            If not specified, asyncpg will try the following, in order:

            - host address(es) parsed from the *dsn* argument,
            - the value of the ``PGHOST`` environment variable,
            - on Unix, common directories used for PostgreSQL Unix-domain
              sockets: ``"/run/postgresql"``, ``"/var/run/postgresl"``,
              ``"/var/pgsql_socket"``, ``"/private/tmp"``, and ``"/tmp"``,
            - ``"localhost"``.

        :param port:
            Port number to connect to at the server host
            (or Unix-domain socket file extension).  If multiple host
            addresses were specified, this parameter may specify a
            sequence of port numbers of the same length as the host sequence,
            or it may specify a single port number to be used for all host
            addresses.

            If not specified, the value parsed from the *dsn* argument is used,
            or the value of the ``PGPORT`` environment variable, or ``5432`` if
            neither is specified.

        :param user:
            The name of the database role used for authentication.

            If not specified, the value parsed from the *dsn* argument is used,
            or the value of the ``PGUSER`` environment variable, or the
            operating system name of the user running the application.

        :param database:
            The name of the database to connect to.

            If not specified, the value parsed from the *dsn* argument is used,
            or the value of the ``PGDATABASE`` environment variable, or the
            operating system name of the user running the application.

        :param password:
            Password to be used for authentication, if the server requires
            one.  If not specified, the value parsed from the *dsn* argument
            is used, or the value of the ``PGPASSWORD`` environment variable.
            Note that the use of the environment variable is discouraged as
            other users and applications may be able to read it without needing
            specific privileges.  It is recommended to use *passfile* instead.

        :param passfile:
            The name of the file used to store passwords
            (defaults to ``~/.pgpass``, or ``%APPDATA%\postgresql\pgpass.conf``
            on Windows).

        :param loop:
            An asyncio event loop instance.  If ``None``, the default
            event loop will be used.

        :param float timeout:
            Connection timeout in seconds.

        :param int statement_cache_size:
            The size of prepared statement LRU cache.  Pass ``0`` to
            disable the cache.

        :param int max_cached_statement_lifetime:
            The maximum time in seconds a prepared statement will stay
            in the cache.  Pass ``0`` to allow statements be cached
            indefinitely.

        :param int max_cacheable_statement_size:
            The maximum size of a statement that can be cached (15KiB by
            default).  Pass ``0`` to allow all statements to be cached
            regardless of their size.

        :param float command_timeout:
            The default timeout for operations on this connection
            (the default is ``None``: no timeout).

        :param ssl:
            Pass ``True`` or an `ssl.SSLContext <SSLContext_>`_ instance to
            require an SSL connection.  If ``True``, a default SSL context
            returned by `ssl.create_default_context() <create_default_context_>`_
            will be used.

        :param dict server_settings:
            An optional dict of server runtime parameters.  Refer to
            PostgreSQL documentation for
            a `list of supported options <server settings>`_.

        :param Connection connection_class:
            Class of the returned connection object.  Must be a subclass of
            :class:`~asyncpg.connection.Connection`.

        :return: A :class:`~asyncpg.connection.Connection` instance.

        Example:

        .. code-block:: pycon

            >>> import asyncpg
            >>> import asyncio
            >>> async def run():
            ...     con = await asyncpg.connect(user='postgres')
            ...     types = await con.fetch('SELECT * FROM pg_type')
            ...     print(types)
            ...
            >>> asyncio.get_event_loop().run_until_complete(run())
            [<Record typname='bool' typnamespace=11 ...

        .. versionadded:: 0.10.0
           Added ``max_cached_statement_use_count`` parameter.

        .. versionchanged:: 0.11.0
           Removed ability to pass arbitrary keyword arguments to set
           server settings.  Added a dedicated parameter ``server_settings``
           for that.

        .. versionadded:: 0.11.0
           Added ``connection_class`` parameter.

        .. versionadded:: 0.16.0
           Added ``passfile`` parameter
           (and support for password files in general).

        .. versionadded:: 0.18.0
           Added ability to specify multiple hosts in the *dsn*
           and *host* arguments.

        .. _SSLContext: https://docs.python.org/3/library/ssl.html#ssl.SSLContext
        .. _create_default_context:
            https://docs.python.org/3/library/ssl.html#ssl.create_default_context
        .. _server settings:
            https://www.postgresql.org/docs/current/static/runtime-config.html
        .. _postgres envvars:
            https://www.postgresql.org/docs/current/static/libpq-envars.html
        .. _libpq connection URI format:
            https://www.postgresql.org/docs/current/static/\
            libpq-connect.html#LIBPQ-CONNSTRING
        """
        if not issubclass(connection_class, Connection):
            raise TypeError(
                'connection_class is expected to be a subclass of '
                'asyncpg.Connection, got {!r}'.format(connection_class))

        if loop is None:
            loop = asyncio.get_event_loop()

>       return await connect_utils._connect(
            loop=loop, timeout=timeout, connection_class=connection_class,
            dsn=dsn, host=host, port=port, user=user,
            password=password, passfile=passfile,
            ssl=ssl, database=database,
            server_settings=server_settings,
            command_timeout=command_timeout,
            statement_cache_size=statement_cache_size,
            max_cached_statement_lifetime=max_cached_statement_lifetime,
            max_cacheable_statement_size=max_cacheable_statement_size)

/usr/local/lib/python3.8/site-packages/asyncpg/connection.py:1668: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

loop = <_UnixSelectorEventLoop running=False closed=False debug=False>, timeout = 59.998112866000156, connection_class = <class 'asyncpg.connection.Connection'>
kwargs = {'command_timeout': None, 'database': None, 'dsn': None, 'host': 'db', ...}, addrs = [('db', 5432)]
params = ConnectionParameters(user='dbuser', password='db_password', database='dbuser', ssl=None, ssl_is_advisory=None, connect_timeout=60, server_settings=None)
config = ConnectionConfiguration(command_timeout=None, statement_cache_size=100, max_cached_statement_lifetime=300, max_cacheable_statement_size=15360), last_error = None

    async def _connect(*, loop, timeout, connection_class, **kwargs):
        if loop is None:
            loop = asyncio.get_event_loop()

        addrs, params, config = _parse_connect_arguments(timeout=timeout, **kwargs)

        last_error = None
        addr = None
        for addr in addrs:
            before = time.monotonic()
            try:
>               con = await _connect_addr(
                    addr=addr, loop=loop, timeout=timeout,
                    params=params, config=config,
                    connection_class=connection_class)

/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py:652: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    async def _connect_addr(*, addr, loop, timeout, params, config,
                            connection_class):
        assert loop is not None

        if timeout <= 0:
            raise asyncio.TimeoutError

        connected = _create_future(loop)
        proto_factory = lambda: protocol.Protocol(
            addr, connected, params, loop)

        if isinstance(addr, str):
            # UNIX socket
            assert not params.ssl
            connector = loop.create_unix_connection(proto_factory, addr)
        elif params.ssl:
            connector = _create_ssl_connection(
                proto_factory, *addr, loop=loop, ssl_context=params.ssl,
                ssl_is_advisory=params.ssl_is_advisory)
        else:
            connector = loop.create_connection(proto_factory, *addr)

        connector = asyncio.ensure_future(connector)
        before = time.monotonic()
        try:
            tr, pr = await asyncio.wait_for(
                connector, timeout=timeout)
        except asyncio.CancelledError:
            connector.add_done_callback(_close_leaked_connection)
            raise
        timeout -= time.monotonic() - before

        try:
            if timeout <= 0:
                raise asyncio.TimeoutError
>           await asyncio.wait_for(connected, timeout=timeout)

/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py:631: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

fut = <Future finished exception=InvalidCatalogNameError('database "dbuser" does not exist')>, timeout = 59.99906388099953

    async def wait_for(fut, timeout, *, loop=None):
        """Wait for the single Future or coroutine to complete, with timeout.

        Coroutine will be wrapped in Task.

        Returns result of the Future or coroutine.  When a timeout occurs,
        it cancels the task and raises TimeoutError.  To avoid the task
        cancellation, wrap it in shield().

        If the wait is cancelled, the task is also cancelled.

        This function is a coroutine.
        """
        if loop is None:
            loop = events.get_running_loop()
        else:
            warnings.warn("The loop argument is deprecated since Python 3.8, "
                          "and scheduled for removal in Python 3.10.",
                          DeprecationWarning, stacklevel=2)

        if timeout is None:
            return await fut

        if timeout <= 0:
            fut = ensure_future(fut, loop=loop)

            if fut.done():
                return fut.result()

            fut.cancel()
            raise exceptions.TimeoutError()

        waiter = loop.create_future()
        timeout_handle = loop.call_later(timeout, _release_waiter, waiter)
        cb = functools.partial(_release_waiter, waiter)

        fut = ensure_future(fut, loop=loop)
        fut.add_done_callback(cb)

        try:
            # wait until the future completes or the timeout
            try:
                await waiter
            except exceptions.CancelledError:
                fut.remove_done_callback(cb)
                fut.cancel()
                raise

            if fut.done():
>               return fut.result()

/usr/local/lib/python3.8/asyncio/tasks.py:483: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <Future finished exception=InvalidCatalogNameError('database "dbuser" does not exist')>

    def result(self):
        """Return the result this future represents.

        If the future has been cancelled, raises CancelledError.  If the
        future's result isn't yet available, raises InvalidStateError.  If
        the future is done and has an exception set, this exception is raised.
        """
        if self._state == _CANCELLED:
            raise exceptions.CancelledError
        if self._state != _FINISHED:
            raise exceptions.InvalidStateError('Result is not ready.')
        self.__log_traceback = False
        if self._exception is not None:
>           raise self._exception
E           asyncpg.exceptions.InvalidCatalogNameError: database "dbuser" does not exist

/usr/local/lib/python3.8/asyncio/futures.py:178: InvalidCatalogNameError

During handling of the above exception, another exception occurred:

self = <tests.test_init.TestInitEndpoint testMethod=test_init_withoid_ciud>

    async def setUpAsync(self) -> None:
>       initializer(["models"], db_url=config.DB_TEST_URL, loop=self.loop)

tests/test_init.py:14: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/local/lib/python3.8/site-packages/tortoise/contrib/test/__init__.py:116: in initializer
    loop.run_until_complete(_init_db(_CONFIG))
/usr/local/lib/python3.8/site-packages/nest_asyncio.py:59: in run_until_complete
    return f.result()
/usr/local/lib/python3.8/asyncio/futures.py:178: in result
    raise self._exception
/usr/local/lib/python3.8/asyncio/tasks.py:280: in __step
    result = coro.send(None)
/usr/local/lib/python3.8/site-packages/tortoise/contrib/test/__init__.py:74: in _init_db
    await Tortoise.init(config, _create_db=True)
/usr/local/lib/python3.8/site-packages/tortoise/__init__.py:555: in init
    await cls._init_connections(connections_config, _create_db)
/usr/local/lib/python3.8/site-packages/tortoise/__init__.py:384: in _init_connections
    await connection.db_create()
/usr/local/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py:114: in db_create
    await self.create_connection(with_db=False)


self = <tortoise.backends.asyncpg.client.AsyncpgDBClient object at 0x7fa59e7997c0>, with_db = False

    async def create_connection(self, with_db: bool) -> None:
        self._template = {
            "host": self.host,
            "port": self.port,
            "user": self.user,
            "database": self.database if with_db else None,
            "min_size": self.pool_minsize,
            "max_size": self.pool_maxsize,
            **self.extra,
        }
        if self.schema:
            self._template["server_settings"] = {"search_path": self.schema}
        try:
            self._pool = await asyncpg.create_pool(None, password=self.password, **self._template)
            self.log.debug("Created connection pool %s with params: %s", self._pool, self._template)
        except asyncpg.InvalidCatalogNameError:
>           raise DBConnectionError(f"Can't establish connection to database {self.database}")
E           tortoise.exceptions.DBConnectionError: Can't establish connection to database


FAILED tests/test_init.py::TestInitEndpoint::test_init_withoid_ciud - tortoise.exceptions.DBConnectionError: Can't establish connection to database test_dbname
1 failed, 17 warnings in 0.30s 
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7fa59e799400>

My question is simple: how do I test an aiohttp server that works with a PostgreSQL database, using Tortoise?


asyncpg==0.20.1
tortoise-orm==0.16.10
pytest-aiohttp==0.3.0

python postgresql aiohttp tortoise-orm
1 Answer

It looks like the real error is: asyncpg.exceptions.InvalidCatalogNameError: database "dbuser" does not exist

In your compose file you set POSTGRES_DB: $NLAB_SOVA_SERVICE_DB_NAME on the db service and pass the same value on to web as NLAB_SOVA_SERVICE_DB_NAME: $NLAB_SOVA_SERVICE_DB_NAME.

The question is: how do you inject those env vars into the db-url string? And is the database name you are using really dbuser?

So why can't it be reached? Are you connecting to the database you expect within the Docker environment? Remember that in Docker a network is only visible inside that cluster unless you expose it externally.

Where are you running the tests from?

To me this looks like an environment problem, because it clearly does connect to some database; beyond that, nothing I can see is obviously wrong. A quick check like the sketch below might help narrow it down.
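
For what it's worth, one way to see what the web container can actually reach is to list the databases on the server directly with asyncpg. A sketch, assuming the same NLAB_SOVA_* variables from the compose file; it connects to the postgres maintenance database rather than the application database:

# check_db.py -- run inside the web container, e.g. `python check_db.py`
import asyncio
import os

import asyncpg


async def main() -> None:
    conn = await asyncpg.connect(
        host=os.environ["NLAB_SOVA_SERVICE_DB_HOST"],
        port=int(os.environ.get("NLAB_SOVA_SERVICE_DB_PORT", "5432")),
        user=os.environ["NLAB_SOVA_SERVICE_DB_USER"],
        password=os.environ["NLAB_SOVA_SERVICE_DB_PASSWORD"],
        database="postgres",  # the maintenance DB, not the application DB
    )
    rows = await conn.fetch("SELECT datname FROM pg_database")
    print("Databases on this server:", [r["datname"] for r in rows])
    await conn.close()


asyncio.run(main())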
