Unable to dockerize a Node.js / SvelteKit app


I have a small, empty SvelteKit template project that runs fine locally, but when I try to dockerize it, the build fails.

Can anyone help me understand why the build output is not found?

Here is the Dockerfile:

FROM node:lts-slim AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN npx pnpm i --frozen-lockfile

FROM node:lts-slim AS builder
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN npx pnpm run build

FROM node:lts-slim AS runner
WORKDIR /app
ENV NODE_ENV production

RUN useradd -u 1001 -s /usr/sbin/nologin -m -d /home/sveltekit sveltekit
RUN groupadd nodejs && usermod -aG nodejs sveltekit
COPY --from=builder --chown=sveltekit:nodejs /app/build ./build
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json

USER sveltekit
EXPOSE 3000

CMD ["node", "./build"]

Here is the package.json file:

{
    "name": "sveltekit-docker",
    "version": "0.0.1",
    "private": true,
    "scripts": {
        "dev": "vite dev",
        "build": "vite build",
        "preview": "vite preview",
        "lint": "prettier --check . && eslint .",
        "format": "prettier --write ."
    },
    "devDependencies": {
        "@sveltejs/adapter-auto": "^3.0.0",
        "@sveltejs/kit": "^2.0.0",
        "@sveltejs/vite-plugin-svelte": "^3.0.0",
        "@types/eslint": "^8.56.0",
        "eslint": "^8.56.0",
        "eslint-config-prettier": "^9.1.0",
        "eslint-plugin-svelte": "^2.35.1",
        "prettier": "^3.1.1",
        "prettier-plugin-svelte": "^3.1.2",
        "svelte": "^4.2.7",
        "vite": "^5.0.3"
    },
    "type": "module"
}

Here is the error trace (partially redacted; the earlier steps, which produced no errors, are omitted):

bob ~/code/sveltekit_docker [main] $ docker build -t sd --no-cache .
[+] Building 8.9s (15/17)                                  docker:desktop-linux
 => [internal] load build definition from Dockerfile                       0.0s
... lines without errors omitted....
 => [deps 4/4] RUN npx pnpm i --frozen-lockfile                            4.1s
 => [builder 4/5] COPY --from=deps /app/node_modules ./node_modules        0.3s
 => [builder 5/5] RUN npx pnpm run build                                   3.1s
 => ERROR [runner 5/7] COPY --from=builder --chown=sveltekit:nodejs /app/  0.0s
------
[runner 5/7] COPY --from=builder --chown=sveltekit:nodejs /app/build ./build:
------
Dockerfile:18
--------------------
  16 |     RUN useradd -u 1001 -s /usr/sbin/nologin -m -d /home/sveltekit sveltekit
  17 |     RUN groupadd nodejs && usermod -aG nodejs sveltekit
  18 | >>> COPY --from=builder --chown=sveltekit:nodejs /app/build ./build
  19 |     COPY --from=builder /app/node_modules ./node_modules
  20 |     COPY --from=builder /app/package.json ./package.json
--------------------
ERROR: failed to solve: failed to compute cache key: failed to calculate checksum of ref 7b0a430b-35d6-4fc8-a6dd-db95ac242422::eeh419kyu0c4wj7k9gabjqe4w: "/app/build": not found
1 Answer

I have a SvelteKit TypeScript project managed by pnpm.

The project needs to use two REST APIs: a GET to /availablemodels to fetch the names of the embedding models and populate a dropdown, and a POST to /hybridsearch with the four required parameters in the body (search string, model name, alpha value, number of items to return) to trigger a hybrid search against the Weaviate backend database.
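For illustration only, here is a minimal TypeScript sketch of what that POST body might look like; the field names below are my assumptions, not the actual API contract:

// Hypothetical shape of the /hybridsearch request body (field names assumed, not from the question)
interface HybridSearchRequest {
    query: string;  // the search string
    model: string;  // embedding model name, as returned by GET /availablemodels
    alpha: number;  // weighting between keyword and vector search
    limit: number;  // number of items to return
}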

Here is a nicely layered, cacheable Dockerfile:

# Stage 1: Node with pnpm
FROM node:lts-bullseye-slim AS base
WORKDIR /app
RUN npm install -g pnpm

# Stage 2: Build Stage, install prerequisites
FROM base AS build
COPY package.json pnpm-lock.yaml ./
RUN pnpm install
COPY . .
RUN pnpm run build

# Stage 3: Production Stage
FROM base AS production
WORKDIR /app
COPY --from=build --chown=node:node /app/build ./build
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --prod
#  Runtime environment
ENV NODE_ENV=production
ENV MODELS_HOST=localhost
USER node
EXPOSE 3000

Stage 1 simply builds a base image from the slim variant of the long-term-support Node Docker image (only slightly larger than the Alpine variant, which has occasionally given me trouble) and installs pnpm, a faster and better alternative to npm.
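As a small variation on that stage, newer node:lts images bundle Corepack, so pnpm could alternatively be activated without a global npm install (an alternative sketch, not what the Dockerfile above does):

# Alternative Stage 1: activate pnpm via Corepack instead of npm install -g
FROM node:lts-bullseye-slim AS base
WORKDIR /app
RUN corepack enable && corepack prepare pnpm@latest --activate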

Stage 2 installs all the prerequisite modules, copies the whole project (minus whatever is excluded by .dockerignore), and runs the build, which produces the built project in /app/build.
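A typical .dockerignore for such a project might look roughly like this (an assumed example, not taken from the question):

# .dockerignore: keep host artifacts out of the build context
node_modules
build
.svelte-kit
.git
Dockerfile
docker-compose.yml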

Stage 3 copies the build onto the original base image, installs only the production dependencies, and sets a few runtime variables.
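Note that this Dockerfile deliberately has no CMD; the run command comes from docker compose (shown below). For a quick standalone smoke test you could pass the command explicitly, for example (image tag and port mapping borrowed from the compose file):

docker build -t skwhfe:v0.1 .
docker run --rm -p 5555:3000 skwhfe:v0.1 node build/index.js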

The Dockerfile only describes the recipe for building the container image. The actual run instructions are the responsibility of the docker-compose.yml file, of which I will copy only the relevant parts:

services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.24.11
    command:
      - "--host=0.0.0.0"
      - "--port=8080"
      - "--scheme=http"
    ports:
      - "8080:8080"
      - "50051:50051"
    volumes:
      - weaviate_data:/var/lib/weaviate
    restart: unless-stopped
    environment:
      LOG_LEVEL: info 
      ENABLE_CUDA: 0
      LIMIT_RESOURCES: true
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: true
      PERSISTENCE_DATA_PATH: /var/lib/weaviate
      CLUSTER_HOSTNAME: finland
      #ENABLE_MODULES: text2vec-cohere 
      #DEFAULT_VECTORIZER_MODULE: text2vec-transformers
      #DEFAULT_VECTORIZER_MODULE: text2vec-cohere
      #COHERE_APIKEY: N4K8WCmFYNLCR6YsGE5UvGinXko0KBH7nmOowfWB
      #TRANSFORMERS_INFERENCE_API: http://t2v-e5-mistral:8080
      DISABLE_TELEMETRY: true
      GOMAXPROCS: 4
    networks:
      - weaviate_net
  llm_fast:
    build:
      context: /home/mema/code/llm_fast
      dockerfile: Dockerfile
    image: llm_fast:v0.6
    environment:
      PYTHONPATH: "/app/app"
      HF_HUB_CACHE: "/home/llmuser/.cache/huggingface/hub"
    command: gunicorn -w 1 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8086 --timeout 300 app.vectorize:app
    ports:
      - "8086:8086"
    volumes:
      - llm_cache:/home/llmuser/.cache
    user: "1000:1000"
    networks:
      - weaviate_net

  aisearch:
    build:
      context: /home/mema/code/aisearch
      dockerfile: Dockerfile
    image: aisearch:v0.1
    command: gunicorn -w 1 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8088 --timeout 30 search:app
    ports:
      - "8088:8088"
    user: "1000:1000"
    networks:
      - weaviate_net
    restart: unless-stopped
    depends_on:
      - llm_fast
      - weaviate

  frontvect:
    build:
      context: /home/mema/code/skwhfe
      dockerfile: Dockerfile
    image: skwhfe:v0.1
    command: node /app/build/index.js
    ports:
      - "5555:3000"
    environment:
      - NODE_ENV=production
    networks:
      - weaviate_net
      - mema_network
    restart: unless-stopped
    depends_on:
      - llm_fast
      - aisearch
networks:
  weaviate_net:
    driver: bridge

volumes:
  weaviate_data:
    name: weaviate_mema
  llm_cache:

The frontvect service is built and run by docker compose from the Dockerfile in its code directory, and it runs the command "node /app/build/index.js", which starts the app produced by that Dockerfile under Node. Node exposes port 3000 inside the container, which is mapped to port 5555 on the host, so the browser should point there.

Also note that within a docker compose project, the other containers running backend services are reachable via hostnames equal to their service names. So inside the frontvect container you would GET the model names from http://llm_fast:8086 and POST for results to http://aisearch:8088. Inside a container, localhost refers only to ports within that container itself.
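As an illustration, server-side code inside the frontvect container could call the two backends like this (a sketch; the hostnames and ports come from the compose file above, while the body field names and response shapes are assumptions):

// Sketch of server-side calls from inside the frontvect container (Node 18+ global fetch)
async function fetchModels(): Promise<string[]> {
    const res = await fetch('http://llm_fast:8086/availablemodels');
    return res.json();
}

async function hybridSearch(query: string, model: string, alpha: number, limit: number) {
    const res = await fetch('http://aisearch:8088/hybridsearch', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query, model, alpha, limit })
    });
    return res.json();
}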

Note that to run a SvelteKit project this way you must use the Node adapter (@sveltejs/adapter-node). This is most likely also the cause of the error in the question: with @sveltejs/adapter-auto and no recognized deployment platform inside the Docker build, vite build does not emit a standalone Node server, so /app/build never exists.
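A minimal svelte.config.js for that setup could look like the sketch below, after adding the adapter with pnpm add -D @sveltejs/adapter-node (the out directory shown is the adapter's default and matches what both Dockerfiles copy):

import adapter from '@sveltejs/adapter-node';
import { vitePreprocess } from '@sveltejs/vite-plugin-svelte';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    preprocess: vitePreprocess(),
    kit: {
        // adapter-node writes a standalone server entry point to ./build/index.js
        adapter: adapter({ out: 'build' })
    }
};

export default config;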
