In Colab, `ollama serve &` freezes the cell


I'm running the following shell commands in Colab:

!ollama serve &
!ollama run llama3

The output is:

2024/05/08 03:51:17 routes.go:989: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-08T03:51:17.536Z level=INFO source=images.go:897 msg="total blobs: 0"
time=2024-05-08T03:51:17.536Z level=INFO source=images.go:904 msg="total unused blobs removed: 0"
time=2024-05-08T03:51:17.538Z level=INFO source=routes.go:1034 msg="Listening on 127.0.0.1:11434 (version 0.1.34)"
time=2024-05-08T03:51:17.627Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2853183168/runners
time=2024-05-08T03:51:28.627Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v60002 cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-05-08T03:51:28.629Z level=INFO source=gpu.go:122 msg="Detecting GPUs"
time=2024-05-08T03:51:28.662Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

It freezes at

!ollama serve &

I thought the `&` would run the server in the background, so why does the cell freeze?

python jupyter-notebook artificial-intelligence google-colaboratory ollama
1 Answer

I found that the following works:

!ollama serve > server.log 2>&1 &
!ollama run llama3
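The redirection is the key part: with a bare `&`, the backgrounded process still inherits the cell's output pipe, and the notebook keeps reading from it, so the cell never returns. This is my understanding of the behavior, not something stated in the answer. A minimal sketch of the pattern, using `sleep` as a stand-in for `ollama serve`:

```shell
# "sleep 30" stands in for a long-running server such as "ollama serve".
# Sending stdout/stderr to server.log detaches the process from the cell's
# output pipe, so the shell (and the notebook cell) returns immediately.
sleep 30 > server.log 2>&1 &
SERVER_PID=$!

# $! holds the background PID; kill -0 only checks that the process is alive.
kill -0 "$SERVER_PID" && echo "running in background"
kill "$SERVER_PID"
```

Before the next line runs `ollama run llama3`, it can also help to wait until the server answers on its default port 11434 (for example by polling `http://127.0.0.1:11434` in a loop), since `ollama run` may otherwise race the still-starting server.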