How can I access an Ollama model hosted remotely on Google Colab from a local Python script?


I am able to run Ollama successfully on Google Colab with the code below, and I can access it from my local terminal with:

export OLLAMA_HOST=https://{url}.ngrok-free.app/
ollama run llama2

from google.colab import userdata
NGROK_AUTH_TOKEN = userdata.get('NGROK_AUTH_TOKEN')

# Download and install ollama to the system
!curl https://ollama.ai/install.sh | sh

!pip install aiohttp pyngrok

import os
import asyncio

# Set LD_LIBRARY_PATH so the system NVIDIA libraries are picked up
os.environ.update({'LD_LIBRARY_PATH': '/usr/lib64-nvidia'})

async def run_process(cmd):
  print('>>> starting', *cmd)
  p = await asyncio.subprocess.create_subprocess_exec(
      *cmd,
      stdout=asyncio.subprocess.PIPE,
      stderr=asyncio.subprocess.PIPE,
  )

  async def pipe(lines):
    async for line in lines:
      print(line.strip().decode('utf-8'))

  await asyncio.gather(
      pipe(p.stdout),
      pipe(p.stderr),
  )

# Register an account at ngrok.com, create an authtoken, and store it in Colab secrets as NGROK_AUTH_TOKEN
await asyncio.gather(
    run_process(['ngrok', 'config', 'add-authtoken', NGROK_AUTH_TOKEN])
)

await asyncio.gather(
    run_process(['ollama', 'serve']),
    run_process(['ngrok', 'http', '--log', 'stderr', '11434']),
)
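
For reference, the public https://{url}.ngrok-free.app address has to be copied from the ngrok log output that the code above prints. Since pyngrok is installed anyway, a rough alternative sketch (not part of the original setup) would open the tunnel from Python in place of the run_process(['ngrok', ...]) call and print the URL directly:

from pyngrok import ngrok

# Open an HTTP tunnel to the local Ollama port and print its public URL
ngrok.set_auth_token(NGROK_AUTH_TOKEN)
tunnel = ngrok.connect(11434)
print("Public URL:", tunnel.public_url)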

However, I am unable to reach the Ollama instance on Google Colab from a locally executed Python script. This is the code I tried:

import requests

# Ngrok tunnel URL
ngrok_tunnel_url = "https://{url}.ngrok-free.app/"

# Define the request payload
payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?"
}

try:
    # Send the request to the ngrok tunnel URL
    response = requests.post(ngrok_tunnel_url, json=payload)
    
    # Check the response status code
    if response.status_code == 200:
        print("Request successful:")
        print("Response content:")
        print(response.text)
    else:
        print("Error:", response.status_code)
except requests.exceptions.RequestException as e:
    print("Error:", e)

This returns:

Error: 403

Does anyone know how to fix this, or how to correctly access a remote Ollama instance?

python json google-colaboratory ollama
1 Answer

1. Set the host header to localhost:11434

I ran into the same problem from both the terminal and Python. Setting the flag

--host-header="localhost:11434"

on the ngrok command fixed both for me.

I think the 403 appears because incoming requests are not routed correctly by the tunnel unless the host header is rewritten. This was recently added to the Ollama FAQ.

Here is the updated server code:

await asyncio.gather(
    run_process(['ollama', 'serve']),
    # Rewrite the host header so Ollama accepts the tunneled requests.
    # No shell is involved here, so the value must not carry literal quotes.
    run_process(['ngrok', 'http', '--log', 'stderr', '11434', '--host-header=localhost:11434']),
)
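
On the client side, note that Ollama's generation endpoint is /api/generate, so the local script should post there rather than to the tunnel root. A minimal sketch of the local request (the {url} placeholder is whatever subdomain ngrok assigned to your tunnel):

import requests

# Replace {url} with the subdomain ngrok assigned to your tunnel
ngrok_tunnel_url = "https://{url}.ngrok-free.app"

payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "stream": False,  # ask for a single JSON response instead of a stream
}

# Post to Ollama's generate endpoint, not to the server root
response = requests.post(f"{ngrok_tunnel_url}/api/generate", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["response"])

With "stream": False the server returns one JSON object whose "response" field holds the full completion; without it, Ollama streams newline-delimited JSON chunks.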