I am trying to host my backend code on Vercel.
My model was trained using the scikit-learn module.
However, during deployment I ran into the error below.
I searched for the error but couldn't find any fix.
Vercel runtime logs:
LAMBDA_WARNING: Unhandled exception. The most likely cause is an issue in the function code. However, in rare cases, a Lambda runtime update can cause unexpected function behavior. For functions using managed runtimes, runtime updates can be triggered by a function change, or can be applied automatically. To determine if the runtime was updated, check the runtime version in the INIT_START log entry. If this error correlates with a change in the runtime version, you may be able to mitigate this error by temporarily rolling back to the previous runtime version. For more information, see https://docs.aws.amazon.com/lambda/latest/dg/runtimes-update.html
[ERROR] Runtime.ImportModuleError: Unable to import module 'vc__handler__python': No module named 'sklearn' Traceback (most recent call last):
Flask backend:
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS
import json
import random
import wikipedia
import re
import time
import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from joblib import load
app = Flask(__name__)
CORS(app)
# Load intents data from JSON file
intents = json.loads(open('intents.json').read())
# Load preprocessed data
words = load('words.pkl')
classes = load('classes.pkl')
nb_classifier = load('nb_classifier.joblib')
lemmatizer = WordNetLemmatizer()
# Function to clean up a sentence by tokenizing and lemmatizing its words
def clean_up_sentence(sentence):
    sentence_words = nltk.word_tokenize(sentence)
    sentence_words = [lemmatizer.lemmatize(word.lower()) for word in sentence_words]
    return sentence_words

# Function to convert a sentence into bag of words representation
def bow(sentence, words):
    sentence_words = clean_up_sentence(sentence)
    bag = [1 if lemmatizer.lemmatize(word.lower()) in sentence_words else 0 for word in words]
    return np.array(bag)

# Function to predict the intent class of a given sentence
def predict_class(sentence):
    p = bow(sentence, words)
    res = nb_classifier.predict(np.array([p]))[0]
    return_list = [{"intent": classes[res], "probability": "1"}]
    return return_list

# Function to get a response based on predicted intent
def get_response(ints, intents_json):
    tag = ints[0]['intent']
    list_of_intents = intents_json['intents']
    for i in list_of_intents:
        if i['tag'] == tag:
            result = random.choice(i['responses'])
            break
    return result

# Function to extract subject from a question
def extract_subject(question):
    punctuation_marks = ['.', ',', '!', '?', ':', ';', "'", '"', '(', ')', '[', ']', '-', '—', '...', '/', '\\', '&', '*', '%', '$', '#', '@', '+', '-', '=', '<', '>', '_', '|', '~', '^']
    for punctuation_mark in punctuation_marks:
        if punctuation_mark in question:
            question = question.replace(punctuation_mark, '')
    subject = ''
    words = question.split(' ')
    list_size = len(words)
    for i in range(list_size):
        if i > 1 and i != list_size:
            subject += words[i] + ' '
        elif i == list_size:
            subject += words[i]
    return subject

# Function to clean text by removing characters within parentheses
def clean_text(text):
    cleaned_text = re.sub(r'\([^()]*\)', '', text)
    cleaned_text = cleaned_text.strip()
    return cleaned_text

# Function to search Wikipedia for information based on a question
def search_wikipedia(question, num_sentences=2):
    try:
        subject = extract_subject(question)
        wiki_result = wikipedia.summary(subject, auto_suggest=False, sentences=num_sentences)
        return clean_text(wiki_result)
    except wikipedia.exceptions.PageError:
        return f"Sorry, I couldn't find information about {subject}."
    except wikipedia.exceptions.DisambiguationError as e:
        return f"Multiple matches found. Try being more specific: {', '.join(e.options)}"
    except Exception as e:
        return "Error, Something went wrong!"

# Function to get a response from the chatbot
def chatbot_response(text):
    ints = predict_class(text)
    res = get_response(ints, intents)
    return res

@app.route('/chat', methods=['POST'])
def chat():
    user_text = request.form['user_input']
    bot_response = chatbot_response(user_text)
    return jsonify({'response': bot_response})

if __name__ == '__main__':
    app.run()
Here is the GitHub repository containing the backend code: Github Rep
I am looking for guidance on how to resolve this issue.
Any insights or suggestions would be greatly appreciated. Thanks!
Given that the error states
[ERROR] Runtime.ImportModuleError: Unable to import module
'vc__handler__python': No module named 'sklearn'
Traceback (most recent call last):
and that you use a number of third-party packages, most likely you need a deployment package but haven't created one!
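As a side note, this is why `sklearn` can be required at runtime even though your Flask file never imports it: `joblib.load` unpickles the classifier, and unpickling re-imports the module that defines the stored object's class. The stdlib-only sketch below illustrates the mechanism, with `collections.Counter` standing in for your scikit-learn classifier:

```python
import pickle
import sys
from collections import Counter

# Serialize an object whose class is defined in another module.
blob = pickle.dumps(Counter("abracadabra"))

# Pretend we are in a fresh process where that module was never imported.
sys.modules.pop("collections", None)

# Unpickling looks up the class by its module path and re-imports the
# module on demand. Likewise, joblib.load('nb_classifier.joblib') needs
# scikit-learn installed in the deployment environment even though the
# Flask code contains no `import sklearn` statement.
restored = pickle.loads(blob)

print("collections" in sys.modules)  # the module was re-imported
print(restored.most_common(1))
```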
This is how to make a deployment package:

1. Add an empty folder `package` to your repository.
2. Create a `requirements.txt` file listing all of the function's dependencies. While your Flask code doesn't appear to import or need `sklearn` directly, make sure to list the dependencies of all the code you include or reference in your Lambda function.
3. `pip install --target ./package/ --requirement requirements.txt`
4. `zip -r ./my_deployment_package.zip ./package/  # don't forget the -r!`
5. Upload your deployment package in the Code Source pane (see the official instructions here if you get stuck).
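For the `requirements.txt` step, a minimal file for the backend shown above might look like the following. This is a sketch based on the imports in the question, not on the actual repository; add version pins matching your local environment if you want reproducible builds:

```text
flask
flask-cors
numpy
nltk
wikipedia
joblib
scikit-learn
```

Note that `scikit-learn` must be listed even though it is never imported in the source file, because `joblib.load('nb_classifier.joblib')` needs it to reconstruct the pickled classifier.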