Python Flask web API on Heroku: works locally, but times out when deployed

Problem description — Votes: 0, Answers: 1

I'm building a Flutter mobile app that uses an ML model. The app sends a file to a Flask API on a Heroku server, which extracts the features in Python and sends them back to the app. My model uses scikit-learn's StandardScaler to scale the data, so I recently changed the pipeline to export the fitted scaler for use in my API, so that the features are scaled the same way as the training dataset. A previous version of this API worked fine, and the new version also works fine when I demo it on my local machine. But for some reason, whenever I make a request to the server from the mobile app, it always times out.
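For context, the "export the scaler" step described above could look like the sketch below. This is an assumption about the training side (the question doesn't show that code): a StandardScaler fitted on the training feature matrix is serialized with joblib so the API can later restore the identical transform.

```python
import joblib
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical training-side step: X_train stands in for the real
# feature matrix (26 columns: 6 summary features + 20 MFCC means).
X_train = np.random.rand(100, 26)
scaler = StandardScaler().fit(X_train)

# Serialize to the file the API ships as `scaler`.
joblib.dump(scaler, "scaler")

# The API side restores the exact same transform:
restored = joblib.load("scaler")
assert np.allclose(restored.mean_, scaler.mean_)
```

Exporting the fitted scaler (rather than re-fitting on each request's data) is what guarantees the server applies the same per-feature means and scales as the training set.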

The web app itself is up, and there are no errors in the logs. I am POSTing the request to the URL

https://tunetracer-featureextraction-d0dee5876f1e.herokuapp.com:5000/extract_features

API directory

app/
  |-features.py
  |-Procfile
  |-requirements.txt
  |-scaler

Procfile

web: gunicorn --bind 0.0.0.0:$PORT features:app

features.py

from flask import Flask, request, jsonify
import librosa
import numpy as np
import joblib
from sklearn.preprocessing import StandardScaler
import os

app = Flask(__name__)

@app.route('/extract_features', methods=['POST'])
def extract_features():
    # Check if the POST request has the file part
    if 'file' not in request.files:
        return jsonify({'error': 'No file part'})

    file = request.files['file']

    # If the user does not select a file, the browser may
    # submit an empty part without a filename
    if file.filename == '':
        return jsonify({'error': 'No selected file'})

    if file:

        try:
            scaler = joblib.load('scaler')
            y, sr = librosa.load(file, mono=True, duration=30)
            chroma_stft = librosa.feature.chroma_stft(y=y, sr=sr)
            rmse = librosa.feature.rms(y=y)
            spec_cent = librosa.feature.spectral_centroid(y=y, sr=sr)
            spec_bw = librosa.feature.spectral_bandwidth(y=y, sr=sr)
            rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
            zcr = librosa.feature.zero_crossing_rate(y)
            mfcc = librosa.feature.mfcc(y=y, sr=sr)
            # collect the summary features into one row
            # (renamed from `list` to avoid shadowing the built-in)
            row = [np.mean(chroma_stft), np.mean(rmse), np.mean(spec_cent),
                   np.mean(spec_bw), np.mean(rolloff), np.mean(zcr)]
            row += [np.mean(e) for e in mfcc]

            # scale the row; transform expects a 2-D array
            scaled = scaler.transform([row])[0]

            # change floats to strings for the JSON response
            feature_list = [str(f) for f in scaled]
            
            #send list back to app
            return jsonify({'features': feature_list})
        except Exception as e:
            return jsonify({'error': str(e)})

port = int(os.environ.get("PORT", 5000))
if __name__ == '__main__':
    app.run(debug=True, port=port, host='0.0.0.0')
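One thing worth noting about the code above: `scaler.transform` expects a 2-D array whose column count matches the data the scaler was fitted on — here 6 summary features plus 20 MFCC means (librosa's default `n_mfcc=20`), i.e. 26 columns. A minimal numpy sketch of what the transform computes, with made-up statistics standing in for the fitted scaler's `mean_` and `scale_` attributes:

```python
import numpy as np

# Hypothetical per-column statistics (26 columns); the real values
# come from the scaler fitted on the training dataset.
mean = np.full(26, 0.5)
scale = np.full(26, 2.0)

row = np.ones((1, 26))            # one track's feature row, shape (1, 26)
scaled = (row - mean) / scale     # what StandardScaler.transform computes
assert scaled.shape == (1, 26)
assert np.allclose(scaled, 0.25)  # (1 - 0.5) / 2
```

If the request's feature row had a different column count than the training data (e.g. a different `n_mfcc`), `transform` would raise a shape error rather than silently mis-scale.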

requirements.txt

Flask==3.0.2
gunicorn==21.2.0
joblib==1.3.2
librosa==0.10.1
numpy==1.26.4
sklearn-preprocessing==0.1.0
StandardScaler==0.5

Logs

2024-04-11T18:34:09.432238+00:00 heroku[web.1]: State changed from crashed to starting
2024-04-11T18:34:25.336192+00:00 heroku[web.1]: Starting process with command `gunicorn --bind 0.0.0.0:39275 features:app`
2024-04-11T18:34:26.050154+00:00 app[web.1]: Python buildpack: Detected 512 MB available memory and 8 CPU cores.
2024-04-11T18:34:26.050248+00:00 app[web.1]: Python buildpack: Defaulting WEB_CONCURRENCY to 2 based on the available memory.
2024-04-11T18:34:26.267579+00:00 app[web.1]: [2024-04-11 18:34:26 +0000] [2] [INFO] Starting gunicorn 21.2.0
2024-04-11T18:34:26.267900+00:00 app[web.1]: [2024-04-11 18:34:26 +0000] [2] [INFO] Listening at: http://0.0.0.0:39275 (2)
2024-04-11T18:34:26.267933+00:00 app[web.1]: [2024-04-11 18:34:26 +0000] [2] [INFO] Using worker: sync
2024-04-11T18:34:26.270092+00:00 app[web.1]: [2024-04-11 18:34:26 +0000] [9] [INFO] Booting worker with pid: 9
2024-04-11T18:34:26.352229+00:00 app[web.1]: [2024-04-11 18:34:26 +0000] [10] [INFO] Booting worker with pid: 10
2024-04-11T18:34:26.644297+00:00 heroku[web.1]: State changed from starting to up
2024-04-11T18:34:40.000000+00:00 app[api]: Build succeeded
Tags: python, flask, heroku, scikit-learn
1 Answer

0 votes

You can reach your Heroku app without specifying a port number. Use the URL

https://tunetracer-featureextraction-d0dee5876f1e.herokuapp.com/extract_features

(without the :5000). Heroku reverse-proxies the dynamic port assigned to your app ($PORT), so you reach it over the standard http(s) ports.
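As a quick client-side check, a request like the following sketch (using the `requests` library; the audio filename is a made-up placeholder) should now reach the app, since the scheme's default port is used instead of `:5000`:

```python
from urllib.parse import urlparse

URL = "https://tunetracer-featureextraction-d0dee5876f1e.herokuapp.com/extract_features"

# No explicit port in the URL: Heroku's router accepts standard
# https traffic and forwards it to the dyno's $PORT internally.
assert urlparse(URL).port is None

if __name__ == "__main__":
    import requests  # pip install requests

    # "song.wav" is a placeholder filename for illustration.
    with open("song.wav", "rb") as f:
        resp = requests.post(URL, files={"file": f})
    print(resp.json())
```

The `files={"file": ...}` key must match the `request.files['file']` lookup in the Flask handler above.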

© www.soinside.com 2019 - 2024. All rights reserved.