Calling a SageMaker endpoint gives a "Need to pass custom_attributes='accept_eula=true' as part of header" error

Question · Votes: 0 · Answers: 2

I am trying to test the Llama 2 models on SageMaker Studio, following https://aws.amazon.com/blogs/machine-learning/llama-2-foundation-models-from-meta-are-now-available-in-amazon-sagemaker-jumpstart/.

I am able to run the code in a SageMaker notebook, and running it gives me a working endpoint.

So I have the endpoint URL. Normally, when I call such a URL from Postman with my AWS credentials, I get a response back from the model.

But for the Llama 2 model, when I try to call the endpoint from Postman, I get this error:

> {
>     "ErrorCode": "CLIENT_ERROR_FROM_MODEL",
>     "LogStreamArn": "arn:aws:logs:us-east-1:847137928610:log-group:/aws/sagemaker/Endpoints/meta-textgeneration-llama-2-7b-f-2023-07-26-06-06-21-772",
>     "Message": "Received client error (424) from primary with message \"{\n  \"code\":424,\n  \"message\":\"prediction failure\",\n 
> \"error\":\"Need to pass custom_attributes='accept_eula=true' as part
> of header. This means you have read and accept the
> end-user-license-agreement (EULA) of the model. EULA can be found in
> model card description or from
> https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\"\n}\".
> See
> https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/aws/sagemaker/Endpoints/meta-textgeneration-llama-2-7b-f-2023-07-26-06-06-21-772
> for more information.",
>     "OriginalMessage": "{\n  \"code\":424,\n  \"message\":\"prediction failure\",\n  \"error\":\"Need to pass
> custom_attributes='accept_eula=true' as part of header. This means you
> have read and accept the end-user-license-agreement (EULA) of the
> model. EULA can be found in model card description or from
> https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\"\n}",
>     "OriginalStatusCode": 424 }

I also tried invoking the SageMaker endpoint through Lambda and API Gateway, following https://medium.com/@woyera/how-to-use-llama-2-with-an-api-on-aws-to-power-your-ai-apps-3e5f93314b54

but there I only get

{
    "message": "Internal Server Error"
}

Any suggestions or recommendations here?

amazon-web-services amazon-sagemaker endpoint amazon-sagemaker-jumpstart
2 Answers
0
votes

Add it as one of the headers of your request, since the blog you linked mentions it:

Note that by default, accept_eula is set to false. You need to set accept_eula=true to invoke the endpoint successfully. By doing so, you accept the end-user license agreement and acceptable use policy mentioned earlier. You can also download the license agreement.
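Concretely: in Postman the value travels in the `X-Amzn-SageMaker-Custom-Attributes` HTTP header (set it to `accept_eula=true`); with boto3 it is the `CustomAttributes` parameter of `invoke_endpoint`. A minimal sketch, reusing the endpoint name from the question — the chat payload shape is an assumption based on the Llama-2-7b-f JumpStart container and is not from this answer:

```python
import json

def build_invoke_args(endpoint_name: str, payload: dict) -> dict:
    """Assemble keyword arguments for sagemaker-runtime invoke_endpoint."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        "Body": json.dumps(payload),
        # boto3's CustomAttributes maps to the
        # X-Amzn-SageMaker-Custom-Attributes HTTP header.
        "CustomAttributes": "accept_eula=true",
    }

# Assumed chat payload shape for the llama-2-7b-f JumpStart model.
payload = {
    "inputs": [[{"role": "user", "content": "What is Amazon SageMaker?"}]],
    "parameters": {"max_new_tokens": 256, "temperature": 0.6, "top_p": 0.9},
}
args = build_invoke_args(
    "meta-textgeneration-llama-2-7b-f-2023-07-26-06-06-21-772", payload
)

# With AWS credentials configured, the actual call would be:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(**args)
# print(json.loads(response["Body"].read()))
```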


0
votes

When you query the deployed model endpoint or predictor, you must send
custom_attributes
with the value
"accept_eula=true"
in the request, as shown below.

predictor.predict(payload, custom_attributes="accept_eula=true")

You can find an example notebook here.
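The Lambda + API Gateway route from the question fails for the same underlying reason: if the handler does not forward the EULA flag, the container returns the 424 and API Gateway surfaces it as a generic "Internal Server Error". A minimal handler sketch, assuming an API Gateway proxy event and an `ENDPOINT_NAME` environment variable (both are assumptions, not from the linked tutorial):

```python
import json
import os

def lambda_handler(event, context, runtime=None):
    """Forward an API Gateway proxy request to the SageMaker endpoint.

    `runtime` is injectable for testing; inside Lambda it defaults to
    the real sagemaker-runtime client.
    """
    if runtime is None:
        import boto3
        runtime = boto3.client("sagemaker-runtime")

    response = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],
        ContentType="application/json",
        Body=event["body"],
        # Without this, the model container rejects the request with
        # the 424 "prediction failure" from the question.
        CustomAttributes="accept_eula=true",
    )
    return {
        "statusCode": 200,
        "body": response["Body"].read().decode("utf-8"),
    }
```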
