Llama2: error converting model weights to run with Hugging Face

Problem description

I am following the steps listed here: https://ai.meta.com/blog/5-steps-to-getting-started-with-llama-2/. I was able to complete the first few steps, but when I tried to follow the "Convert the model weights to run with Hugging Face" step, I got the error below.

Command

pip install protobuf && python3 $TRANSFORM --input_dir ./llama-2-7b-chat --model_size 7B --output_dir ./llama-2-7b-chat-hf --llama_version 2

Error


Traceback (most recent call last):
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 339, in <module>
    main()
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 326, in main
    write_model(
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 94, in write_model
    params = read_json(os.path.join(input_base_path, "params.json"))
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 75, in read_json
    return json.load(f)
  File "/usr/lib/python3.10/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
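
The traceback shows `json.load` failing at "line 1 column 1 (char 0)" while reading `params.json` from the `--input_dir`, which typically means the file is missing content: empty, truncated, or a Git LFS pointer rather than the downloaded weights metadata. A minimal diagnostic sketch (the `check_params` helper is hypothetical, and the path mirrors the `--input_dir` from the command above):

```python
import json
import os

def check_params(input_dir):
    """Report why params.json under input_dir might fail to parse."""
    path = os.path.join(input_dir, "params.json")
    if not os.path.exists(path):
        return "missing: " + path
    if os.path.getsize(path) == 0:
        # json.load on an empty file raises exactly
        # "Expecting value: line 1 column 1 (char 0)"
        return "empty file"
    with open(path) as f:
        text = f.read()
    if text.lstrip().startswith("version https://git-lfs"):
        return "Git LFS pointer, not the real file"
    try:
        json.loads(text)
        return "valid JSON"
    except json.JSONDecodeError as e:
        return f"invalid JSON: {e}"

print(check_params("./llama-2-7b-chat"))  # same --input_dir as the command above
```

If this reports an empty file or an LFS pointer, re-downloading the `llama-2-7b-chat` weights would be the likely fix.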

Looking forward to your support and guidance.

large-language-model llama
1 Answer

Were you able to solve the problem?
