TypeError: llama_tokenize() missing 2 required positional arguments: 'add_bos' and 'special'


I'm running Python 3.11 with the latest version of llama-cpp-python and a GGUF model.

I want the code to run normally as a chatbot, but instead I get this error:

Traceback (most recent call last):
  File "d:\AI Custom\AI Arush\server.py", line 223, in <module>
    init()
  File "d:\AI Custom\AI Arush\server.py", line 57, in init
    m_eval(model, m_tokenize(model, PROMPT_INIT, True), False, "Starting up...")
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\AI Custom\AI Arush\server.py", line 182, in m_tokenize
    n_tokens = llama_cpp.llama_tokenize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: llama_tokenize() missing 2 required positional arguments: 'add_bos' and 'special'

Here is my tokenization code:

def m_tokenize(model: llama_cpp.Llama, text: bytes, add_bos=False, special=False):
    assert model.ctx is not None
    n_ctx = llama_cpp.llama_n_ctx(model.ctx)
    tokens = (llama_cpp.llama_token * int(n_ctx))()
    n_tokens = llama_cpp.llama_tokenize(
        model.ctx,
        text,
        tokens,
        n_ctx,
        llama_cpp.c_bool(add_bos),
    )
    if int(n_tokens) < 0:
        raise RuntimeError(f'Failed to tokenize: text="{text}" n_tokens={n_tokens}')
    return list(tokens[:n_tokens])

Please help... Thanks.

python tokenize llama

1 Answer

TypeError: llama_tokenize() missing 2 required positional arguments: 'add_bos' and 'special'

To fix the error, you need to pass the add_bos and special arguments to llama_tokenize(). Note that the current signature (quoted below from llama_cpp.py) also takes the model pointer and a text_len argument, so the call has to match it:

def m_tokenize(model: llama_cpp.Llama, text: bytes, add_bos=False, special=False):
    assert model.ctx is not None
    n_ctx = llama_cpp.llama_n_ctx(model.ctx)
    tokens = (llama_cpp.llama_token * int(n_ctx))()

    # Call llama_tokenize() with every argument its current signature expects
    # (see the excerpt from llama_cpp.py below).
    n_tokens = llama_cpp.llama_tokenize(
        model.model,                # the llama_model_p; the signature takes the model, not the context
        text,
        len(text),                  # text_len, also required by the current signature
        tokens,
        n_ctx,                      # n_max_tokens
        llama_cpp.c_bool(add_bos),  # plain Python bools also work; ctypes converts them
        llama_cpp.c_bool(special),  # the previously missing 'special' argument
    )

    if int(n_tokens) < 0:
        raise RuntimeError(f'Failed to tokenize: text="{text}" n_tokens={n_tokens}')

    return list(tokens[:n_tokens])
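
For example, assuming the model is loaded through the high-level llama_cpp.Llama wrapper (the model path below is a placeholder), the fixed helper can be used like this:

import llama_cpp

# Placeholder path; point this at your own GGUF model file.
model = llama_cpp.Llama(model_path="./models/model.gguf")

# Tokenize a prompt, prepending the BOS token as most chat prompts expect.
tokens = m_tokenize(model, b"Hello, how are you?", add_bos=True)
print(tokens)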

From llama_cpp.py (GitHub), starting at line 1817:

def llama_tokenize(
    model: llama_model_p,
    text: bytes,
    text_len: Union[c_int, int],
    tokens,  # type: Array[llama_token]
    n_max_tokens: Union[c_int, int],
    add_bos: Union[c_bool, bool],
    special: Union[c_bool, bool],
) -> int:
    """Convert the provided text into tokens."""
    return _lib.llama_tokenize(
        model, text, text_len, tokens, n_max_tokens, add_bos, special
    )


_lib.llama_tokenize.argtypes = [
    llama_model_p,
    c_char_p,
    c_int32,
    llama_token_p,
    c_int32,
    c_bool,
    c_bool,
]
_lib.llama_tokenize.restype = c_int32
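
As an aside, recent llama-cpp-python releases also expose tokenization through the high-level Llama.tokenize() method, which forwards these flags to the C function for you. A minimal sketch, assuming a release where the special parameter is available:

# Equivalent high-level call; Llama.tokenize() handles add_bos and special internally.
tokens = model.tokenize(b"Hello, how are you?", add_bos=True, special=False)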
