Installing llama-cpp-python with Metal fails on an M2 Ultra

Problem description (votes: 0, answers: 1)

I followed the instructions at https://llama-cpp-python.readthedocs.io/en/latest/install/macos/.

My macOS version is Sonoma 14.4, and the Xcode Command Line Tools are installed (xcode-select version: 15.3.0.0.1.1708646388).

I created a conda environment with:

conda version: 24.1.2
Python version: 3.10.13.final.0
platform: osx-arm64

I ran the following command to install llama-cpp-python:

CMAKE_ARGS="-DLLAMA_METAL=on" pip install -U llama-cpp-python --no-cache-dir

but it failed with the following error message:

Collecting llama-cpp-python
  Downloading llama_cpp_python-0.2.57.tar.gz (36.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.9/36.9 MB 34.2 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /Users/mjchoi/miniforge3/envs/llama/lib/python3.10/site-packages (from llama-cpp-python) (4.8.0)
Requirement already satisfied: numpy>=1.20.0 in /Users/mjchoi/miniforge3/envs/llama/lib/python3.10/site-packages (from llama-cpp-python) (1.26.1)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 in /Users/mjchoi/miniforge3/envs/llama/lib/python3.10/site-packages (from llama-cpp-python) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in /Users/mjchoi/miniforge3/envs/llama/lib/python3.10/site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.3)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 405.9 MB/s eta 0:00:00
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [66 lines of output]
      *** scikit-build-core 0.8.2 using CMake 3.27.7 (wheel)
      *** Configuring CMake...
      2024-03-19 17:44:25,052 - scikit_build_core - WARNING - libdir/ldlibrary: /Users/mjchoi/miniforge3/envs/llama/lib/libpython3.10.a is not a real file!
      2024-03-19 17:44:25,052 - scikit_build_core - WARNING - Can't find a Python library, got libdir=/Users/mjchoi/miniforge3/envs/llama/lib, ldlibrary=libpython3.10.a, multiarch=darwin, masd=None
      loading initial cache file /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/CMakeInit.txt
      -- The C compiler identification is AppleClang 15.0.0.15000309
      -- The CXX compiler identification is AppleClang 15.0.0.15000309
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-146)")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
      -- Found Threads: TRUE
      -- Accelerate framework found
      -- Metal framework found
      -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
      -- CMAKE_SYSTEM_PROCESSOR: arm64
      -- ARM detected
      -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
      -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
      CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:1218 (install):
        Target llama has RESOURCE files but no RESOURCE DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      CMake Warning (dev) at CMakeLists.txt:21 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      CMake Warning (dev) at CMakeLists.txt:30 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      -- Configuring done (0.4s)
      -- Generating done (0.0s)
      -- Build files have been written to: /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build
      *** Building project with Ninja...
      Change Dir: '/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build'
      
      Run Build Command(s): /opt/homebrew/bin/ninja -v
      [1/25] cd /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/vendor/llama.cpp && xcrun -sdk macosx metal -O3 -c /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.metal -o /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.air && xcrun -sdk macosx metallib /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.air -o /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/default.metallib && rm -f /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.air && rm -f /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-common.h && rm -f /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.metal
      FAILED: bin/default.metallib /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/default.metallib
      cd /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/vendor/llama.cpp && xcrun -sdk macosx metal -O3 -c /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.metal -o /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.air && xcrun -sdk macosx metallib /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.air -o /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/default.metallib && rm -f /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.air && rm -f /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-common.h && rm -f /var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/tmp6kq2va35/build/bin/ggml-metal.metal
      xcrun: error: unable to find utility "metal", not a developer tool or in PATH
      [2/25] cd /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp && /opt/homebrew/Cellar/cmake/3.27.7/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=15.0.0.15000309 -DCMAKE_C_COMPILER_ID=AppleClang -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/Library/Developer/CommandLineTools/usr/bin/cc -P /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/common/../scripts/gen-build-info-cpp.cmake
      -- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-146)")
      [3/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/ggml-alloc.c
      [4/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/ggml-backend.c
      [5/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/. -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/llava.cpp
      [6/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/ggml-metal.m
      [7/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/ggml-quants.c
      [8/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/unicode.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/unicode.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/unicode.cpp.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/unicode.cpp
      [9/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/. -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/examples/llava/clip.cpp
      [10/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/ggml.c
      [11/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/kl/zn5jq0yn7fbb5rr67rmyfsrr0000gn/T/pip-install-u_i4wi0c/llama-cpp-python_6f999557aa7a4fc790f0ac043d8dc610/vendor/llama.cpp/llama.cpp
      ninja: build stopped: subcommand failed.
      
      
      *** CMake build failed
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

I also tried the following commands:

export CMAKE_ARGS="-DLLAMA_METAL=on" 
export FORCE_CMAKE=1 
pip install llama-cpp-python --no-cache-dir   

but I got the same error message.

I also tried the older Python version (3.9.16) mentioned in the linked instructions, but that did not work either.

Can anyone help me solve this problem? Many thanks in advance.

macos metal llama-cpp-python
1 Answer

First of all, I would like to thank Spo1ler for spotting this.

I had only installed the Command Line Tools (via xcode-select), not Xcode.app, so the active developer directory was

/Library/Developer/CommandLineTools

Based on https://github.com/gfx-rs/gfx/issues/2309#issuecomment-506130902, I installed Xcode.app from the App Store. After that, I could see that the xcode-select path had automatically changed to

/Applications/Xcode.app/Contents/Developer
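To confirm that the switch took effect, you can check the active developer directory and ask `xcrun` to locate the `metal` tool that the failing build step needed (a quick sketch; the expected path assumes a default App Store install of Xcode):

```shell
# Show the active developer directory; it should now point at Xcode.app,
# not at /Library/Developer/CommandLineTools.
xcode-select -p

# Ask xcrun where the Metal compiler lives; this is the utility the
# build previously reported as "unable to find".
xcrun -sdk macosx -find metal
```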

I then had to launch Xcode.app once to accept the Xcode license agreement. After that, the following commands worked:
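If you would rather not open the Xcode GUI, the same two steps (switching the developer directory and accepting the license) can be done from the terminal; the path below assumes the default App Store install location:

```shell
# Point command-line builds at the full Xcode toolchain.
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer

# Accept the Xcode license without launching the app.
sudo xcodebuild -license accept
```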

export CMAKE_ARGS="-DLLAMA_METAL=on" 
export FORCE_CMAKE=1 
pip install llama-cpp-python --no-cache-dir   
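Once the wheel builds, you can check that the package was installed without loading a model (a minimal sketch; it only probes for the package's presence):

```python
import importlib.util

# Locate the installed llama_cpp package without importing it, which
# avoids initializing the native library just for a presence check.
spec = importlib.util.find_spec("llama_cpp")
print("llama_cpp installed:", spec is not None)
```

When a model is later loaded with `n_gpu_layers` greater than zero, llama.cpp should log `ggml_metal_init` lines to stderr, which confirms that the Metal backend is actually being used.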