[COPY] --- T2-COPYRIGHT-BEGIN ---
[COPY] t2/package/*/llama-cpp/llama-cpp.desc
[COPY] Copyright (C) 2025 The T2 SDE Project
[COPY] SPDX-License-Identifier: GPL-2.0
[COPY] --- T2-COPYRIGHT-END ---

[I] LLM inference in C/C++

[T] The main goal of llama.cpp is to enable LLM inference with minimal setup
[T] and state-of-the-art performance on a wide range of hardware - locally and
[T] in the cloud.

[U] https://github.com/ggerganov/llama.cpp

[A] llama-cpp Authors
[M] The T2 Project <t2@t2-project.org>

[C] extra/development
[F] OBJDIR

[V] b5002
[L] MIT
[S] Stable

[P] X -----5---9 700.000

[O] [ $prefix_auto = 1 ] && prefix=opt/llama-cpp && set_confopt
[O] pkginstalled curl && var_append cmakeopt ' ' -DLLAMA_CURL=ON
[O] if pkginstalled opencl-loader && pkginstalled opencl-headers; then var_append cmakeopt ' ' -DGGML_OPENCL=ON; fi

#[O] hook_add postmake 5 "cp -rvf ../{models,scripts,prompts,examples,docs} $root/$prefix/"

[D] 25dc34078c46be02fa7e0fca19e0bc4340d538ffdae59729483baffb llama-cpp-b5002.tar.gz git+https://github.com/ggerganov/llama.cpp.git b5002
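
# A minimal sketch of how a further GGML backend could be enabled in the same
# conditional style as the OpenCL option above. GGML_VULKAN is an upstream
# llama.cpp CMake option; the vulkan-loader and vulkan-headers package names
# are assumptions here, and the line is left commented out, untested:
#[O] if pkginstalled vulkan-loader && pkginstalled vulkan-headers; then var_append cmakeopt ' ' -DGGML_VULKAN=ON; fi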