llama-cpp: LLM inference in C/C++ 1)

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.
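A minimal usage sketch (not part of the generated package data), assuming the default CMake build installs the upstream llama-cli and llama-server binaries and that a GGUF model file is available locally; the model path below is a placeholder:

  # run a one-shot prompt against a local GGUF model
  llama-cli -m /path/to/model.gguf -p "Hello from T2" -n 128

  # or serve an OpenAI-compatible HTTP API on port 8080
  llama-server -m /path/to/model.gguf --port 8080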


URL: https://github.com/ggerganov/llama.cpp

Author: llama-cpp Authors
Maintainer: The T2 Project <t2 [at] t2-project [dot] org>

License: MIT
Status: Stable
Version: b4589

Download: https://github.com/ggerganov/llama.cpp.git b4589 llama-cpp-b4589.tar.gz

T2 source: llama-cpp.cache
T2 source: llama-cpp.desc

Build time (on reference hardware): 170% (relative to binutils) 2)

Installed size (on reference hardware): 127.99 MB, 1222 files

Dependencies (build time detected): bash binutils cmake coreutils curl diffutils gawk git grep gzip linux-header make openssl pkgconfig python python-gpep517 python-poetry-core sed tar tbb

Installed files (on reference hardware): n.a.

1) This page was automatically generated from the T2 package source. Corrections, such as dead links, URL changes, or typos, need to be made directly to that source.

2) Compatible with Linux From Scratch's "Standard Build Unit" (SBU).