llama-cpp: LLM inference in C/C++ (1)

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.
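For orientation outside of T2, a typical build and test run follows the upstream CMake instructions (a minimal sketch; the model path and prompt below are placeholders you must supply yourself):

    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    cmake -B build
    cmake --build build --config Release
    # run a quick prompt against a local GGUF model (path is a placeholder)
    ./build/bin/llama-cli -m /path/to/model.gguf -p "Hello" -n 64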

This package is part of T2 SDE; it can be obtained from https://t2sde.org.

URL: https://github.com/ggerganov/llama.cpp

Author: llama-cpp Authors
Maintainer: The T2 Project <t2 [at] t2-project [dot] org>

License: MIT
Status: Stable
Version: b4798

Download: https://github.com/ggerganov/llama.cpp.git (tag b4798) as llama-cpp-b4798.tar.gz

T2 source: llama-cpp.cache
T2 source: llama-cpp.desc
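Within a checked-out T2 source tree, the package can be built with T2's per-package build script (a sketch, assuming an already prepared T2 build environment):

    # from the root of a T2 checkout
    ./scripts/Emerge-Pkg llama-cpp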

Build time (on reference hardware): 90% (relative to binutils) (2)

Installed size (on reference hardware): 65.39 MB, 98 files

Dependencies (build time detected): bash binutils cmake coreutils diffutils gawk git grep gzip linux-header make pkgconfig python sed tar tbb

Installed files (on reference hardware): n.a.

1) This page was automatically generated from the T2 package source. Corrections (e.g. for dead links, changed URLs, or typos) need to be made directly to that source.

2) Compatible with Linux From Scratch's "Standard Build Unit" (SBU): a value of 90% means this package builds in roughly 0.9 times the time needed to build binutils on the same machine.