
localLLM: Running Local LLMs with 'llama.cpp' Backend

Provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture where the C++ backend library is downloaded at runtime rather than bundled with the package. Package features include text generation, reproducible generation, and parallel inference.
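The sketch below illustrates the workflow the description implies: load the bindings, fetch the llama.cpp backend at runtime, then generate text with a fixed seed for reproducibility. The function names install_localLLM(), model_load(), and generate(), along with their arguments, are assumptions for illustration and may not match the package's actual API; consult the package documentation for the real interface.

library(localLLM)

# One-time setup: download the pre-built llama.cpp backend library
# (assumed helper name; the backend is fetched at runtime, not bundled).
install_localLLM()

# Load a GGUF model from disk (assumed function and argument names).
model <- model_load("path/to/model.gguf")

# Generate text; fixing the seed illustrates the reproducible-generation feature.
out <- generate(model, prompt = "Explain what a tokenizer does.", seed = 42L)
cat(out)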

Package details

Author: Eddie Yang [aut] (ORCID: <https://orcid.org/0000-0002-3696-3226>), Yaosheng Xu [aut, cre] (ORCID: <https://orcid.org/0009-0006-8138-369X>)
Maintainer: Yaosheng Xu <[email protected]>
License: MIT + file LICENSE
Version: 1.2.1
URL: https://github.com/EddieYang211/localLLM
Package repository: CRAN

Installation: Install the latest version of this package by entering the following in R:
install.packages("localLLM")
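The development version can typically be installed from the GitHub repository listed above; the call below uses the remotes package and assumes the repository root contains the R package sources.

install.packages("remotes")
remotes::install_github("EddieYang211/localLLM")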

