haalven/LLM_terminal

ask LLMs in the Terminal

ask_openai.py – call the OpenAI API, API_KEY required

ask_ollama.py – call the local Ollama API (localhost:11434)

OpenAI API

ask_openai.py uses the openai Python library to call the OpenAI API. An internet connection is required.

ask_openai.toml ➔ insert your API key by editing the my_api_key = "sk-***" line before running the script.

usage: ./ask_openai.py <model> [question?]

Choose a specific OpenAI model ➔ see: platform.openai.com/docs/models.
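What ask_openai.py does can be sketched roughly like this. The function and helper names below are illustrative, not the script's actual code; the library call is the openai (v1+) chat-completions interface:

```python
# sketch of a chat call via the openai Python library (v1+)
def build_messages(question: str) -> list[dict]:
    # a single-turn conversation: one user message
    return [{'role': 'user', 'content': question}]

def ask_openai(model: str, question: str, api_key: str) -> str:
    from openai import OpenAI            # imported lazily: needs `pip install openai`
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model=model,                     # e.g. 'gpt-4o-mini'
        messages=build_messages(question),
    )
    return response.choices[0].message.content
```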

Ollama API

ask_ollama.py uses the requests Python library to call the local Ollama API (based on llama.cpp). An internet connection is not required. Install Ollama first and start the local server (localhost:11434). To download LLMs use the ollama run <model> command.

usage: ./ask_ollama.py <model> [question?]
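A rough sketch of the request such a script can make with the requests library, using Ollama's /api/generate endpoint; the helper names are illustrative, not the script's actual code:

```python
# sketch of a non-streaming call to the local Ollama API
OLLAMA_URL = 'http://localhost:11434/api/generate'

def build_payload(model: str, prompt: str) -> dict:
    # stream=False makes Ollama return one JSON object instead of a token stream
    return {'model': model, 'prompt': prompt, 'stream': False}

def ask_ollama(model: str, prompt: str) -> str:
    import requests                      # imported lazily: needs `pip install requests`
    r = requests.post(OLLAMA_URL, json=build_payload(model, prompt), timeout=300)
    r.raise_for_status()
    return r.json()['response']          # 'response' holds the full answer when stream=False
```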

The best general LLMs for computers with 8–16 GB of RAM in March 2025 are gemma3n:e4b by Google (4B, 7.5GB) and llama3.2 by Meta (3B, 2.0GB). See also: ollama.com/models.

Example

(example screenshot)