ask_openai.py – call the OpenAI API, API_KEY required
ask_ollama.py – call the local Ollama API (localhost:11434)
Required Python packages:
- prompt_toolkit (input)
- openai (ask_openai.py)
- requests (ask_ollama.py)
- rich (output)
ask_openai.py uses the openai Python library to call the OpenAI API. An internet connection is required.
ask_openai.toml ➔ insert your API_KEY by editing the my_api_key = "sk-***" line before running the script.
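For reference, the expected shape of ask_openai.toml (the key name is taken from the line above; the value is a placeholder):

```toml
# ask_openai.toml
my_api_key = "sk-***"
```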
usage: ./ask_openai.py <model> [question?]
Choose a specific OpenAI model ➔ see: platform.openai.com/docs/models.
ask_ollama.py uses the requests Python library to call the local Ollama API (Ollama is based on llama.cpp). No internet connection is required. Install Ollama first and start the local server (localhost:11434). To download LLMs, use the ollama run <model> command.
usage: ./ask_ollama.py <model> [question?]
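A minimal sketch of what ask_ollama.py might look like (hypothetical structure; the endpoint and payload follow the public Ollama REST API, the real script may differ):

```python
#!/usr/bin/env python3
"""Sketch of ask_ollama.py: query a local Ollama model over HTTP."""
import sys

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model, prompt):
    # stream=False makes Ollama return the whole answer as one JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model, question):
    # Imported here so the sketch can be read without the package installed.
    import requests  # pip install requests

    resp = requests.post(
        OLLAMA_URL, json=build_payload(model, question), timeout=300
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: ./ask_ollama.py <model> [question]")
    model = sys.argv[1]
    question = " ".join(sys.argv[2:]) or input("question: ")
    print(ask(model, question))
```

Run it as `./ask_ollama.py llama3.2 "What is llama.cpp?"` while the Ollama server is running.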
The best general-purpose LLMs for computers with 8–16 GB of RAM in March 2025 are gemma3n:e4b by Google (4B, 7.5 GB) and llama3.2 by Meta (3B, 2.0 GB). See also: ollama.com/models.