Pull requests: ollama/ollama
- llamafile: add missing sgemm-ppc.h from llama.cpp PR #16999 (#13510, opened Dec 17, 2025 by shalinib-ibm)
- Fix: NameError in Python example in docs/capabilities/streaming.mdx (#13507, opened Dec 17, 2025 by j0m0k0)
- fix: preserve partial UTF-8 bytes in logprobs API response (#13500, opened Dec 16, 2025 by nathannewyen)
- feat: prevent system sleep while Ollama is processing requests (#13467, opened Dec 14, 2025 by majiayu000)
- ggml: fix mem_hip reporting zero GPU memory on AMD APUs (MI300A) (#13463, opened Dec 14, 2025 by javicacheiro)
- fix: add CORS headers to redirect responses (#13458, opened Dec 13, 2025 by nathannewyen)
- fix: use binary prefixes (1024) for file size display (#13457, opened Dec 13, 2025 by nathannewyen)
- ollamarunner: Automatically enable flash attention (#13448, opened Dec 12, 2025 by jessegross)
- app: add ability to persist unsent prompt across navigation (#13446, opened Dec 12, 2025 by hoyyeva)
- ggml-cpu: Enable Matrix Math Accelerator for Power10 (#13428, opened Dec 11, 2025 by shalinib-ibm)
- convert: check file size for safetensors to warn for improper conversion (#13417, opened Dec 11, 2025 by ParthSareen)