Insights: ggml-org/llama.cpp
Overview
25 Releases published by 1 person
- b5749 published Jun 24, 2025
- b5751 published Jun 24, 2025
- b5752 published Jun 24, 2025
- b5753 published Jun 24, 2025
- b5754 published Jun 25, 2025
- b5755 published Jun 25, 2025
- b5756 published Jun 25, 2025
- b5757 published Jun 26, 2025
- b5759 published Jun 26, 2025
- b5760 published Jun 26, 2025
- b5769 published Jun 28, 2025
- b5770 published Jun 28, 2025
- b5771 published Jun 28, 2025
- b5772 published Jun 28, 2025
- b5773 published Jun 28, 2025
- b5774 published Jun 28, 2025
- b5775 published Jun 29, 2025
- b5777 published Jun 29, 2025
- b5778 published Jun 29, 2025
- b5780 published Jun 29, 2025
- b5782 published Jun 30, 2025
- b5783 published Jun 30, 2025
- b5784 published Jun 30, 2025
- b5785 published Jun 30, 2025
- b5787 published Jun 30, 2025
41 Pull requests merged by 23 people
- Add Conv2d for CPU (#14388, merged Jun 30, 2025)
- memory : correctly handle failure in apply() (#14438, merged Jun 30, 2025)
- metal : disable fast-math for some cpy kernels (#14460, merged Jun 30, 2025)
- ggml-cpu: sycl: Re-enable exp f16 (#14462, merged Jun 30, 2025)
- test-backend-ops : disable llama test (#14461, merged Jun 30, 2025)
- Remove redundant include path in CMakeLists.txt (#14452, merged Jun 30, 2025)
- Make the shell scripts cross-platform (#14341, merged Jun 30, 2025)
- Support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client (#13196, merged Jun 29, 2025)
- Fix appearance of the chats list context menu for the browser Safari (#14322, merged Jun 29, 2025)
- SYCL: disable faulty fp16 exp kernel (#14395, merged Jun 29, 2025)
- ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (#14443, merged Jun 29, 2025)
- ggml : implement REGLU/GEGLU/SWIGLU ops (#14158, merged Jun 29, 2025)
- vulkan: Add fusion support for RMS_NORM+MUL (#14366, merged Jun 29, 2025)
- CUDA: add bf16 and f32 support to cublas_mul_mat_batched (#14361, merged Jun 28, 2025)
- vulkan: Increase workgroup size for GLU, for performance (#14345, merged Jun 28, 2025)
- vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378, merged Jun 28, 2025)
- vulkan: lock accesses of pinned_memory vector (#14333, merged Jun 28, 2025)
- model : add support for ERNIE 4.5 0.3B model (#14408, merged Jun 28, 2025)
- [CANN] Fix a bug related to enabling async_mode (#14432, merged Jun 28, 2025)
- ci : fix windows build and release (#14431, merged Jun 28, 2025)
- vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (#14427, merged Jun 28, 2025)
- graph : make llm_graph_context destructor virtual (#14410, merged Jun 27, 2025)
- recurrent : call balloc split_reset() in init_batch() (#14414, merged Jun 27, 2025)
- ggml : add ggml_set_rows (#14274, merged Jun 27, 2025)
- convert : fix broken sentencepiece vocab (#14416, merged Jun 27, 2025)
- model : gemma3n text-only (#14400, merged Jun 26, 2025)
- cmake: regen vulkan shaders when shaders-gen sources change (#14398, merged Jun 26, 2025)
- llama : return mistral-v7-tekken as default template only (#14390, merged Jun 26, 2025)
- metal : add special-case mat-vec mul for ne00 == 4 (#14385, merged Jun 26, 2025)
- metal : batch rows copy in a single threadgroup (#14384, merged Jun 26, 2025)
- docs: update s390x documentation + add faq (#14389, merged Jun 26, 2025)
- musa: enable fp16 mma (all) and cublas on qy2 (#13842, merged Jun 26, 2025)
- ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317, merged Jun 25, 2025)
- ggml : do not output unprintable characters on GGUF load failure (#14381, merged Jun 25, 2025)
- sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (#13973, merged Jun 25, 2025)
- opencl: ref count ggml_backend_opencl_context and refactor profiling (#14254, merged Jun 24, 2025)
- batch : fix check for empty sequences in memory (#14364, merged Jun 24, 2025)
- cmake : use LLAMA_BUILD_NUMBER when defining LLAMA_INSTALL_VERSION (#14362, merged Jun 24, 2025)
- docs: Fix server API key doc for /props (move it to /health) (#14352, merged Jun 24, 2025)
- main : honor --verbose-prompt on interactive prompts (#14350, merged Jun 24, 2025)
- Add Mistral-Small-3.2-24B-Instruct-2506.jinja (#14349, merged Jun 24, 2025)
27 Pull requests opened by 19 people
- Add script to test op perf and compare (#14354, opened Jun 24, 2025)
- llama : expose C API to get layer device type (#14358, opened Jun 24, 2025)
- server : fix assistant prefilling when content is an array (#14360, opened Jun 24, 2025)
- llama : add high-throughput mode (#14363, opened Jun 24, 2025)
- test-backend-ops: add support for specifying output format (#14368, opened Jun 25, 2025)
- docs: fix broken url in main readme (#14371, opened Jun 25, 2025)
- Q2k interleaving implementation - x86/x64 SIMD (#14373, opened Jun 25, 2025)
- webui: preserve partial content when streaming errors occur (#14374, opened Jun 25, 2025)
- ggml-cpu: Build variant targeting Neoverse-V2 (#14380, opened Jun 25, 2025)
- compare-commits.sh: support both llama-bench and test-backend-ops (#14392, opened Jun 26, 2025)
- ggml : add pointer to attach user data (#14397, opened Jun 26, 2025)
- OpenCL: add conv2d kernel (#14403, opened Jun 26, 2025)
- [CANN] weight format to nz for Ascend310P3 (#14407, opened Jun 27, 2025)
- [CANN] update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (#14411, opened Jun 27, 2025)
- ggml : add ggml_scale_bias (#14417, opened Jun 27, 2025)
- model : add hunyuan moe (#14425, opened Jun 27, 2025)
- ggml : support broadcast for ggml_soft_max_ext and ggml_flash_attn_ext (#14435, opened Jun 28, 2025)
- Added CI with RISC-V RVV1.0 Hardware (#14439, opened Jun 29, 2025)
- ggml : implement GEGLU_ERF and GEGLU_QUICK ops (#14445, opened Jun 29, 2025)
- Pr/7191 (#14447, opened Jun 29, 2025)
- vulkan: support softmax/FA batch and broadcast (#14449, opened Jun 29, 2025)
- convert : correct gemma 3n conversion (#14450, opened Jun 29, 2025)
- vulkan: Split large mul_mat_id to fit in shared memory (#14451, opened Jun 29, 2025)
- vulkan : add GELU_ERF (#14455, opened Jun 30, 2025)
- opencl: add `GEGLU`, `REGLU`, `SWIGLU` (#14456, opened Jun 30, 2025)
- Chore: batch prompts, extract tensors specific layer (#14463, opened Jun 30, 2025)
- server : (webui) let server send locally-defined default webui settings (#14468, opened Jun 30, 2025)
32 Issues closed by 8 people
- Feature Request: add jina embeddings model available to convert to gguf (#12327, closed Jun 30, 2025)
- Eval bug: [CANN] When using aclnnMatmul with cube_math_type=2 (#14441, closed Jun 30, 2025)
- Eval bug: multimodal llama-gemma3-cli gives nonsensical outputs when used with Vulkan (#13046, closed Jun 30, 2025)
- Eval bug: GGUF conversion from LLaVA 1.6 (LLaVA NeXT) doesn't work (#13593, closed Jun 30, 2025)
- Is PLE offloading to GPU supported? (#14430, closed Jun 29, 2025)
- Eval bug: Weight repacking for AVX2 block interleaving is very slow and NUMA unfriendly (#12759, closed Jun 29, 2025)
- Feature Proposal: Server Model Switching at Runtime (#13027, closed Jun 29, 2025)
- Feature Request: Add new model support: Hunyuan-A13B (#14433, closed Jun 28, 2025)
- Misc. bug: Fix CI for Windows (#14412, closed Jun 28, 2025)
- (Discussion) Improve usability of llama-server (#13367, closed Jun 28, 2025)
- Research: How to integrate VITA 1.5 for multi-modal GGUF deployment? (#13520, closed Jun 28, 2025)
- Feature Request: XiaomiMiMo/MiMo-7B-RL (#13218, closed Jun 27, 2025)
- Why is mul_mat in ggml slower than in llama.cpp? (#13473, closed Jun 27, 2025)
- Eval bug: gpt2 model fine-tuned with LoRA and saved to GGUF does not work properly (#13489, closed Jun 27, 2025)
- Eval bug: BGE-M3 Embedding model is not accessible (#13494, closed Jun 27, 2025)
- Misc. bug: llama-cli stopped starting in release b4191 (c9b00a7) (#13498, closed Jun 27, 2025)
- Feature Request: Apple just released Fast-VLM, a very promising set of multimodal language models (#13512, closed Jun 27, 2025)
- Feature Request: Qwen 2.5 VL (#11483, closed Jun 26, 2025)
- Misc. bug: Model not loaded on Android with NDK (#13399, closed Jun 26, 2025)
- Eval bug: I cannot run llama 405b on CPU (#13475, closed Jun 26, 2025)
- web UI either doesn't scroll or jumps to the wrong element (#13479, closed Jun 26, 2025)
- Partial offload support for training (#13486, closed Jun 26, 2025)
- Misc. bug: ggml_cuda_compute_forward: MUL failed, ROCm error: invalid device function (#14370, closed Jun 25, 2025)
- Misc. bug: llama-server slower on 4-bit quantized model with f470bc36bed (#14235, closed Jun 25, 2025)
- Misc. bug: Completions hang after CUDA error, but health endpoint reports all OK (#13281, closed Jun 25, 2025)
- Misc. bug: The web UI of llama-server is not displaying correctly (#13428, closed Jun 25, 2025)
- Compile bug: ld returned 1 exit status (file bigger than 2 GB) (#13446, closed Jun 25, 2025)
- Drop support for sentencepiece (#13448, closed Jun 25, 2025)
- Misc. bug: Illegal CUDA memory access in ggml_backend_cuda_cpy_tensor_async (#13449, closed Jun 25, 2025)
- Feature Request: add draft model in llama-bench and more (#13456, closed Jun 25, 2025)
32 Issues opened by 32 people
- Feature Request: per-chat prompt caching (#14470, opened Jul 1, 2025)
- Eval bug: Gemma vision head (possibly Siglip) yields garbage on Vulkan / SYCL on Intel N150 (#14469, opened Jun 30, 2025)
- Compile bug: ValueError: Can not map tensor 'lm_head.biases' when converting Qwen3-8B (MLX fused LoRA) model (#14467, opened Jun 30, 2025)
- Feature Request: Add Ernie4.5MoE support (#14465, opened Jun 30, 2025)
- Compile bug: (#14464, opened Jun 30, 2025)
- Misc. bug: convert_hf_to_gguf.py not working on qwen3-embedding and qwen3-embedding LoRA-tuned models (#14459, opened Jun 30, 2025)
- Misc. bug: OOM, but the process does not exit (#14458, opened Jun 30, 2025)
- Eval bug: gemma-3n crash when using HIP (#14448, opened Jun 29, 2025)
- Memory isn't freed with a particular set of options (#14446, opened Jun 29, 2025)
- Eval bug: Loading multimodal ultravox model locally fails at loading clip model without any errors (#14444, opened Jun 29, 2025)
- Feature Request: Adding Parquet support for tokenized datasets (#14442, opened Jun 29, 2025)
- Compile bug: SYCL with oneAPI Toolkit 2025.2 & NixOS (#14440, opened Jun 29, 2025)
- Eval bug: Extreme perplexity for gemma 3n (#14437, opened Jun 29, 2025)
- Misc. bug: Saving and restoring an empty slot causes a crash (#14434, opened Jun 28, 2025)
- Feature Request: Gemma3n multimodal support (#14429, opened Jun 28, 2025)
- Eval bug: GGML_ASSERT(nei0 * nei1 <= 4096) failed when setting ubatch to 2048 on Qwen 3-30B (#14426, opened Jun 27, 2025)
- Eval bug: example/finetune.cpp crashing (#14424, opened Jun 27, 2025)
- Misc. bug: (#14422, opened Jun 27, 2025)
- bug: GGML_ASSERT(backend_embd != nullptr) failed error at llama.cpp:14775 (#14418, opened Jun 27, 2025)
- Feature Request: Hunyuan-A13B model support (#14415, opened Jun 27, 2025)
- Eval bug: Unexpected empty grammar stack after accepting piece: <unused32> (#14413, opened Jun 27, 2025)
- Eval bug: Tools crash and/or fail for deepseek r1/v3 unsloth dynamic quantization (#14406, opened Jun 26, 2025)
- main: failed to quantize model from 'gemma-3n-E2B-it.f16.gguf' (#14405, opened Jun 26, 2025)
- Feature Request: Generic CPU in ggml-cpu/arch (#14402, opened Jun 26, 2025)
- Feature Request: allow running llama with an idle (lowest) priority as well (#14382, opened Jun 25, 2025)
- Feature Request: Exclude thinking tokens from server cache for reasoning models (#14379, opened Jun 25, 2025)
- Request for Official Support of AMD Ryzen AI Platform NPU (#14377, opened Jun 25, 2025)
- Compile error for ggml_gemv_q4_K_8x8_q8_K on Intel x86_64 macOS (AVX2) (#14372, opened Jun 25, 2025)
- Misc. bug: Inconsistent Gemma3 implementation in rope factor (#14367, opened Jun 24, 2025)
77 Unresolved conversations
Conversations sometimes continue on older items that are not yet closed. Below is a list of all the Issues and Pull Requests with unresolved conversations.
- ggml: adds CONV_2D op and direct GEMM Vulkan implementation (#14316, commented on Jun 30, 2025; 6 new comments)
- imatrix: add option to display importance score statistics for a given imatrix file (#12718, commented on Jun 29, 2025; 1 new comment)
- [server] webui DB import and export (#14347, commented on Jun 24, 2025; 0 new comments)
- llama-server : implement universal assisted decoding (#12635, commented on Jun 28, 2025; 0 new comments)
- [WIP] backend: Integrating QNN (Qualcomm AI Engine Direct) as a dedicated backend for Qualcomm NPUs (#12063, commented on Jun 30, 2025; 0 new comments)
- Overlap CUDA graph building and processing to minimize GPU idle time and improve tokens-per-second performance (#11867, commented on Jun 29, 2025; 0 new comments)
- [Draft] Tensor Parallel support to llama.cpp (#9648, commented on Jun 28, 2025; 0 new comments)
- imatrix : use GGUF to store importance matrices (#9400, commented on Jun 25, 2025; 0 new comments)
- llama : initial Mamba-2 support (#9126, commented on Jun 30, 2025; 0 new comments)
- Feature Request: --swa-extra parameter needed to restore speculative decode function with SWA (#13747, commented on Jul 1, 2025; 0 new comments)
- Misc. bug: Decreased success rate for tool calling (#13769, commented on Jul 1, 2025; 0 new comments)
- Feature Request: Regarding Hardcoded GGML Tensor Name Length Limit (GGML_MAX_NAME) (#13947, commented on Jul 1, 2025; 0 new comments)
- Feature Request: Granite 4 Support (#13275, commented on Jun 30, 2025; 0 new comments)
- Enhancement: Improve ROCm performance on various quants (benchmarks included) (#11931, commented on Jun 30, 2025; 0 new comments)
- Feature Request: Qwen2.5-Omni (#12673, commented on Jun 30, 2025; 0 new comments)
- Compile bug: nvcc fatal : Unsupported gpu architecture 'compute_' (#13893, commented on Jun 30, 2025; 0 new comments)
- Feature Request: Generate Image Embeddings with llama.cpp (#13913, commented on Jun 30, 2025; 0 new comments)
- Compile bug: allocator.h:165:24 Call to implicitly-deleted copy constructor of 'std::unique_ptr<llama_adapter_lora, llama_adapter_lora_deleter>' (#13925, commented on Jun 30, 2025; 0 new comments)
- Eval bug: llama-cli, Qwen3 jinja template will break CLI multiturn conversation (#13404, commented on Jun 29, 2025; 0 new comments)
- Misc. bug: Complex tool calling schema causes an "Unrecognized Schema" exception (#14227, commented on Jun 29, 2025; 0 new comments)
- Misc. bug: linux/arm64 does not exist for the server docker image (#13891, commented on Jun 29, 2025; 0 new comments)
- make "server-core" library (#14331, commented on Jun 30, 2025; 0 new comments)
- kv-cache : use ggml_set_rows (#14285, commented on Jun 30, 2025; 0 new comments)
- MODEL: Falcon-H1 support (#14238, commented on Jun 25, 2025; 0 new comments)
- Mtmd: add a way to select device for vision encoder (#14236, commented on Jun 26, 2025; 0 new comments)
- ggml: introduce GGML_NUMA_MIGRATE to optimize cross NUMA op computation (#14232, commented on Jun 25, 2025; 0 new comments)
- tests : add test-model-random (#14139, commented on Jun 26, 2025; 0 new comments)
- ggml: aarch64: Implement SVE Kernels for Int 8 Quantization (#14117, commented on Jun 30, 2025; 0 new comments)
- llama : support qwen3 rerank and embeddings (#14029, commented on Jun 26, 2025; 0 new comments)
- server: Enable mtmd in llama-server `/completion` endpoint (#14016, commented on Jun 27, 2025; 0 new comments)
- [CANN]: Replace aclrtMemsetSync with InplaceZero operator for zero tensor creation (#14002, commented on Jul 1, 2025; 0 new comments)
- Add plamo2 (#13930, commented on Jun 29, 2025; 0 new comments)
- finetune.cpp command-line arg (#13873, commented on Jul 1, 2025; 0 new comments)
- cmake : set `RPATH` to `$ORIGIN` on Linux (#13740) (#13741, commented on Jun 25, 2025; 0 new comments)
- Move page cache via mbind to prevent cross-NUMA access (#13731, commented on Jun 30, 2025; 0 new comments)
- model : jina-embeddings-v3 support (#13693, commented on Jun 28, 2025; 0 new comments)
- Granite Four (#13550, commented on Jun 30, 2025; 0 new comments)
- Update llama-quant.cpp llama_tensor_get_type with DeepSeek-friendly modifications (#12727, commented on Jun 30, 2025; 0 new comments)
- WIP: Add support for CogAgent (#12679, commented on Jun 24, 2025; 0 new comments)
- OpenCL backend with Qualcomm Adreno GPUs load time is too long (#14337, commented on Jun 26, 2025; 0 new comments)
- Eval bug: Program crashes during long input inference when batch size is set to 16384 (#14325, commented on Jun 26, 2025; 0 new comments)
- Error while converting peft finetuned merged model to gguf (#12494, commented on Jun 26, 2025; 0 new comments)
- Feature Request: Add support of convert.py for model Qwen2.5-Omni-7B (#12641, commented on Jun 26, 2025; 0 new comments)
- Compile bug: Vulkan shaders build fails due to missing vulkan-shaders directory during ExternalProject_Add configure step (#13753, commented on Jun 26, 2025; 0 new comments)
- Compile bug: Vulkan Build Fails in Termux/Proot Due to Missing Cooperative Matrix Shader Variables (#13801, commented on Jun 26, 2025; 0 new comments)
- Eval bug: Output garbled on DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf from unsloth using musa backend with VMM off (#13788, commented on Jun 26, 2025; 0 new comments)
- ERROR:hf-to-gguf:Model Qwen2_5_VLModel is not supported (#13802, commented on Jun 26, 2025; 0 new comments)
- ERROR:hf-to-gguf:Model MllamaForConditionalGeneration is not supported (#13805, commented on Jun 26, 2025; 0 new comments)
- Misc. bug: ROCm images cannot be found (#11913, commented on Jun 26, 2025; 0 new comments)
- Feature Request: (webui) do not throw away message if there is error in stream (#13709, commented on Jun 25, 2025; 0 new comments)
- Feature Request: Add support for moonshotai/Kimi-VL-A3B-Instruct (#14318, commented on Jun 25, 2025; 0 new comments)
- Eval bug: [CANN] AutoDL Ascend 910B instance running DeepSeek-r1 32B_Q8 error (#14291, commented on Jun 25, 2025; 0 new comments)
- something with llama_server? slow vs llama_cli (#13560, commented on Jun 25, 2025; 0 new comments)
- Misc. bug: RUNPATH properties are not properly set (#13740, commented on Jun 25, 2025; 0 new comments)
- Eval bug: terminate called after throwing an instance of 'std::runtime_error' what(): Unexpected empty grammar stack after accepting piece: [control_36] (#13690, commented on Jun 25, 2025; 0 new comments)
- Misc. bug: llama-server assistant prefill only works when message content is a string (not a list of objects) (#14353, commented on Jun 24, 2025; 0 new comments)
- Eval bug: Inconsistent Embedding Similarity between llama-server and LlamaCppEmbeddings for BGE-M3 Model (#14280, commented on Jun 24, 2025; 0 new comments)
- Eval bug: Command-A generates a single repeating token when using split mode row on P40 (#14228, commented on Jun 24, 2025; 0 new comments)
- LoRA training example (#13485, commented on Jun 29, 2025; 0 new comments)
- Automatic optimization of runtime parameters such as -ngl given memory constraints (#13860, commented on Jun 29, 2025; 0 new comments)
- Feature Request: Make the `/completion` endpoint in `llama-server` work with multimodal models (#13872, commented on Jun 29, 2025; 0 new comments)
- Feature Request: Multimodal: llama-server support for Qwen2.5-VL chat template type: list of image paths (type: "video") (#13905, commented on Jun 29, 2025; 0 new comments)
- Misc. bug: Server/Chat parallel tool calling not working (#14101, commented on Jun 28, 2025; 0 new comments)
- changelog : `llama-server` REST API (#9291, commented on Jun 28, 2025; 0 new comments)
- Eval bug: llama-mtmd-cli doesn't support system prompts (#13454, commented on Jun 28, 2025; 0 new comments)
- Feature Request: video support in mtmd-cli / server (#13754, commented on Jun 28, 2025; 0 new comments)
- Feature Request: Set default of --numa to distribute (#13850, commented on Jun 28, 2025; 0 new comments)
- Eval bug: Embeddings Always returned as non (#13854, commented on Jun 28, 2025; 0 new comments)
- Feature Request: Optimize for Nvidia Jetson Series' truly Unified Memory Architecture (#13856, commented on Jun 28, 2025; 0 new comments)
- Compile bug: gcc-12: error: unrecognized command-line option ‘-compress-mode=size’ (#14260, commented on Jun 27, 2025; 0 new comments)
- Feature Request: (webui) add import / export function for ALL conversations (#11718, commented on Jun 27, 2025; 0 new comments)
- Feature Request: Multi-model cli tools: Add a possibility to specify an image in conversation mode, plus tab auto-completion for paths (#12983, commented on Jun 27, 2025; 0 new comments)
- Feature Request: add per-request "reasoning" options in llama-server (#13272, commented on Jun 27, 2025; 0 new comments)
- webui: First user prompt sometimes disappears after sending (#13622, commented on Jun 27, 2025; 0 new comments)
- Misc. bug: llama-cli.exe stopped working on Windows Server 10 (#13767, commented on Jun 27, 2025; 0 new comments)
- Eval bug: seed seems to be locked to a single value 4294967295 (#13823, commented on Jun 27, 2025; 0 new comments)
- mtmd: Any plan for mtmd to support video input and audio output? (#14295, commented on Jun 26, 2025; 0 new comments)