vulkan : add GELU_ERF #14455
Merged
Conversation
0cc4m approved these changes on Jul 1, 2025
Thank you, works on all my hardware.
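For reference, GELU_ERF is the exact, erf-based GELU activation, as opposed to the tanh approximation that backends usually ship first. Below is a minimal C++ sketch of the elementwise math the new Vulkan shader is expected to compute, assuming the standard erf-based definition; it is illustrative only and not the shader source from this PR.

```cpp
#include <cmath>
#include <cstdio>

// Exact (erf-based) GELU:
//   gelu_erf(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
static float gelu_erf(float x) {
    const float kInvSqrt2 = 0.70710678118654752440f; // 1/sqrt(2)
    return 0.5f * x * (1.0f + std::erf(x * kInvSqrt2));
}

int main() {
    // Spot-check a few values; a backend test would compare the
    // Vulkan output against a CPU reference like this, elementwise.
    const float xs[] = {-2.0f, -0.5f, 0.0f, 0.5f, 2.0f};
    for (float x : xs) {
        std::printf("gelu_erf(%+.1f) = %+.6f\n", x, gelu_erf(x));
    }
    return 0;
}
```

Backend op additions in llama.cpp are typically validated with `test-backend-ops` against the CPU reference, which is the kind of cross-device check the reviewer's comment refers to.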
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request on Jul 1, 2025
* origin/master:
  - Add Vulkan images to docker.md (ggml-org#14472)
  - CANN: update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (ggml-org#14411)
  - vulkan: Split large mul_mat_id to fit in shared memory (ggml-org#14451)
  - add GELU_ERF (ggml-org#14455)
  - ggml : remove trailing whitespace (#0)
  - sync : ggml
  - ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285)
  - ggml-quants : rename best_mad to best_error (ggml/1283)
  - opencl : add GEGLU, REGLU, SWIGLU (ggml-org#14456) (see the gated-activation sketch after this list)
  - Add Conv2d for CPU (ggml-org#14388)
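Several of the referenced commits add gated activations (GEGLU, REGLU, SWIGLU) to other backends. As a hedged aside, these follow the common gated-linear-unit pattern in which one half of the projection gates the other; the sketch below uses the usual textbook definitions and is not taken from the ggml kernels.

```cpp
#include <cmath>
#include <cstdio>

// Generic gated-linear-unit pattern: act(gate) * up, applied elementwise.
// REGLU uses ReLU, GEGLU uses GELU, SWIGLU uses SiLU as the gate activation.
static float relu(float x) { return x > 0.0f ? x : 0.0f; }
static float gelu(float x) { return 0.5f * x * (1.0f + std::erf(x * 0.70710678f)); }
static float silu(float x) { return x / (1.0f + std::exp(-x)); }

static float reglu (float gate, float up) { return relu(gate) * up; }
static float geglu (float gate, float up) { return gelu(gate) * up; }
static float swiglu(float gate, float up) { return silu(gate) * up; }

int main() {
    const float gate = 0.5f, up = 2.0f;
    std::printf("reglu=%.4f geglu=%.4f swiglu=%.4f\n",
                reglu(gate, up), geglu(gate, up), swiglu(gate, up));
    return 0;
}
```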
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request on Jul 2, 2025
Minh141120 pushed a commit to menloresearch/llama.cpp that referenced this pull request on Jul 2, 2025
  (large squash commit syncing the fork with upstream llama.cpp master; among the upstream changes it carries are add GELU_ERF (ggml-org#14455), vulkan: Split large mul_mat_id to fit in shared memory (ggml-org#14451), and the GEGLU/REGLU/SWIGLU op additions across backends)
<[email protected]> Co-authored-by: Xuan-Son Nguyen <[email protected]> Co-authored-by: Aaron Teo <[email protected]> Co-authored-by: Gabe Goodhart <[email protected]> Co-authored-by: pqnet <[email protected]> Co-authored-by: bashayer hijji <[email protected]> Co-authored-by: Anton Mitkov <[email protected]> Co-authored-by: fanyang <[email protected]> Co-authored-by: aa956 <[email protected]> Co-authored-by: aa956 <[email protected]> Co-authored-…
Adds the missing GELU_ERF unary operator to the Vulkan backend.
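For context, GELU_ERF evaluates the exact GELU definition, 0.5 · x · (1 + erf(x / √2)), rather than the tanh-based approximation commonly used for the plain GELU op. The sketch below only illustrates what an element-wise Vulkan compute shader for this op has to compute; the buffer layout, push constant, workgroup size, and the Abramowitz–Stegun erf approximation are assumptions made for the example, not the shader added by this PR.

```glsl
#version 450

layout(local_size_x = 256, local_size_y = 1, local_size_z = 1) in;

// Assumed bindings for the example: one input buffer, one output buffer.
layout(binding = 0, std430) readonly  buffer Src { float src[]; };
layout(binding = 1, std430) writeonly buffer Dst { float dst[]; };

layout(push_constant) uniform PushConstants {
    uint n_elements;
} pc;

// Polynomial approximation of erf(x) (Abramowitz–Stegun 7.1.26),
// accurate to roughly 1.5e-7; a real shader may use a different approximation.
float erf_approx(float x) {
    const float a1 =  0.254829592;
    const float a2 = -0.284496736;
    const float a3 =  1.421413741;
    const float a4 = -1.453152027;
    const float a5 =  1.061405429;
    const float p  =  0.3275911;

    float s = sign(x);
    x = abs(x);
    float t = 1.0 / (1.0 + p * x);
    float y = 1.0 - ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t * exp(-x * x);
    return s * y;
}

void main() {
    const uint i = gl_GlobalInvocationID.x;
    if (i >= pc.n_elements) {
        return;
    }
    const float x = src[i];
    // exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
    dst[i] = 0.5 * x * (1.0 + erf_approx(x * 0.70710678118)); // 0.7071... = 1/sqrt(2)
}
```

As with other element-wise ops, a backend implementation like this is normally validated with `test-backend-ops`, which compares the Vulkan output against the CPU reference for the same graph.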