
kv-cache : add SWA support #13194


Merged
merged 16 commits into master from gg/swa on May 20, 2025

Conversation

ggerganov
Member

@ggerganov ggerganov commented Apr 29, 2025

Overview

Add class llama_kv_cache_unified_iswa for interleaved SWA attention support.

The implementation internally utilizes 2 instances of the existing llama_kv_cache_unified - one for the non-SWA and one for the SWA layers of the model. To achieve that, the llama_kv_cache_unified implementation is updated to be able to cache a subset of the model's layers (instead of always caching all layers, as is the case on master). The 2 internal caches behave in almost exactly the same way, with 2 main differences:

  • The SWA cache is much smaller
  • The SWA cache automatically "forgets/prunes" old tokens upon successful commit (i.e. successful batch decode)

The size of the SWA cache is computed as:

PAD(n_swa*n_seq_max + n_batch)

This way we can store the cache data for the last n_swa tokens for all sequences and we also have room to evaluate a new batch of tokens with size up to n_batch.
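
A rough illustration of that sizing (the pad_to helper, the padding value, and the example numbers below are assumptions for this sketch, not the actual llama.cpp code):

    #include <cstdint>
    #include <cstdio>

    // hypothetical padding helper - the real PAD() rounds up to a backend-dependent multiple
    static uint32_t pad_to(uint32_t n, uint32_t pad) {
        return ((n + pad - 1) / pad) * pad;
    }

    int main() {
        const uint32_t n_swa     = 1024; // sliding window of the model
        const uint32_t n_seq_max = 4;    // maximum number of parallel sequences
        const uint32_t n_batch   = 2048; // logical batch size
        const uint32_t pad       = 256;  // assumed padding granularity

        // room for the last n_swa tokens of every sequence plus one new batch
        const uint32_t n_cells_swa = pad_to(n_swa*n_seq_max + n_batch, pad);

        printf("SWA cache cells: %u\n", n_cells_swa); // 6144 with the values above

        return 0;
    }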

Note that advanced cache operations such as removing tokens or shifting their positions are not possible when using the SWA cache, because token information is lost when the window slides. For such cases, we can "fall back" to the old implementation by expanding the SWA cache size to the full context and disabling the SWA token pruning. This of course leads to more memory usage. See the swa_full flag for more info.

The new llama_kv_cache_unified_iswa can be used for non-SWA models with n_swa = n_ctx_train.
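
A minimal structural sketch of the two-cache arrangement described above, using invented type and member names (the real llama_kv_cache_unified_iswa interface is richer and differs in detail):

    #include <cstdint>

    // stand-in for the existing unified cache; the real class manages the actual K/V tensors
    struct unified_cache_sketch {
        uint32_t n_cells = 0;

        void commit()                      { /* finalize the last decoded batch */ }
        void prune_swa(uint32_t /*n_swa*/) { /* drop cells with pos < pos_max(seq) - n_swa */ }
    };

    // iSWA wrapper: a full-size cache for the non-SWA layers, a small cache for the SWA layers
    struct iswa_cache_sketch {
        unified_cache_sketch kv_base; // non-SWA layers, n_cells = n_ctx
        unified_cache_sketch kv_swa;  // SWA layers,     n_cells = PAD(n_swa*n_seq_max + n_batch)

        uint32_t n_swa = 0;

        // on a successful batch decode both caches commit; only the SWA cache
        // additionally "forgets" tokens that slid out of the window
        void commit() {
            kv_base.commit();
            kv_swa.commit();
            kv_swa.prune_swa(n_swa);
        }
    };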


Main changes

  • Move KV cache store and view logic from llama-graph to llama-kv-cache
  • Move KV cache mask creation logic from llama-graph to llama-kv-cache
  • The inputs to build_attn_mha() are now not permuted
  • The QKV self-attention code is now more harmonious:
      const llama_kv_cache_unified * kv_self = static_cast<const llama_kv_cache_unified *>(memory);
    
      // store to KV cache
      {
          ggml_build_forward_expand(gf, kv_self->cpy_k(ctx0, k_cur, il));
          ggml_build_forward_expand(gf, kv_self->cpy_v(ctx0, v_cur, il));
      }
    
      const auto & kq_mask = inp->get_kq_mask();
    
      ggml_tensor * q = q_cur;
      ggml_tensor * k = kv_self->get_k(ctx0, il);
      ggml_tensor * v = kv_self->get_v(ctx0, il);
    
      ggml_tensor * cur = build_attn_mha(gf, q, k, v, kq_b, kq_mask, v_mla, kq_scale);
      cb(cur, "kqv_out", il);
  • Add enum hparams.swa_type to support chunked and non-chunked SWA (remove hparams.n_attn_chunk)
  • Add class llama_kv_cache_unified_iswa - new iSWA cache that internally utilizes 2 standard llama_kv_cache_unified instances
  • Make the llama_kv_cache_unified implementation more private and polish the interface
  • Move the Llama 4 build function to a new llm_build_llama_iswa()
  • llama-server now respects llama_kv_self_can_shift(ctx)
  • llama_decode now attempts a defrag if it fails to fit the input batch in the cache
  • llama_decode now correctly restores the cache state in all cases
  • Examples can fall back to the full-size SWA cache with --swa-full

API changes

  • Update llama_context_params - add bool swa_full

TODO

  • Cut-off old SWA tokens in llama_kv_cache_unified_iswa::commit()
  • Pass n_seq_max and n_batch to the KV cache and utilize it to determine SWA cache size
  • Allow KV shift when SWA window size is big enough
  • Add limits to batch size based on SWA window
  • llama-server check for llama_kv_self_can_shift
  • Add context parameter for switching between small and large SWA cache (kv-cache : add SWA support #13194 (comment))

Testing

Any help with testing the following scenarios and reporting the results is highly appreciated:

  • Llama 4
  • Phi 3
  • Gemma 2
  • Gemma 3
  • Cohere 2
  • Multi-user
  • Context shift
  • Context reuse
  • Speculative decoding?

Next PRs

  • Split KV cache implementations in separate source files
  • Remove llama_kv_cache_view API (not useful, can be replaced with internal debugging functions)
  • Add struct kv_cells and simplify logic with modifying the cells
  • Refactor the llama_kv_cache logic to allow SWA cache with size n_swa + n_ubatch
  • Set defrag threshold to 0.0 by default
  • llama_decode distinguish return code when we are sure that even after defrag there is no space available
  • Update experimental status of llama_context_params
  • Avoid llama_kv_cache::set_full()
  • Rework llama_kv_cache to not maintain the batching state (kv-cache : add SWA support #13194 (review))
  • Consider template <bool SWA> llm_build_llama()

Outdated (original description):

This is still very WIP - the goal is to redesign the unified KV cache to properly support layers with sliding-window attention (SWA) in order to reduce the memory usage for models such as Gemma3.

However, while working on this, I realized that enabling this option would prevent context caching, which IMO is a pretty big deal. So I am wondering if I am missing something.

The reason we cannot do context caching with SWA enabled is that when the window slides, we "forget" the old KV data and there is no way to recover it without recomputing it. This means no prefix cache in llama-server (ok, just last-prefix caching works), no context shift, no context reuse, etc. So I am having some doubts if this is really worth supporting.

Any thoughts?

@slaren
Member

slaren commented Apr 29, 2025

It's not very clear to me how to handle SWA with a unified cache where there may be multiple sequences, and it is not always obvious what tokens can be dropped from the cache. However I think it is definitely worth it for the single user case, which after all is the main use case of llama.cpp.

@ngxson
Collaborator

ngxson commented Apr 29, 2025

However, while working on this, I realized that enabling this option would prevent context caching, which IMO is a pretty big deal. So I am wondering if I am missing something.

Yes, this is what I've been thinking about for months now. There is no better solution than to disable context caching in this case.

An alternative solution is to allow the user to choose one of the two: either a proper SWA cache (good for memory) or a full-size allocation (good for reusing the cache).

So I am having some doubts if this is really worth supporting.

I'm feeling 50/50 here. One of the biggest use cases would be processing a large and diverse set of documents locally. In this case, the user may never reuse the cache because each new request is a new document.

@ggerganov
Member Author

It's not very clear to me how to handle SWA with a unified cache where there may be multiple sequences, and it is not always obvious what tokens can be dropped from the cache. However I think it is definitely worth it for the single user case, which after all is the main use case of llama.cpp.

The way I am approaching it is to have the "KV cells" information maintained separately for the non-SWA and SWA layers. This way, upon each KV cache commit (see #12799), we can do a pass over the SWA cells and automatically remove those that have position pos < pos_max(seq_id) - n_swa. Note that such tokens are only pruned from the SWA cells, while they remain in the non-SWA cells. When constructing the KQ mask for the graph, we use the non-SWA cells to construct the kq_mask and the SWA cells to construct the kq_mask_swa.

The rest of the logic is the same - it just operates on both sets of cells. For example, find_slot searches in both the non-SWA and SWA cells.
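
A self-contained sketch of that pruning pass, assuming a simplified cell representation (the real implementation works on the cells of the internal llama_kv_cache_unified instances):

    #include <algorithm>
    #include <cstdint>
    #include <map>
    #include <vector>

    struct kv_cell_sketch {
        int32_t seq_id;
        int32_t pos;
    };

    // remove SWA cells whose position has slid out of the attention window;
    // the non-SWA cells are not touched, so they still cover the full context
    void prune_swa_cells(std::vector<kv_cell_sketch> & swa_cells, int32_t n_swa) {
        // per-sequence maximum position currently stored in the SWA cells
        std::map<int32_t, int32_t> pos_max;
        for (const auto & c : swa_cells) {
            auto it = pos_max.find(c.seq_id);
            pos_max[c.seq_id] = (it == pos_max.end()) ? c.pos : std::max(it->second, c.pos);
        }

        // prune condition from the comment above: pos < pos_max(seq_id) - n_swa
        swa_cells.erase(std::remove_if(swa_cells.begin(), swa_cells.end(),
            [&](const kv_cell_sketch & c) {
                return c.pos < pos_max[c.seq_id] - n_swa;
            }), swa_cells.end());
    }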

@JohannesGaessler
Collaborator

My experience with the Gemma models in the context of Elo HeLLM has been that they required a disproportionate amount of computational resources to run benchmarks. The reason is that I was able to fit comparatively fewer parallel slots on 1 or 2 GPUs and my throughput was lower as a consequence. At least for my use case I value low memory usage for the context more than I value prompt caching because I have O(10000) short prompts and I'm bottlenecked mostly by generation throughput.

@ggerganov
Member Author

Continuing to think about the logic for when to discard tokens from the cache - it's indeed tricky and not very clear how to do. For example, when doing speculative decoding, we can submit a draft batch with D tokens to the target model. If we apply the pruning logic from my previous comment strictly, this would cause us to "forget" the D-1 oldest tokens in the SWA layers, which would be problematic if the draft gets rejected. This makes me think that we should probably have some "extra room" in the SWA cache - for example n_swa + 2*n_batch. And the prune logic should be something like: pos < pos_max(seq_id) - n_swa - n_batch.
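
In code form, the relaxed rule only changes the pruning threshold (a sketch; pos_max_seq and n_batch are the quantities discussed above, not actual llama.cpp symbols):

    #include <cstdint>

    // keep roughly one extra batch of old tokens so that a rejected draft
    // batch cannot "forget" data that is still needed
    bool should_prune(int32_t pos, int32_t pos_max_seq, int32_t n_swa, int32_t n_batch) {
        return pos < pos_max_seq - n_swa - n_batch;
    }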

@ggerganov ggerganov force-pushed the gg/llama-kv-cache-v6 branch from e37f112 to 7e4b545 Compare April 30, 2025 07:22
@ymcki
Contributor

ymcki commented Apr 30, 2025

It's not very clear to me how to handle SWA with a unified cache where there may be multiple sequences, and it is not always obvious what tokens can be dropped from the cache. However I think it is definitely worth it for the single user case, which after all is the main use case of llama.cpp.

I second slaren's opinion. As far as I know, vllm also doesn't support iSWA, while hf transformers and ollama do. vllm is geared toward the multi-user server use case; I suppose that's why they don't support it.

Ideally, it should be implemented as a switch to let the user choose which one to use. By default, iSWA should be on for llama-cli but off for llama-server.

@ngxson
Collaborator

ngxson commented Apr 30, 2025

This makes me think that we should probably have some "extra room" in the SWA cache - for example n_swa + 2*n_batch. And the prune logic should be something like: pos < pos_max(seq_id) - n_swa - n_batch.

Yes, I was thinking about this too. It can be a bit complicated to manage this case, but it's totally possible.

We can let the user specify how many tokens are allocated in the sliding layers. For example, given n_swa=512, if llama_context is created with n_ctx=4096 and n_ctx_swa=1024, this will allow the user to roll back to n_past - (1024 - 512).

We can further let n_ctx_swa = n_ctx * scale by default to make it transparent to the end user, with scale=0.5 by default for example. If scale=-1, then n_ctx_swa=n_swa.

And finally, we may need to add an API to return the furthest n_past that the user can roll back to, maybe something like llama_kv_self_get_minimum_pos?
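
A sketch of what such a helper might compute (llama_kv_self_get_minimum_pos is only the name suggested above, not an existing API at this point; the data layout here is invented):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // given the positions still held in the SWA cells of one sequence, return the
    // earliest position the user could roll back to without losing cached data
    int32_t minimum_rollback_pos(const std::vector<int32_t> & swa_positions) {
        if (swa_positions.empty()) {
            return -1; // nothing cached for this sequence
        }
        return *std::min_element(swa_positions.begin(), swa_positions.end());
    }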

@isaac-mcfadyen
Contributor

isaac-mcfadyen commented Apr 30, 2025

I'd +1 the ability to allow the user to switch.

Some use cases benefit greatly from prefix caching (for example: Metal systems with 48GB of RAM/VRAM, where pp is much slower than non-Metal pp and we have plenty of VRAM anyway), so allowing the user to choose would be optimal.

@ExtReMLapin
Contributor

It's not very clear to me how to handle SWA with a unified cache where there may be multiple sequences, and it is not always obvious what tokens can be dropped from the cache. However I think it is definitely worth it for the single user case, which after all is the main use case of llama.cpp.

Is llama.cpp single-user mode the most used case because that's what the user base prefers, or is it like that because the server performance goes down a lot with more than 3 users? (#10860)

We are really thankful for all the work you main contributors do on this project, but please do not fall into this "self-fulfilling prophecy" trap.

@aviallon
Contributor

aviallon commented May 1, 2025

I personally use llama.cpp for server use (with multiple users).
I wonder if we could do something hybrid between iSWA and what is currently done.
I wonder if partial KV cache offload could work, with iSWA on the accelerator and a slower cache in RAM.

@ggerganov ggerganov force-pushed the gg/llama-kv-cache-v6 branch 2 times, most recently from 58115a2 to 7e79a42 Compare May 2, 2025 13:02
Base automatically changed from gg/llama-kv-cache-v6 to master May 2, 2025 14:48
@Dampfinchen

According to the Gemma 3 paper, interleaved Sliding Window Attention reduces KV cache memory usage to roughly 1/5, so the model would be much easier to run - right now its KV cache is much heavier than that of comparable models.

If the drawback is the absence of prompt caching, then it would indeed make sense to give the option to the user and let them decide on a per-use-case basis. I think for cases where you use RAG/vector DBs it would prove very useful, as prompt caching does not work when the beginning of the context changes anyway. I would personally agree with Johannes here: faster token generation thanks to SWA would be more useful for me as well, since I'm using a vector DB.

So for short-prompt/RAG use cases it would make a lot of sense. For simple chat use cases without any RAG, prompt caching would probably make it faster overall compared to SWA with no prompt cache. Overall, I think having the option would be a great addition to llama.cpp.

If it helps, Ollama implemented iSWA support for Gemma 3. Since the project is pretty similar to llama.cpp, perhaps it's useful for getting a rough idea of how to implement it (although Ollama is written in a different language): https://github.com/ollama/ollama/blob/2fec73eef6e9482f606f185ebb2ae4f75ad1a37c/model/models/gemma3/model_text.go#L190

I've been thinking, does Ollama support prompt caching? Since Gemma 3 SWA is supported in Ollama, how did they handle it?

@ggerganov ggerganov force-pushed the gg/swa branch 3 times, most recently from 1c69466 to 1e10743 Compare May 9, 2025 12:15
@LostRuins
Collaborator

Some people recently mentioned concerns with this PR - I think caching is quite important for the subset of users who don't have GPUs and run purely on CPU.

They are fine spending minutes or more ingesting a large initial prompt which they then reuse for many future turns - generation speed itself is usable, but the inability to cache would be crippling for such users.

@ggerganov
Member Author

Both the old cache (i.e. more memory usage, but with advanced caching supported) and the new cache (less memory, with just last-prefix caching) will be supported. Still figuring out the implementation details - this will likely be supported via a flag or a parameter.

@ggerganov
Member Author

Thanks for all the feedback in this discussion. This branch should be ready for testing - I've listed some important use cases that need to be exercised. If something does not work, please let me know - at the moment I've done very little testing, so there could be some issues remaining.

I will soon write up a detailed summary of the changes and the approach taken. And after that will add some comments to the code and open the PR for review.

Regarding the parameter for controlling the size of the SWA cache - for now I haven't introduced it because some initial tests show that Gemma 3 remains coherent even when it "forgets" the local SWA cache - likely thanks to the data in the non-SWA cache. So I am thinking about giving this approach a try because it keeps the UX simple (i.e. we won't have to add a new parameter and handle the use cases where context editing is not possible). If we determine that this breaks some important use cases, we can add the parameter - the libllama change is simple and the behavior would basically fall back to what currently happens on master.

@ExtReMLapin
Contributor

ExtReMLapin commented May 11, 2025

For people who have the bandwidth to test models: FYI, the Cohere 2 arch includes R7B, which is much smaller than Command-A.

@andportnoy

for now I haven't introduced it because some initial tests show that Gemma 3 remains coherent even when it "forgets" the local SWA cache

Does this mean in the current implementation the model isn't executed correctly?

@andportnoy

FWIW, Gemma 3 worked better for me on main with Q8 cache quantization than on this branch + unquantized kv cache.

@ggerganov
Member Author

ggerganov commented May 11, 2025

@andportnoy It's evaluated correctly, as long as you don't use context shift, cache reuse or branching from old states. Do you do any of that in your tests? Can you provide a repro?

Edit: Also don't change 2 things at the same time when testing. Use the same KV cache type, so we can rule out differences that are not relevant to the changes in this branch.

@ggerganov
Member Author

Pushed tentative proposal:

  • Print warnings when we detect partial SWA contexts
  • Utilize the llama_kv_self_seq_pos_min() to detect partial SWA contexts and force recompute in the server logic

Now the computation should always be exact when using llama-server. If an application fails to do the check for partial SWA contexts, it will get warning spam in the logs.

The idea is for this to be a temporary solution until we are able to automatically reprocess what is necessary.

@slaren
Member

slaren commented May 17, 2025

At the moment it seems that there is no way to disable the SWA cache. It might be good now to add an option to disable SWA in llama_context_params, and leave it disabled by default to avoid breaking other applications until automatic re-computation is implemented.

@ggerganov
Member Author

ggerganov commented May 18, 2025

I added bool llama_context_params::swa_full; - when enabled (default value) the SWA cache will be created with the full context size, just like we do on master. In this case, the iSWA KV cache will not prune the old tokens on commit and this way we will have all the necessary data always precomputed.

When the flag is false, we create the small SWA cache. Even now, third-party apps have a way to fully work with this, similar to how llama-server is implemented. For now, they need to be careful about when the cache data has been lost. One way to do it is like this:

    if (slot.n_past > 0 && slot.n_past < (int) slot.cache_tokens.size()) {
        if (llama_kv_self_seq_pos_min(ctx, slot.id) > 0) {
            SLT_WRN(slot, "forcing full prompt re-processing due to lack of cache data (likely due to SWA, see %s)\n",
                    "https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055");
            slot.n_past = 0;
        }
    }

And in the future we will try to do this automatically. In any case, if the user app does something wrong in this mode, they will get many warnings in the logs:
    if (n_attended < std::min<int>(n_swa, pmin)) {
        LLAMA_LOG_WARN("%s: partial SWA cache detected - possible loss of information, pmin = %d, n_attended = %d, n_swa = %d\n", __func__, pmin, n_attended, n_swa);
    }

llama-bench always uses the small SWA cache (i.e. swa_full = false).

In common we set swa_full = false by default, but it can be changed with the --swa-full CLI arg.

Member

@slaren slaren left a comment

Something that doesn't look right to me in the current design of the KV cache is that the result of find_slot or set_full, and the data for commit and restore, is part of the state of the KV cache object itself. The result of find_slot or set_full could be returned in an object, which then could be committed or not, but these functions shouldn't change the state of the KV cache. I think that would make the code easier to understand since too much state in an object can make it very hard to keep track of what's happening.
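
A minimal sketch of the kind of stateless design being suggested here, with invented types (this is not the actual llama.cpp interface):

    #include <cstdint>
    #include <optional>
    #include <vector>

    struct kv_slot {
        uint32_t head;     // first cell of the found slot
        uint32_t n_tokens; // number of cells it would occupy
    };

    struct kv_cache_sketch {
        std::vector<bool> used; // occupancy of each cell

        // pure search: returns a slot description (or nothing) without mutating the cache
        std::optional<kv_slot> find_slot(uint32_t n_tokens) const {
            uint32_t run = 0;
            for (uint32_t i = 0; i < used.size(); ++i) {
                run = used[i] ? 0 : run + 1;
                if (run == n_tokens) {
                    return kv_slot{i + 1 - n_tokens, n_tokens};
                }
            }
            return std::nullopt;
        }

        // committing is an explicit, separate step that consumes the returned object
        void apply(const kv_slot & slot) {
            for (uint32_t i = 0; i < slot.n_tokens; ++i) {
                used[slot.head + i] = true;
            }
        }
    };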

Comment on lines 1293 to 1295
    if (wo_b) {
        //cb(cur, "kqv_wo", il);
    }
Member

I think this was already here before, but should be removed regardless.

    bufs.emplace_back(buf);
}

{
    const size_t memory_size_k = size_k_bytes();
    const size_t memory_size_v = size_v_bytes();

    LLAMA_LOG_INFO("%s: KV self size = %7.2f MiB, K (%s): %7.2f MiB, V (%s): %7.2f MiB\n", __func__,
            (float)(memory_size_k + memory_size_v) / (1024.0f * 1024.0f),
    LLAMA_LOG_INFO("%s: size = %7.2f (%6d cells, %3d layers) MiB, K (%s): %7.2f MiB, V (%s): %7.2f MiB\n", __func__,
Member

Suggested change:

-   LLAMA_LOG_INFO("%s: size = %7.2f (%6d cells, %3d layers) MiB, K (%s): %7.2f MiB, V (%s): %7.2f MiB\n", __func__,
+   LLAMA_LOG_INFO("%s: size = %7.2f MiB (%6d cells, %3d layers), K (%s): %7.2f MiB, V (%s): %7.2f MiB\n", __func__,

@ggerganov ggerganov merged commit e298d2f into master May 20, 2025
50 of 53 checks passed
@ggerganov ggerganov deleted the gg/swa branch May 20, 2025 05:05
@ggerganov
Member Author

Something that doesn't look right to me in the current design of the KV cache is that the result of find_slot or set_full, and the data for commit and restore, is part of the state of the KV cache object itself.

Agree. I will continue some refactoring around the KV cache (sketched a list of changes in the OP above) and will submit a set of PRs to improve the implementation.

@ExtReMLapin
Contributor

From my understanding, using full SWA cache should use less memory, right?

Well, in my tests it doesn't.

  • CUDA_VISIBLE_DEVICES=1 ./llama-cli -ngl 45464556 -m /opt/IdExtend/models/llm/c4ai-command-r7b-12-2024-Q6_K_L.gguf --temp 0.3 -no-cnv -f ../../../prompt.txt -c 31000 -fa --swa-full uses 11459MiB
llama_perf_sampler_print:    sampling time =      17,73 ms / 10947 runs   (    0,00 ms per token, 617393,27 tokens per second)
llama_perf_context_print:        load time =    1340,00 ms
llama_perf_context_print: prompt eval time =    1149,49 ms / 10715 tokens (    0,11 ms per token,  9321,55 tokens per second)
llama_perf_context_print:        eval time =    2502,93 ms /   231 runs   (   10,84 ms per token,    92,29 tokens per second)
  • CUDA_VISIBLE_DEVICES=1 ./llama-cli -ngl 45464556 -m /opt/IdExtend/models/llm/c4ai-command-r7b-12-2024-Q6_K_L.gguf --temp 0.3 -no-cnv -f ../../../prompt.txt -c 31000 -fa uses 9115MiB
llama_perf_sampler_print:    sampling time =       6,37 ms / 10819 runs   (    0,00 ms per token, 1698430,14 tokens per second)
llama_perf_context_print:        load time =    1282,71 ms
llama_perf_context_print: prompt eval time =    1271,69 ms / 10715 tokens (    0,12 ms per token,  8425,78 tokens per second)
llama_perf_context_print:        eval time =    1021,83 ms /   103 runs   (    9,92 ms per token,   100,80 tokens per second)
llama_perf_context_print:       total time =    2362,03 ms / 10818 tokens

https://huggingface.co/bartowski/c4ai-command-r7b-12-2024-GGUF

@ggerganov
Member Author

Could you clarify? From the numbers you posted, the SWA cache uses 2GB less than the non-SWA (i.e. --swa-full). What is your expectation?

@Dampfinchen

Dampfinchen commented May 20, 2025

From my understanding, using full SWA cache should use less memory, right ?

Well on my tests it doesn't.


You've got it the other way around. Without the CLI flag, SWA is enabled by default, so if you pass the CLI flag --swa-full it disables iSWA, leading to higher memory usage.

I was pretty confused about this too. Contrary to slaren's good suggestion, SWA is enabled by default, which disables KV cache shifting. I don't think this is a good idea.

The default should be the old way with context shifting enabled, while a CLI flag like --SWA should enable iSWA for experienced users who know they would have to sacrifice context shifting for it. That would be much less confusing and more practical.

@ngxson
Collaborator

ngxson commented May 20, 2025

I think it's just that the naming is a bit confusing for non-technical people. --swa-full means "allocate full memory for SWA" but it could be misinterpreted as "use SWA for all layers". I think we can add a simpler alias like -no-swa

@ExtReMLapin
Contributor

Oh, my bad, thanks for the quick answers!

@Dampfinchen

Dampfinchen commented May 20, 2025

Gave it a quick test spin, memory used for KV Cache decreased from 3 GB to 960 MB on Gemma 3 12B with 10K context. Awesome job!! Gemma is finally an efficient model.

@RodriMora

Working great, 12K context:

Before: 2914.00 MiB
After: 624.00 MiB

@chigkim

chigkim commented May 20, 2025

For non-technical users, when would you want to use --swa-full and the old cache?
Could someone provide an example use case for why someone might want to go back?
Thanks!

@hjc4869
Contributor

hjc4869 commented May 20, 2025

There seem to be some issues running Llama 4 Maverick after this change. The server crashes at the location below:

Thread 1 "llama-server" received signal SIGSEGV, Segmentation fault.
ggml_can_mul_mat (t0=0x0, t1=0x55556ba0bb00) at /home/david/Development/llama.cpp/ggml/src/ggml.c:2728
2728        return (t0->ne[0]           == t1->ne[0])  &&

Full back trace

(gdb) bt
#0  ggml_can_mul_mat (t0=0x0, t1=0x55556ba0bb00) at /home/david/Development/llama.cpp/ggml/src/ggml.c:2728
#1  ggml_mul_mat (ctx=0x555568b84db0, a=0x0, b=0x55556ba0bb00) at /home/david/Development/llama.cpp/ggml/src/ggml.c:2737
#2  0x00007ffff7de7fd5 in llm_graph_context::build_lora_mm (this=0x555569275780, w=0x0, cur=0x55556ba0bb00)
    at /home/david/Development/llama.cpp/src/llama-graph.cpp:476
#3  0x00007ffff7de8a9c in llm_graph_context::build_moe_ffn (this=0x555568b84db0, cur=0x55556ba0bb00, gate_inp=0x0, up_exps=0x0, gate_exps=0x0, 
    down_exps=0x0, exp_probs_b=0x0, n_expert=128, n_expert_used=1, type_op=LLM_FFN_SILU, norm_w=<optimized out>, scale_w=<optimized out>, 
    w_scale=<error reading variable: Value cannot be represented as integer of 8 bytes.>, gating_op=LLAMA_EXPERT_GATING_FUNC_TYPE_SIGMOID, il=0)
    at /home/david/Development/llama.cpp/src/llama-graph.cpp:709
#4  0x00007ffff7e3769e in llm_build_llama_iswa::llm_build_llama_iswa (this=0x555569275780, model=..., params=..., gf=0x55556b804a30)
    at /home/david/Development/llama.cpp/src/llama-model.cpp:4813
#5  0x00007ffff7e31883 in std::make_unique<llm_build_llama_iswa, llama_model const&, llm_graph_params const&, ggml_cgraph*&> (
    __args=@0x7fffffff8d28: 0x55556b804a30, __args=@0x7fffffff8d28: 0x55556b804a30, __args=@0x7fffffff8d28: 0x55556b804a30)
    at /usr/lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/unique_ptr.h:1077
#6  0x00007ffff7e3031e in llama_model::build_graph (this=0x55555ab14a20, params=..., gf=0x55556b804a30, type=LLM_GRAPH_TYPE_DEFAULT)
    at /home/david/Development/llama.cpp/src/llama-model.cpp:13267
#7  0x00007ffff7dbf776 in llama_context::graph_build (this=this@entry=0x555569974ca0, ctx=<optimized out>, gf=0x55556ba0bb00, ubatch=..., 
    gtype=gtype@entry=LLM_GRAPH_TYPE_DEFAULT) at /home/david/Development/llama.cpp/src/llama-context.cpp:1240
#8  0x00007ffff7dbebbb in llama_context::llama_context (this=0x555569974ca0, model=..., params=...)
    at /home/david/Development/llama.cpp/src/llama-context.cpp:292
#9  0x00007ffff7dc5031 in llama_init_from_model (model=0x55555ab14a20, params=...) at /home/david/Development/llama.cpp/src/llama-context.cpp:2131
#10 0x00005555557a1422 in common_init_from_params (params=...) at /home/david/Development/llama.cpp/common/common.cpp:925
#11 0x0000555555611211 in server_context::load_model (this=this@entry=0x7fffffffc010, params=...)
    at /home/david/Development/llama.cpp/tools/server/server.cpp:1912
#12 0x00005555555e849d in main (argc=<optimized out>, argv=<optimized out>) at /home/david/Development/llama.cpp/tools/server/server.cpp:4820

Looks like llm_build_llama_iswa in llama-model.cpp can't handle layers where there are no experts, such as the first layer of Llama 4 Maverick. The code below passes null expert tensors into build_moe_ffn:

                ggml_tensor * moe_out = build_moe_ffn(ffn_inp_normed,
                        model.layers[il].ffn_gate_inp,
                        model.layers[il].ffn_up_exps,
                        model.layers[il].ffn_gate_exps,
                        model.layers[il].ffn_down_exps,
                        nullptr,
                        n_expert, n_expert_used,
                        LLM_FFN_SILU, false,
                        false, 0.0,
                        LLAMA_EXPERT_GATING_FUNC_TYPE_SIGMOID,
                        il);

Going back to the stack frame of llm_build_llama_iswa:

(gdb) print il
$1 = 0

@ggerganov
Member Author

@hjc4869 Could you propose a fix - it should be simple, but I don't have the model downloaded to do a test. Just see the logic from before this PR and apply it to the new llm_build_llama_iswa().

@stduhpf
Contributor

stduhpf commented May 20, 2025

I can confirm the issue with L4 Maverick. Even --swa-full isn't enough to work around it sadly.

@hjc4869
Contributor

hjc4869 commented May 20, 2025

Reverting some of the changes solved my issue, though I'm not sure if all these are necessary.

diff --git a/src/llama-model.cpp b/src/llama-model.cpp
index 057f1fc17..bc51602c1 100644
--- a/src/llama-model.cpp
+++ b/src/llama-model.cpp
@@ -4803,7 +4803,22 @@ struct llm_build_llama_iswa : public llm_graph_context {
             ggml_tensor * ffn_inp = ggml_add(ctx0, cur, inpSA);
             cb(ffn_inp, "ffn_inp", il);
 
-            {
+            // feed-forward network (non-MoE)
+            if (model.layers[il].ffn_gate_inp == nullptr) {
+                cur = build_norm(ffn_inp,
+                        model.layers[il].ffn_norm, NULL,
+                        LLM_NORM_RMS, il);
+                cb(cur, "ffn_norm", il);
+
+                cur = build_ffn(cur,
+                        model.layers[il].ffn_up,   model.layers[il].ffn_up_b,   NULL,
+                        model.layers[il].ffn_gate, model.layers[il].ffn_gate_b, NULL,
+                        model.layers[il].ffn_down, model.layers[il].ffn_down_b, NULL,
+                        NULL,
+                        LLM_FFN_SILU, LLM_FFN_PAR, il);
+                cb(cur, "ffn_out", il);
+
+            } else if (arch == LLM_ARCH_LLAMA4) {
                 // llama4 MoE
                 ggml_tensor * ffn_inp_normed = build_norm(ffn_inp,
                         model.layers[il].ffn_norm, NULL,
@@ -4833,6 +4848,25 @@ struct llm_build_llama_iswa : public llm_graph_context {
 
                 cur = ggml_add(ctx0, moe_out, shexp_out);
                 cb(cur, "ffn_moe_out_merged", il);
+            } else {
+                // MoE branch
+                cur = build_norm(ffn_inp,
+                        model.layers[il].ffn_norm, NULL,
+                        LLM_NORM_RMS, il);
+                cb(cur, "ffn_norm", il);
+
+                cur = build_moe_ffn(cur,
+                        model.layers[il].ffn_gate_inp,
+                        model.layers[il].ffn_up_exps,
+                        model.layers[il].ffn_gate_exps,
+                        model.layers[il].ffn_down_exps,
+                        nullptr,
+                        n_expert, n_expert_used,
+                        LLM_FFN_SILU, true,
+                        false, 0.0,
+                        LLAMA_EXPERT_GATING_FUNC_TYPE_SOFTMAX,
+                        il);
+                cb(cur, "ffn_moe_out", il);
             }
 
             cur = ggml_add(ctx0, cur, ffn_inp);

@ggerganov
Member Author

Can you confirm that #13663 fixes it?

@hjc4869
Contributor

hjc4869 commented May 20, 2025

Yes, I can confirm this one fixed my issue.

@rhvall

rhvall commented May 21, 2025

Simple question: will these changes come with sample code, like "passkey" or "llama-cli", that shows the improvements from using SWA and the changes in code (e.g. KV updates)?

Also, SWA stands for "Sliding Window Attention", right? Do you have a reference paper to learn more about it?
