
llama : initial Mamba-2 support #9126


Merged: 44 commits merged into master on Jul 2, 2025

Conversation

compilade
Collaborator

@compilade compilade commented Aug 21, 2024

Follow-up from #8519 (comment). This should fix #7727 and fix #8519.

I've implemented the fully recurrent mode of Mamba-2, because it's very similar to Mamba-1, and also because it seems like the most appropriate mode for text generation.
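
For reference, the fully recurrent form meant here is, per head and loosely in the notation of the Mamba-2 paper (background only, not the exact ggml tensor layout): with head input $x_t \in \mathbb{R}^{d_{\text{head}}}$, a per-head scalar decay $A$, step size $\Delta_t$, and per-group projections $B_t, C_t \in \mathbb{R}^{d_{\text{state}}}$,

$$ H_t = e^{\Delta_t A}\, H_{t-1} + \Delta_t\, x_t B_t^{\top}, \qquad y_t = H_t\, C_t + D \odot x_t $$

where $H_t \in \mathbb{R}^{d_{\text{head}} \times d_{\text{state}}}$ is the state kept per sequence and updated one token at a time.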

This does not implement the sequentially semistructured matrix mode, because I'm not yet sure how the block decomposition would fit within the batch and ubatch framework of llama.cpp, and how the chunk size should be chosen. If the recurrent mode is faster at single-user auto-regressive text generation, then I'm not sure how to keep the graph node structure constant when using the most appropriate technique for the batch size.

If the sequentially semistructured matrix mode is eventually implemented, it should help with prompt processing speed for large prompts.

What to expect

(mostly taken from #8519 (comment))

The state in Mamba-2 is bigger than I thought; Mamba-Codestral-7B-v0.1 takes 263.5 MiB (in F32) per sequence (e.g. with -np 1), compared to 38 MiB (also in F32) for Falcon-Mamba-7B (which is based on Mamba-1). But that remains constant whatever the context size. Mamba-2 is easier to implement efficiently, so the bigger state does not really impede inference speed.
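
As a rough sanity check on that figure, here is a back-of-the-envelope calculation in Python, assuming the commonly published Mamba-Codestral-7B-v0.1 hyper-parameters (the values below are my assumption, not taken from this PR):

# Hedged sketch: per-sequence state size in F32, assuming n_layer=64,
# d_inner=8192, d_state=128, n_group=8, d_conv=4 for Mamba-Codestral-7B-v0.1.
n_layer, d_inner, d_state, n_group, d_conv = 64, 8192, 128, 8, 4

ssm_state = d_inner * d_state                                  # selective-scan state per layer
conv_state = (d_conv - 1) * (d_inner + 2 * n_group * d_state)  # rolled conv window over (x, B, C)
total_bytes = n_layer * (ssm_state + conv_state) * 4           # 4 bytes per F32 element

print(total_bytes / 1024**2)  # -> 263.5 (MiB per sequence)

With these numbers the selective-scan state dominates; the conv window adds only a few MiB per sequence.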

However, a big downside right now with recurrent models in llama.cpp is the lack of state rollback (which is implemented through state checkpoints in #7531, but needs to be re-adapted to #8526), so the prompt will be reprocessed a lot when using llama-server. I think using llama-cli in conversation mode does not have this problem (or maybe that's only true of the bare interactive mode with --in-prefix and --in-suffix; not sure).

This initial implementation is CPU-only, but it uses SIMD for the SSM scan, so even though the state is bigger than for Mamba-1 models, in my tests the speed of Mamba2-130M is similar to or better than Mamba-130M (but still not that fast compared to transformer-based models with an empty context), when both are run on CPU.

The speed of Mamba-2 models seems comparable to Transformer-based models when the latter have 2k to 4k tokens in their context.

Summary of changes

  • Add support for Mamba2ForCausalLM (including the official Mamba-2 models, and Mamba-Codestral-7B-v0.1)
    • Note that config.json needs to contain "architectures": ["Mamba2ForCausalLM"], for the convert script to properly detect the architecture.
  • View Mamba-1 as having d_inner (aka 2 * n_embd) heads of size 1.
    • This simplifies the handling of shapes in ggml_ssm_scan
  • ggml
    • Implement Mamba-2's selective state update in ggml_ssm_scan.
      • Re-using the same operator as Mamba-1, because it's pretty much the same operation (except for how ssm_a is broadcast); see the sketch after this list.
    • Fuse the operation with ssm_d into ggml_ssm_scan
      • Otherwise it would need to be transposed, because the dot-products are done head-wise.
    • Implement Mamba-2's SSM scan with GGML_SIMD.
      • This is possible because there is no element-wise expf in the state update unlike with Mamba-1.
    • Avoid state copies for the SSM state (both for Mamba-1 and Mamba-2) by passing state ids to ggml_ssm_scan.
      • Mamba-2 states are huge. Otherwise masking and copying took close to 10% of the CPU time according to perf.
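
To make the unified head-based view above concrete, here is a minimal NumPy sketch of one token step of the recurrence as described in this PR, for a single sequence and a single group; it only illustrates the shape/broadcast difference and is not the actual ggml_ssm_scan signature:

import numpy as np

def ssm_scan_ref(s, x, dt, A, B, C, D):
    """One token step for all heads of one sequence (single group).

    s  : (n_head, head_dim, d_state)  recurrent state, updated in place
    x  : (n_head, head_dim)           input after in-projection and conv
    dt : (n_head,)                    time step (already softplus-activated)
    A  : (n_head, d_state) for Mamba-1 (head_dim == 1), (n_head, 1) for Mamba-2
    B, C : (d_state,)                 input/output projections of the group
    D  : (n_head,)                    skip connection (fused here, like ssm_d)
    """
    # Mamba-2: exp of a per-head scalar; Mamba-1: element-wise exp over d_state.
    dA = np.exp(dt[:, None] * A)
    s *= dA[:, None, :]                            # decay the state
    s += (dt[:, None, None] * x[:, :, None]) * B   # + dt * outer(x, B) per head
    y = s @ C                                      # head-wise dot over d_state
    return y + D[:, None] * x                      # fused multiply by ssm_d

Viewed this way, the main difference between the two architectures inside the scan is the shape of ssm_a (and therefore where the exp is evaluated), which is what makes sharing a single operator practical.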

Other

Here's my favorite quote from Section 3.3 of https://arxiv.org/abs/2405.21060:

Furthermore—by a twist of fate—structured state space models and sequentially semiseparable matrices have the same acronyms, underscoring their equivalence! Conveniently we can use any of these acronyms SSM (state space model or semiseparable matrix), SSS (structured state space or sequentially semiseparable), or SS (state space or semiseparable) interchangeably to unambiguously refer to either concept.

TODO

  • Rebase onto master after merging llama : simplify Mamba with advanced batch splits #8526.
  • Avoid unnecessary moves of the state
  • Adapt the Metal kernels and the tests from ggml : add SSM Metal kernels #8546 to the updated ggml_ssm_scan
  • Remove the new GGML_MUL fast broadcast path because it's not used anymore to mask the states.
  • Maybe use a new metadata key instead of {arch}.ssm.time_step_rank for the number of heads of Mamba-2, because it's not really the rank of the time step (well, maybe kind of).
    • The meaning of the number of heads and the time-step rank is overlapping enough in Mamba-2 that I think this is fine.
  • Maybe not fuse the multiplication with ssm_d in ggml_ssm_scan?
  • Maybe split ggml_ssm_scan to separate the implementations for Mamba-1 and Mamba-2, although they do have a lot in common.
    • Seems like they can be distinguished easily enough at the time of kernel dispatch.

@compilade compilade marked this pull request as draft August 21, 2024 21:51
@github-actions github-actions bot added the python and ggml labels Aug 21, 2024
* ggml : improve ggml_mul speed when masking recurrent states
* ggml : make the ggml_mul fast broadcast path more consistently formatted
@compilade compilade changed the base branch from compilade/batch-splits to master August 21, 2024 22:02
@compilade compilade marked this pull request as ready for review August 21, 2024 22:02
@compilade compilade added the Review Complexity : Medium label Aug 21, 2024
@ngxson
Collaborator

ngxson commented Aug 22, 2024

Hey @compilade , thanks for implementing this!

I tried converting https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1 using convert_hf_to_gguf.py, but it gives an error:

    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'config.json'

Nevertheless, I successfully converted a Mamba-Codestral transformers-compatible model: https://huggingface.co/Molbap/code2 (this required commenting out the line raise NotImplementedError("BPE pre-tokenizer was not recognized - update get_vocab_base_pre()") in convert_hf_to_gguf.py)

To run the output model (remember to select the correct chat template, since the model does not come with one):

make llama-cli -j && ./llama-cli -m ../models/mcode-7.3B-Q8_0.gguf -cnv -p "You are a helpful assistant" --chat-template mistral -ngl 0

The result looks promising, but I have no idea why there are [UNK_BYTE_0xe29681...] markers. It seems like there is a problem with the space character:

<<SYS>>Youareahelpfulassistant<</SYS>>
> hi
[UNK_BYTE_0xe29681▁Hello]Hello![UNK_BYTE_0xe29681▁How]How[UNK_BYTE_0xe29681▁can]can[UNK_BYTE_0xe29681▁I]I[UNK_BYTE_0xe29681▁assist]assist[UNK_BYTE_0xe29681▁you]you[UNK_BYTE_0xe29681▁today]today?

Link to download GGUF: https://huggingface.co/ngxson/codestral-mamba-llamacpp-test/tree/main

@compilade
Collaborator Author

compilade commented Aug 22, 2024

Hey @compilade , thanks for implementing this!

I tried converting https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1 using convert_hf_to_gguf.py, but it gives an error:

    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'config.json'

@ngxson

The steps I took to convert Mamba-Codestral-7B-v0.1 are the following (a scripted sketch of the same steps is shown after the list):

  1. Rename consolidated.safetensors to model.safetensors
  2. Rename params.json to config.json
  3. Add the line "architectures": ["Mamba2ForCausalLM"], in config.json
  4. Rename tokenizer.model.v3 to tokenizer.model
  5. Use convert_hf_to_gguf.py as usual.
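
A scripted sketch of those manual steps (the directory name is illustrative, and the config edit just adds the "architectures" key mentioned above):

import json
from pathlib import Path

model_dir = Path("Mamba-Codestral-7B-v0.1")  # wherever the original repo was downloaded

# 1-2. rename the weights and hyper-parameter files to the names convert_hf_to_gguf.py expects
(model_dir / "consolidated.safetensors").rename(model_dir / "model.safetensors")
(model_dir / "params.json").rename(model_dir / "config.json")

# 3. declare the architecture so the convert script picks the Mamba-2 code path
config_path = model_dir / "config.json"
config = json.loads(config_path.read_text())
config["architectures"] = ["Mamba2ForCausalLM"]
config_path.write_text(json.dumps(config, indent=2))

# 4. use the SentencePiece tokenizer
(model_dir / "tokenizer.model.v3").rename(model_dir / "tokenizer.model")

# 5. then run: python convert_hf_to_gguf.py Mamba-Codestral-7B-v0.1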

I did not have tokenization problems in my tests. Maybe because I was using the original SentencePiece tokenizer instead of a BPE tokenizer.

That tokenizer.json in the transformers-compatible version seems to have problematic spaces. It uses the SentencePiece space escaping instead of the BPE one. Its normalizer seems to revert the escaping, but that's not handled in llama.cpp.
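
As a side note on the [UNK_BYTE_0xe29681...] fragments above: 0xe29681 is just the UTF-8 encoding of U+2581 "▁", the SentencePiece space marker, which is consistent with the spaces being the part that goes wrong. A quick check in Python:

# 0xe29681 decodes to the SentencePiece space marker U+2581
marker = bytes.fromhex("e29681").decode("utf-8")
print(marker, hex(ord(marker)))  # ▁ 0x2581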

There are probably still problems with the SentencePiece tokenizer too, like the lack of special tokens. Control tokens seem to be identified correctly; the only difference seems to be with the 20 [REFERENCE_DOC_{n}] tokens (where n is 0 to 19), which tokenizer.json identifies as non-special added tokens (mapped to USER_DEFINED in llama.cpp), while tokenizer.model identifies them as NORMAL tokens.

I think the SentencePiece tokenizer should be preferred for this model; it should be easier to handle without workarounds. I should change that in convert_hf_to_gguf.py. In the meantime, either don't include tokenizer.json or rename it to something else.

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.
@ngxson
Collaborator

ngxson commented Aug 23, 2024

Thanks for the guide! I've successfully converted the original repository to GGUF by following your steps.

For the transformers-compatible version, I will try to contact the person who made it. Hopefully it will be fixed soon.

I'm wondering if convert_hf_to_gguf.py can automatically handle the renaming of params.json, consolidated.safetensors and tokenizer.model.v3? For now, my fear is that someone who uses automated tools like gguf-my-repo will be stuck due to this issue.

(Also cc @Vaibhavs10 since he's the maintainer of gguf-my-repo.)

Collaborator

@Vaibhavs10 Vaibhavs10 left a comment

Hey @compilade / @ngxson - JFYI - the transformers weights are now merged into the main repo: https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1

If you face any issues with the conversion, could you open an issue on the repo for us to track! 🤗

@1ns0mni4c

Any updates on when Codestral Mamba should be supported?

@learning-chip

Nice work! Just a note on the ssm_scan kernel performance: a better fused implementation by the flash-linear-attention project provides functionality equivalent to Mamba-2's original kernel (fla-org/flash-linear-attention#49) and runs 2x faster (fla-org/flash-linear-attention#50)

@molbap

molbap commented Sep 16, 2024

Hi @compilade ! I worked on repo conversion for the transformers-compatible mamba2 version, let us know if you need anything from us to move forward with this PR :)

@HanClinto
Collaborator

I'm wondering if convert_hf_to_gguf.py can automatically handle the renaming of params.json, consolidated.safetensors and tokenizer.model.v3? For now, my fear is that someone who uses automated tools like gguf-my-repo will be stuck due to this issue.

(Also cc @Vaibhavs10 since he's the maintainer of gguf-my-repo.)

It sounds like having a simple fallback of expected filenames would be a reasonable thing to include here? I don't know that we want to maintain a ton of different ones, but adding a second layer of fallbacks for alternate filenames doesn't feel arduous.

@compilade
Collaborator Author

It sounds like having a simple fallback of expected filenames would be a reasonable thing to include here? I don't know that we want to maintain a ton of different ones, but adding a second layer of fallbacks for alternate filenames doesn't feel arduous.

@HanClinto

That's not really a problem anymore (at least for Mamba-Codestral) since the official repo was updated in https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1/commit/88085f9cdfa832c3aca8a0315a4520cf7558c947 to use more standard names.

What is currently blocking this is that the Metal and CUDA kernels for ggml_ssm_scan need to be updated. But before that, I want to refactor the operator to completely avoid copying Mamba-2 states, because the unnecessary copies otherwise use a non-negligible fraction of the memory bandwidth (10% of total text-generation inference time on my laptop), since Mamba-2 states are big.

@hg0428

hg0428 commented Oct 1, 2024

Any updates on this?

@github-actions github-actions bot added the testing label Oct 1, 2024
@ggerganov
Member

Decide if it's okay to make breaking changes to SSM_SCAN to reduce copies

It's OK to make breaking changes to SSM_SCAN. I'm pretty sure these have very small adoption (if any) at this point.

@gabe-l-hart
Contributor

Thanks for all the testing @younesbelkada! I'm still working on reviving my CUDA box and can test there once it's live.

@compilade when you test new model support, do you have a system for comparing activations against the corresponding transformers model? I've done this with granite models using llama-eval-callback, but it tends to be pretty manual, so just curious if you have any recommendations.

@compilade
Collaborator Author

compilade commented Jun 25, 2025

@compilade when you test new model support, do you have a system for comparing activations against the corresponding transformers model?

@gabe-l-hart I don't, unfortunately. I usually rely on catastrophic failure to notice when it's broken. E.g. absurdly high perplexity, inconsistent parallel outputs, etc.

I rely on the assumption that an incorrect implementation will pretty much always give worse results, and so if it gives good results, it's likely implemented correctly (!q -> !p <=> p -> q).

How were you intending to test things?

@younesbelkada I think the following should be sufficient:

$ # only relevant when testing non-CPU backends
$ ./bin/test-backend-ops test -o SSM_SCAN

A small perplexity test can also be useful to check if inference mostly works properly

$ ./bin/llama-perplexity -m /path/to/the/model.gguf -f /path/to/some/dataset/maybe/wiki.test.txt --chunks 16

If the perplexity is relatively small, then it's most likely working correctly.

(If testing a GPU backend (e.g. Metal, CUDA), make sure to try with -ngl 999 and/or with -ngl 0)

My go-to for manual testing is usually llama-parallel, because it goes through more code paths for parallel inference with continuous batching, and because the output should mostly make sense when it works correctly.

$ ./bin/llama-parallel -m /path/to/model.gguf -np 5 -ns 12 --temp 0 --repeat-penalty 1.1 -pps

Eventually though, with #14139, the only correctness testing necessary could be perplexity, because multi-sequence batch output coherency will be tested automatically through test-model-random.

@younesbelkada
Contributor

I just ran the test for SSM_SCAN and it passed on my end (Apple M3):

ggml_metal_init: loaded kernel_pool_2d_max_f32                        0x122854660 | th_max = 1024 | th_width =   32
  Device description: Apple M3 Max
  Device memory: 49152 MB (49146 MB free)

  SSM_SCAN(type=f32,d_state=16,head_dim=1,n_head=1024,n_group=1,n_seq_tokens=32,n_seqs=4): OK
  SSM_SCAN(type=f32,d_state=128,head_dim=64,n_head=16,n_group=2,n_seq_tokens=32,n_seqs=4): OK
  5638/5638 tests passed
  Backend Metal: OK

ggml_metal_free: deallocating
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
ggml_metal_mem_pool_free: freeing memory pool, num heaps = 0 (total = 0)
Backend 2/3: BLAS
  Device description: Accelerate
  Device memory: 0 MB (0 MB free)

  SSM_SCAN(type=f32,d_state=16,head_dim=1,n_head=1024,n_group=1,n_seq_tokens=32,n_seqs=4): not supported [BLAS] 
  SSM_SCAN(type=f32,d_state=128,head_dim=64,n_head=16,n_group=2,n_seq_tokens=32,n_seqs=4): not supported [BLAS] 
  5638/5638 tests passed
  Backend BLAS: OK

Backend 3/3: CPU
  Skipping CPU backend
3/3 backends passed
OK

I will run the perplexity test shortly

@gabe-l-hart
Contributor

Ok, I'm finally back in business with my CUDA box. I've got initial sniff-test results, all looking strong. I'm running off of my GraniteFour branch (which includes all of the changes here).

Model conversion setup

python convert_hf_to_gguf.py /path/to/hf-model
./build/bin/llama-quantize /path/to/hf-model/model-name-F16.gguf Q4_K_M

Model test setup

For each model, I'm testing the grid of CPU/GPU x F16/Q4_K_M

sniff-test.sh
#!/usr/bin/env bash

model_dir=""
prompt="Tell me a story about a developer and their dog"
n_predict="100"

# Parse CLI args
while [[ "$#" -gt 0 ]]; do
  case $1 in
    -m|--model-dir)
      model_dir="$2"
      shift # past argument
      shift # past value
      ;;
    -p|--prompt)
      prompt="$2"
      shift # past argument
      shift # past value
      ;;
    -h|--help)
      echo "Usage: $0 --model-dir <path> [--prompt <string>] [-h | --help]"
      echo ""
      echo "  --model-dir, -m: Parent directory for the model (required)"
      echo "  --prompt, -p: Prompt to test with (optional, default: $prompt)"
      echo "  --n-predict, -n: Number of tokens to predict (optional, default: $n_predict)"
      echo "  -h, --help: Show this help message and exit"
      exit 0
      ;;
    *)
      echo "Unknown parameter passed: $1"
      if [ -z "$model_dir" ]; then
        echo "Error: Missing required argument --model-dir. Use -h for help."
        exit 1
      fi
      ;;
  esac
done

# Check if required arguments were provided
if [ -z "$model_dir" ]; then
  echo "Error: Missing required argument --model-dir. Use -h for help."
  exit 1
fi

echo "Model directory: $model_dir"

if [ -n "$prompt" ]; then
  echo "Prompt: $prompt"
else
  echo "No prompt provided, using an empty string."
fi

f16_gguf=$(find $model_dir -name "*-F16.gguf")
q4_gguf=$(find $model_dir -name "*Q4_K_M.gguf")

echo "F16: $f16_gguf"
echo "Q4_K_M: $a4_gguf"

run_llama_cli() {
  local model=$1
  local use_gpu=$2

  # Set the -ngl flag based on the input bool
  local ngl_value=${use_gpu:='0'}

  # Construct the command with the given arguments
  ./build/bin/llama-cli -m $model --temp 0 -p "$prompt" -n $n_predict -no-cnv -ngl $ngl_value 2>&1 | grep --color=never "llama_perf_"
}

# Run the test grid
echo "------> CPU / F16"
run_llama_cli $f16_gguf 0
echo
echo "------> CPU / Q4_K_M"
run_llama_cli $q4_gguf 0
echo
echo "------> GPU / F16"
run_llama_cli $f16_gguf 999
echo
echo "------> GPU / A4_K_M"
run_llama_cli $q4_gguf 999

(vibe coded with granite3.3 😉)

Sniff test results

./sniff-test.sh -m ~/models/state-spaces/mamba2-2.7b/
Model directory: /home/ghart/models/state-spaces/mamba2-2.7b/
Prompt: Tell me a story about a developer and their dog
F16: /home/ghart/models/state-spaces/mamba2-2.7b/mamba2-2.7B-F16.gguf
Q4_K_M: 
------> CPU / F16
llama_perf_sampler_print:    sampling time =       4.50 ms /   110 runs   (    0.04 ms per token, 24444.44 tokens per second)
llama_perf_context_print:        load time =     424.76 ms
llama_perf_context_print: prompt eval time =     109.02 ms /    10 tokens (   10.90 ms per token,    91.72 tokens per second)
llama_perf_context_print:        eval time =    5892.84 ms /    99 runs   (   59.52 ms per token,    16.80 tokens per second)
llama_perf_context_print:       total time =    6024.16 ms /   109 tokens

------> CPU / Q4_K_M
llama_perf_sampler_print:    sampling time =       5.28 ms /   110 runs   (    0.05 ms per token, 20837.28 tokens per second)
llama_perf_context_print:        load time =     243.90 ms
llama_perf_context_print: prompt eval time =     103.00 ms /    10 tokens (   10.30 ms per token,    97.09 tokens per second)
llama_perf_context_print:        eval time =    3349.81 ms /    99 runs   (   33.84 ms per token,    29.55 tokens per second)
llama_perf_context_print:       total time =    3476.10 ms /   109 tokens

------> GPU / F16
llama_perf_sampler_print:    sampling time =       4.00 ms /   110 runs   (    0.04 ms per token, 27486.26 tokens per second)
llama_perf_context_print:        load time =     975.95 ms
llama_perf_context_print: prompt eval time =      92.11 ms /    10 tokens (    9.21 ms per token,   108.57 tokens per second)
llama_perf_context_print:        eval time =    1324.86 ms /    99 runs   (   13.38 ms per token,    74.73 tokens per second)
llama_perf_context_print:       total time =    1431.39 ms /   109 tokens

------> GPU / A4_K_M
llama_perf_sampler_print:    sampling time =       3.88 ms /   110 runs   (    0.04 ms per token, 28372.45 tokens per second)
llama_perf_context_print:        load time =     351.65 ms
llama_perf_context_print: prompt eval time =      25.90 ms /    10 tokens (    2.59 ms per token,   386.10 tokens per second)
llama_perf_context_print:        eval time =     862.56 ms /    99 runs   (    8.71 ms per token,   114.77 tokens per second)
llama_perf_context_print:       total time =     902.52 ms /   109 tokens
./sniff-test.sh -m ~/models/mistralai/Mamba-Codestral-7B-v0.1/
Model directory: /home/ghart/models/mistralai/Mamba-Codestral-7B-v0.1/
Prompt: Tell me a story about a developer and their dog
F16: /home/ghart/models/mistralai/Mamba-Codestral-7B-v0.1/Mamba-Codestral-7B-v0.1-F16.gguf
Q4_K_M: 
------> CPU / F16
llama_perf_sampler_print:    sampling time =       4.79 ms /   111 runs   (    0.04 ms per token, 23163.61 tokens per second)
llama_perf_context_print:        load time =     823.50 ms
llama_perf_context_print: prompt eval time =     215.69 ms /    11 tokens (   19.61 ms per token,    51.00 tokens per second)
llama_perf_context_print:        eval time =   12671.57 ms /    99 runs   (  128.00 ms per token,     7.81 tokens per second)
llama_perf_context_print:       total time =   12908.29 ms /   110 tokens

------> CPU / Q4_K_M
llama_perf_sampler_print:    sampling time =       5.45 ms /   111 runs   (    0.05 ms per token, 20363.24 tokens per second)
llama_perf_context_print:        load time =     353.57 ms
llama_perf_context_print: prompt eval time =     211.97 ms /    11 tokens (   19.27 ms per token,    51.89 tokens per second)
llama_perf_context_print:        eval time =    6059.11 ms /    99 runs   (   61.20 ms per token,    16.34 tokens per second)
llama_perf_context_print:       total time =    6291.71 ms /   110 tokens

------> GPU / F16
llama_perf_sampler_print:    sampling time =       4.27 ms /   111 runs   (    0.04 ms per token, 26007.50 tokens per second)
llama_perf_context_print:        load time =    2350.44 ms
llama_perf_context_print: prompt eval time =     107.96 ms /    11 tokens (    9.81 ms per token,   101.89 tokens per second)
llama_perf_context_print:        eval time =    2619.14 ms /    99 runs   (   26.46 ms per token,    37.80 tokens per second)
llama_perf_context_print:       total time =    2740.47 ms /   110 tokens

------> GPU / A4_K_M
llama_perf_sampler_print:    sampling time =       4.20 ms /   111 runs   (    0.04 ms per token, 26415.99 tokens per second)
llama_perf_context_print:        load time =     750.08 ms
llama_perf_context_print: prompt eval time =      32.42 ms /    11 tokens (    2.95 ms per token,   339.32 tokens per second)
llama_perf_context_print:        eval time =    1247.74 ms /    99 runs   (   12.60 ms per token,    79.34 tokens per second)
llama_perf_context_print:       total time =    1293.25 ms /   110 tokens
./sniff-test.sh -m ~/models/ibm-granite/granite-4.0-tiny-preview/
Model directory: /home/ghart/models/ibm-granite/granite-4.0-tiny-preview/
Prompt: Tell me a story about a developer and their dog
F16: /home/ghart/models/ibm-granite/granite-4.0-tiny-preview/Granite-4.0-Tiny-Preview-62x915M-F16.gguf
Q4_K_M: 
------> CPU / F16
llama_perf_sampler_print:    sampling time =       4.55 ms /   110 runs   (    0.04 ms per token, 24175.82 tokens per second)
llama_perf_context_print:        load time =     673.76 ms
llama_perf_context_print: prompt eval time =     105.89 ms /    10 tokens (   10.59 ms per token,    94.43 tokens per second)
llama_perf_context_print:        eval time =    3750.80 ms /    99 runs   (   37.89 ms per token,    26.39 tokens per second)
llama_perf_context_print:       total time =    3892.95 ms /   109 tokens

------> CPU / Q4_K_M
llama_perf_sampler_print:    sampling time =       4.34 ms /   110 runs   (    0.04 ms per token, 25345.62 tokens per second)
llama_perf_context_print:        load time =     379.04 ms
llama_perf_context_print: prompt eval time =      70.77 ms /    10 tokens (    7.08 ms per token,   141.30 tokens per second)
llama_perf_context_print:        eval time =    2267.82 ms /    99 runs   (   22.91 ms per token,    43.65 tokens per second)
llama_perf_context_print:       total time =    2374.33 ms /   109 tokens

------> GPU / F16
llama_perf_sampler_print:    sampling time =       3.97 ms /   110 runs   (    0.04 ms per token, 27707.81 tokens per second)
llama_perf_context_print:        load time =    2392.76 ms
llama_perf_context_print: prompt eval time =     151.07 ms /    10 tokens (   15.11 ms per token,    66.19 tokens per second)
llama_perf_context_print:        eval time =    1010.63 ms /    99 runs   (   10.21 ms per token,    97.96 tokens per second)
llama_perf_context_print:       total time =    1190.72 ms /   109 tokens

------> GPU / A4_K_M
llama_perf_sampler_print:    sampling time =       3.97 ms /   110 runs   (    0.04 ms per token, 27686.89 tokens per second)
llama_perf_context_print:        load time =     889.56 ms
llama_perf_context_print: prompt eval time =     210.62 ms /    10 tokens (   21.06 ms per token,    47.48 tokens per second)
llama_perf_context_print:        eval time =     857.46 ms /    99 runs   (    8.66 ms per token,   115.46 tokens per second)
llama_perf_context_print:       total time =    1096.69 ms /   109 tokens

@gabe-l-hart
Copy link
Contributor

I've also verified that responses are coherent and consistent for each model between CPU/GPU and F16/Q4_K_M (within expected quantization precision limits).

The machine I'm running on has the following setup:

  • RHEL 9
  • 2x L40S
  • CUDA 12.9

@gabe-l-hart
Contributor

Running tests with llama-parallel:

$ ./bin/llama-parallel -m /path/to/model.gguf -np 5 -ns 12 --temp 0 --repeat-penalty 1.1 [-pps]
  • mamba2-2.7b/ggml-model-Q4_K_M.gguf:

    • w/ pps:
      • Results are coherent and correlated with the prompt
      • It ran the What is the meaning of life? prompt three times. The first two produced identical results, but the third one did not, indicating some nondeterminism related to parallelism and ordering.
    • w/out pps:
      • Results are coherent and correlated with the prompt
      • The same behavior shows up where different results come up for the same prompt when repeated
      • The results for prompts do differ from running with -pps
  • Mamba-Codestral-7B-v0.1/ggml-model-Q4_K_M.gguf:

    • w/ pps:
      • Results are coherent and correlated with the prompt
      • In my run, all responses for repeated prompts were identical (not necessarily conclusive evidence that it can't happen)
    • w/out pps:
      • Results are coherent and correlated with the prompt
      • Repeated prompts do result in different responses
  • granite-4.0-tiny-preview/ggml-model-Q4_K_M.gguf:

    • w/ pps
      • Results are coherent and correlated with the prompt
      • Repeated prompts do result in different responses
    • w/out pps
      • SEG FAULT! (debugging time...)

@gabe-l-hart
Contributor

I think it's pretty clear that the segfault is something I'm doing wrong with Granite and the hybrid cache, so that shouldn't have any impact on this branch. The only thing from my testing that might be concerning is the inconsistency between results for the same prompt when run using llama-parallel. @compilade would you expect these to always return consistent responses with --temp 0?

@compilade
Collaborator Author

The only thing from my testing that might be concerning is the inconsistency between results for the same prompt when run using llama-parallel. @compilade would you expect these to always return consistent responses with --temp 0?

@gabe-l-hart Yes, I would expect consistent responses; however, when the batch sizes differ (which can happen during continuous batching), the floating-point accumulations in matmuls can differ (because of the non-associativity of float addition), which might cause the differences you're seeing (unless the different outputs look broken).

At least in test-model-random (from #14139), I did notice very small differences (on the order of 1e-14) in the outputs for Mamba, which might apply to Mamba-2 too. I tried this in https://github.com/compilade/llama.cpp/tree/compilade/test-model-random-mamba2, and there are small differences of at most around 1e-12, which I think are caused by floating-point non-associativity. They are negligible for correctness (for comparison, the same test for a Llama2-like model has differences on the order of 1e-7).
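
For illustration (not llama.cpp code), float addition is not associative, so summing the same values in a different order (as happens when matmuls are tiled differently for different batch sizes) typically changes the low-order bits of the result:

import numpy as np

x = np.random.default_rng(0).standard_normal(1 << 16).astype(np.float32)

sequential = np.float32(0.0)
for v in x:                                       # one accumulation order
    sequential += v

chunked = x.reshape(256, 256).sum(axis=1).sum()   # a different accumulation order

print(sequential, chunked, sequential - chunked)  # usually a tiny nonzero difference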

Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This otherwise would cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON
@gabe-l-hart
Contributor

@compilade In digging into the segfault for Granite 4, a member of our team (@AnmolS1) found some thread safety issues in the recurrent cache that were triggering the segfault. He put a std::mutex around apply and prepare and was able to see the issue resolve (draft PR inbound once he completes the checkboxes for IBM OSS contributions). I have two thoughts from what he found:

  1. Is it possible that thread safety issues could be contributing to the inconsistent responses above?
  2. Are there any thoughts you have on a more nuanced approach to thread safety rather than serializing invocation of apply / prepare?

@compilade
Collaborator Author

compilade commented Jun 27, 2025

a member of our team (@AnmolS1) found some thread safety issues in the recurrent cache that were triggering the segfault. He put a std::mutex around apply and prepare and was able to see the issue resolve

@gabe-l-hart This is confusing to me, because I was under the impression that these methods were always called from a single thread, since I think llama_decode should never be called concurrently on the same llama_context. (There isn't any mutex preventing that for now, but I think the examples/tools still don't call it concurrently, except maybe the server...) I'm likely wrong here. @ggerganov, is llama_decode intended to be callable concurrently on the same llama_context from multiple threads?

Under what conditions did the segfault happen? I assume it's with https://github.com/gabe-l-hart/llama.cpp/tree/GraniteFour? With which tool/example and which args?

I assume it's with ./bin/llama-parallel -m /path/to/model.gguf -np 5 -ns 12 --temp 0 --repeat-penalty 1.1 with https://huggingface.co/ibm-granite/granite-4.0-tiny-preview?

I'm currently downloading granite-4.0-tiny-preview to attempt reproducing the issue.

@ggerganov
Member

I don't see how there could be a thread safety issue in the llama-parallel example. The apply and prepare should not require a mutex - they are called by the same thread.

@gabe-l-hart
Contributor

Ok, interesting. It is indeed happening with this command

./bin/llama-parallel -m ~/models/granite-4.0-tiny-preview/ggml-model-Q4_K_M.gguf -np 5 -ns 12 --temp 0 --repeat-penalty 1.1

It only happens when using the hybrid cache, so it's definitely something about the interplay between the two. When I run with a full debug build, I see this error:

Assertion failed: (status == LLAMA_MEMORY_STATUS_SUCCESS), function apply, file llama-memory-recurrent.cpp, line 1074.
Abort trap: 6

It's happening here when the hybrid cache delegates apply to the child recurrent cache. I tried commenting the assertion out yesterday, but it then fell through to an invalid access trying to use a const llama_ubatch & that was null in the call to find_slot. The interesting thing is that it doesn't seem to happen with -pps, so it seems to have something to do with parallel prefill. I'll dig some more too.

@gabe-l-hart
Contributor

I forgot to mention, the status when it hits that assertion is LLAMA_MEMORY_STATUS_NO_UPDATE, so it seems that somehow the recurrent and attention sub-contexts are getting out-of-sync.

@gabe-l-hart
Contributor

(sorry if this is mixing PR contexts. We can take this discussion to the Granite PR too if we're pretty confident it isn't causing issues for non-hybrid models)

@gabe-l-hart
Contributor

@compilade @ggerganov Now that we've tracked down the source of the hybrid cache seg fault, is there anything holding up this branch?

Member

@ggerganov ggerganov left a comment

Seems good to merge. Up to @compilade for the final word in case something else is needed before merging.

@compilade
Collaborator Author

I've tested most things I wasn't completely sure about (CUDA, SVE), and inference on those platforms does seem to work properly for both Mamba-1 and Mamba-2 models, with -ngl 0 and -ngl 99 (and it looks like 0b6f6be also fixes RWKV inference when compiled with SVE on a c7g AWS instance).

Weird small models like https://huggingface.co/delphi-suite/v0-mamba-100k seem to work even when compiled with -DGGML_CUDA=ON since 71bef66 (it failed with an assert previously, but ran correctly in a CPU-only build).

I will merge this without further changes today (Wednesday, July 2nd) around 17:00 UTC (in approx. 8 hours), unless there's an objection or if I find other problems before then.

@compilade compilade merged commit 5d46bab into master Jul 2, 2025
53 checks passed
@gabe-l-hart
Contributor

Huge thank you @compilade! It's wonderful to have this fully merged

gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jul 2, 2025
* origin/master:
llama : initial Mamba-2 support (ggml-org#9126)
sync : ggml
ggml : add version function to get lib version (ggml/1286)
Set RPATH to "@loader_path" / "$ORIGIN" to ensure executables and dynamic libraries search for dependencies in their origin directory. (ggml-org#14309)
CUDA: add softmax broadcast (ggml-org#14475)
CUDA: broadcasting for FlashAttention mask (ggml-org#14500)
vulkan: support softmax/FA batch and broadcast (ggml-org#14449)
ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (ggml-org#14435)
opencl : fix possible buffer overflow in dump_tensor (ggml-org#14490)
simple-chat : fix context-exceeded condition (ggml-org#14494)
opencl : skip empty nodes on cgraph compute (ggml-org#14491)
opencl : update upscale to support align corners (ggml-org#14488)
ci : add OpenCL to labeler workflow (ggml-org#14496)
github : add OpenCL backend to issue templates (ggml-org#14492)
ggml : Callback before abort (ggml-org#14481)
ci : disable fast-math for Metal GHA CI (ggml-org#14478)
Minh141120 pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 5, 2025
* llama : initial Mamba-2 support

* ggml : SIMD ggml_ssm_scan for Mamba-2

* ggml : improve ggml_mul speed when masking recurrent states

* llama : support running Mamba-Codestral-7B-v0.1

* llama : fix Mamba-2 conv state saving

* ggml : make the ggml_mul fast broadcast path more consistently formatted

* llama : remove unused variable

* llama : add missing break

* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.

* llama : avoid redundant state copy for Mamba 1 and 2

* metal : attempt to adapt SSM_SCAN for Mamba-2

* metal : fix SSM_SCAN pipeline scope

* metal : use log and exp instead of log1pf and expf in SSM_SCAN

* metal : remove unused arguments for SSM_SCAN

The max index is 31, so trimming the arguments is necessary.

* metal : add back n_seqs to SSM_SCAN args

Whoops, this is needed for the offset in the concatenated output.

* metal : fix SSM_SCAN state head offset

* metal : fix wrong number of tokens per sequence in SSM_SCAN

* ggml : remove unused fast broadcast path in GGML_MUL

This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.

* ggml : avoid multiply by D in GGML_OP_SSM_SCAN

This makes the weight buft detection in src/llama.cpp simpler.

* convert : transpose Mamba-2 A, D and reshape SSM_NORM

This breaks existing conversions of Mamba-2 models
to avoid some reshapes.

Not sure if it's a good idea,
but it makes the graph slightly cleaner.

* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks

* convert : fix flake8 lint

* metal : fix confusion between ; and ,

* metal : add missing args for nb references in ssm_scan_f32_group

* metal : single-user mamba2 inference works

* kv-cache : remove const_cast when setting inputs for s_copy

And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.

* convert : avoid AutoConfig for Mamba and Mamba2 hparams

* kv-cache : allow context shift for recurrent models

* graph : fix recurrent state copies when avoiding copies

Works, but using lambda functions might not be that clean.

* ggml : fix mamba2 ssm scan when compiled with SVE

* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches

* cuda : implement ssm scan for Mamba2

There is still room for improvement, but it works!

* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2

* mamba : fix mismatched new and delete size for llm_build_mamba

Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This otherwise would cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON

* cuda : graceful fallback for Mamba-1 models with weird embd size
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 6, 2025
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 6, 2025
Successfully merging this pull request may close these issues:

Feature Request: Support Codestral Mamba · llama : support Mamba-2