Model: Granite MoE shared #13269


Merged (11 commits into ggml-org:master from the GraniteMoEShared branch, May 13, 2025)

Conversation

gabe-l-hart (Contributor)

Description

This PR adds support for the GraniteMoEShared architecture, matching the implementation in transformers. The model is an iteration on top of GraniteMoE and adds a shared expert to each MoE layer.

NOTE: There is not a public model with this architecture for testing yet, but it is a key building block for the just-released Granite 4 architecture.
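
For orientation, here is a minimal illustrative sketch (plain C++, not the actual ggml graph code) of what the shared expert adds to each MoE layer, assuming a SwiGLU-style gate/up/down projection matching the *_shexp tensor names used in the implementation:

#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<float>;
using Mat = std::vector<Vec>;   // row-major: Mat[row][col]

// Dense matrix-vector product: y = W x.
static Vec matvec(const Mat & w, const Vec & x) {
    Vec y(w.size(), 0.0f);
    for (std::size_t r = 0; r < w.size(); ++r) {
        for (std::size_t c = 0; c < x.size(); ++c) {
            y[r] += w[r][c] * x[c];
        }
    }
    return y;
}

static float silu(float v) { return v / (1.0f + std::exp(-v)); }

// The shared expert is a SwiGLU FFN applied to the same layer input x that
// the router sees; its output is summed with the routed-experts output.
Vec moe_with_shared_expert(const Vec & x, const Vec & routed_out,
                           const Mat & gate_shexp, const Mat & up_shexp,
                           const Mat & down_shexp) {
    Vec g = matvec(gate_shexp, x);   // size n_ff_shexp
    Vec u = matvec(up_shexp,  x);    // size n_ff_shexp
    for (std::size_t i = 0; i < g.size(); ++i) {
        g[i] = silu(g[i]) * u[i];
    }
    Vec out = matvec(down_shexp, g); // back to n_embd
    for (std::size_t i = 0; i < out.size(); ++i) {
        out[i] += routed_out[i];     // combine with the standard GraniteMoE experts
    }
    return out;
}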

github-actions bot added the python (python script changes) label on May 2, 2025

// For Granite MoE Shared
if (arch == LLM_ARCH_GRANITE_MOE_SHARED) {
layer.ffn_gate_shexp = create_tensor(tn(LLM_TENSOR_FFN_GATE_SHEXP, "weight", i), {n_embd, hparams.n_ff_shexp}, 0);
Collaborator

I think you can simply check hparams.n_ff_shexp > 0 and then load these tensors. The value of hparams.n_ff_shexp will be 0 if it's not written in the GGUF.

This way, you can remove two-thirds of the code in this PR, making it much simpler.
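
For illustration, a rough sketch of the suggested guard using stand-in types (create_shexp_tensor here is a placeholder, not the llama.cpp create_tensor/tn helpers):

struct hparams_t {
    unsigned int n_ff_shexp = 0;   // stays 0 when the key is absent from the GGUF
};

// Placeholder for the real tensor-loading call; illustration only.
void * create_shexp_tensor(const char * name) { (void) name; return nullptr; }

void load_shared_expert_tensors(const hparams_t & hparams) {
    // No architecture check needed: plain GraniteMoE models simply have
    // n_ff_shexp == 0, so this branch is skipped for them.
    if (hparams.n_ff_shexp > 0) {
        void * ffn_gate_shexp = create_shexp_tensor("ffn_gate_shexp.weight");
        void * ffn_up_shexp   = create_shexp_tensor("ffn_up_shexp.weight");
        void * ffn_down_shexp = create_shexp_tensor("ffn_down_shexp.weight");
        (void) ffn_gate_shexp; (void) ffn_up_shexp; (void) ffn_down_shexp;
    }
}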

gabe-l-hart (Contributor, Author)

Ah, yes, this would be much simpler! All of the Granite architectures are closely related, each a slight tweak on another architecture. In transformers, the approach has always been to keep the architectures isolated and use their modular approach for shared code. I know that policy is less strict here, but I wasn't sure how much to lean into collapsing all of these architectures into a single architecture string versus keeping them separate and sharing code underneath. Is there a strong preference here going forward?

Collaborator

IMO, in this particular case the shared expert is a common thing, and we are quite sure that whatever we're adding can be reused later by another model.

But if you want even more isolated code, it's better to duplicate the whole builder function instead of adding a new if branch. However, I don't quite like this approach (reason above).

CC @ggerganov too, WDYT?

gabe-l-hart (Contributor, Author), May 2, 2025

Makes sense. My personal preference is to keep a 1:1 mapping with the architecture names in Hugging Face and then have maximal code reuse on the backend. That is a very weakly held preference, though, so I'm happy to take whatever direction is best from your collective maintainers' POV.

Collaborator

@gabe-l-hart Not sure if you have a near deadline for merging this PR (?)

We never aimed for a 1:1 mapping between HF model_type and llama.cpp arch names from the beginning, so it's probably fine to consider this new arch as a variant of GRANITE_MOE with n_ff_shexp > 0. I'm not sure in which cases the 1:1 mapping would be useful.

Anyway, on second thought, I have no strong opinion on whether this needs to be a dedicated arch name or not; you can probably keep it this way if you like. But maybe we should move the cgraph of the granite family to a dedicated build_* function. My main concern is that we're adding things on top of build_llama, which makes it harder to read.

gabe-l-hart (Contributor, Author)

That makes total sense. I juggle a lot of different frameworks, so the opinion about keeping the architecture names aligned is mostly just for mental clarity on my part (and anyone else comparing across formats/engines). I like the idea of a dedicated build function so we don't keep clogging up the llama build function.

Now that the full Granite 4 architecture is public, I can state that this shared MoE architecture is really just a building block for the full Granite 4 and we don't plan to release models with this as a standalone architecture. Given that, I'd be totally happy not merging this at all and bringing in the shared expert as part of the granitemoehybrid architecture.

EXAONE = auto()
GRANITE = auto()
GRANITE_MOE = auto()
GRANITE_MOE_SHARED = auto()
ngxson (Collaborator), May 2, 2025

I don't think a new arch is needed here. Some other archs also have experts / shared experts, and they are controlled via n_ff_shexp.

Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
The hparam and architecture plumbing should be correct, but the
implementation of the shared experts seems to still be broken.

Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
I had misread that the shared experts take the inputs _before_ the standard
MoE layer and was feeding the output of the MoE to the shared experts.

Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
This is a cleaner way that will allow more flexibility in architecture
strings going forward.

Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
This helps de-clutter the llama-family graph construction and allows
granite to diverge further (in preparation for Granite 4).

NOTE: I removed the granite scale factors from llm_build_deci because they
appear to only be there as copy-paste from llm_build_llama. The HF config
does not seem to set those values:
https://huggingface.co/Deci/DeciLM-7B/blob/main/config.json

Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
gabe-l-hart (Contributor, Author)

@ngxson @ggerganov I've rebased this on master and made the following changes based on your suggestions:

  1. Used checks based on hparams.n_ff_shexp rather than the architecture string (this is definitely cleaner and more extensible)
  2. Moved all granite model construction to llm_build_granite and removed the granite-specific conditionals from llm_build_llama (a rough sketch of the scale factors behind those conditionals follows below)
    • NOTE: I also removed granite-specific conditionals from llm_build_deci, which seem to have been there as copy-paste from llm_build_llama. I checked the HF model config and it doesn't appear that the Deci models use these scale factors.
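
For reference, a rough stand-in sketch of the Granite-specific scaling behind those conditionals; this is illustrative C++ rather than the llama.cpp implementation, and the multiplier names mirror the HF Granite config fields (residual_multiplier, embedding_multiplier), with plain Llama effectively using 1.0 everywhere:

#include <cstddef>

// Granite-specific multipliers read from the model hparams; for plain
// Llama (and, per the note above, the Deci models) these are effectively 1.0.
struct granite_scales {
    float residual_multiplier  = 1.0f;  // scales each attn/FFN branch before the residual add
    float embedding_multiplier = 1.0f;  // scales the token embeddings
};

// The residual add that a dedicated llm_build_granite can apply directly,
// instead of branching on the architecture inside build_llama.
void add_scaled_residual(float * residual, const float * branch_out,
                         std::size_t n, const granite_scales & s) {
    for (std::size_t i = 0; i < n; ++i) {
        residual[i] += s.residual_multiplier * branch_out[i];
    }
}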

This should not have been reachable, but it warns on some compilers

Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
ggerganov (Member)

I agree with @ngxson that adding a separate arch for MoE-shared is a bit redundant, but it's OK either way. The separate build function for the Granite models is good refactoring.

I think we are ready to merge. @gabe-l-hart Are the models out?


gabe-l-hart (Contributor, Author)

@ggerganov @ngxson Now that I think more about it, I agree that we should not use a separate architecture name for this. We're not currently planning to release models with this architecture by itself, but we will be using it for the attention layers in the Granite 4.0 models which are a hybrid of mamba2 and this architecture (Granite MoE w/ shared expert).

I'll consolidate the changes to remove the extra enum later today.

Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteMoEShared

Signed-off-by: Gabe Goodhart <[email protected]>
gabe-l-hart (Contributor, Author)

@ggerganov @ngxson I've now removed GRANITE_MOE_SHARED as a standalone architecture and consolidated into GRANITE_MOE. I've verified that conversion and inference work as expected with both a GraniteMoeForCausalLM and GraniteMoeSharedForCausalLM model.

CISC merged commit d590cd4 into ggml-org:master on May 13, 2025
47 checks passed
gabe-l-hart deleted the GraniteMoEShared branch on May 13, 2025 at 14:13
Comment on lines +12260 to +12262
if (!inp_pos) {
inp_pos = build_inp_pos();
}
Member

@gabe-l-hart This should be moved to the beginning of the build function, before the loop over the layers, to avoid creating the tensor for each layer.

gabe-l-hart (Contributor, Author)

Thanks for pointing this out. My thought with putting it there was that it would be lazily initialized on the first layer if use_rope is true and then not re-initialized on subsequent loop iterations because of the if check. Is there something I'm missing about how this tensor is used below that would cause the pointer to be nullptr on later loop iterations?

Regardless, there's no harm in putting it back at the top with the use_rope check, so I can get a small PR in place to do that or just lump the change in with my Granite 4 branch.

ggerganov (Member), May 14, 2025

I somehow got confused: it won't create the tensor for each layer. Still, it's better to move it before the loop for consistency with the other build functions.

gabe-l-hart (Contributor, Author)

Makes sense! Quick PR to fix it: #13538
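
For reference, a stand-in sketch of the resulting pattern applied in #13538 (placeholder types; build_inp_pos and build_graph here are stubs, not the llama.cpp API):

struct tensor_t { /* placeholder for ggml_tensor */ };

// Stub standing in for the real position-input builder; illustration only.
tensor_t * build_inp_pos() { static tensor_t t; return &t; }

void build_graph(int n_layer, bool use_rope) {
    // Created once up front when RoPE is used, for consistency with the
    // other build_* functions; left null otherwise.
    tensor_t * inp_pos = use_rope ? build_inp_pos() : nullptr;

    for (int il = 0; il < n_layer; ++il) {
        // ... per-layer graph construction consumes inp_pos when use_rope is true ...
        (void) inp_pos;
        (void) il;
    }
}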
