Model: Granite MoE shared #13269
Conversation
// For Granite MoE Shared
if (arch == LLM_ARCH_GRANITE_MOE_SHARED) {
    layer.ffn_gate_shexp = create_tensor(tn(LLM_TENSOR_FFN_GATE_SHEXP, "weight", i), {n_embd, hparams.n_ff_shexp}, 0);
I think you can simply check hparams.n_ff_shexp > 0 and then load these tensors. The value of hparams.n_ff_shexp will be 0 if it's not written in the GGUF. This way, you can remove about two thirds of the code in this PR, making it much simpler.
Ah, yes, this would be much simpler! All of the Granite architectures are highly related and are slight tweaks on other architectures. In transformers, the approach has always been to keep the architectures isolated and use their modular approach for shared code. I know that policy is less strict here, but I wasn't sure how much to lean into collapsing all of these architectures into a single architecture string versus keeping them separate and sharing code underneath. Is there a strong preference here going forward?
IMO in this particular case, the shared expert is a common thing and we are quite sure that whatever we're adding can be reused later by another model.
But if you want even more isolated code, it's better to duplicate the whole builder function instead of adding a new if branch. However, I don't quite like this approach (reason above).
CC @ggerganov too, WDYT?
Makes sense. My personal preference is to keep a 1:1 mapping with the architecture names in huggingface, and then have maximal code reuse on the backend. That is a very weakly held preference, though, so I'm happy to take whatever direction is best from your collective maintainers' POV.
@gabe-l-hart Not sure if you have a near-term deadline to merge this PR (?)
We never aimed for a 1:1 mapping between the HF model_type and the llama.cpp arch name from the beginning, so it's probably fine to consider this new arch a variant of GRANITE_MOE with n_ff_shexp > 0. I'm not sure in which case a 1:1 mapping would be useful?
Anyway, on second thought, I have no strong opinion on whether this needs a dedicated arch name or not; you can probably keep it this way if you like. But maybe we should move the cgraph of the granite family to a dedicated build_* function. My main concern is that we're adding things on top of build_llama, which makes it not very easy to read.
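To sketch the suggestion (hypothetical code, not the actual llama.cpp dispatch): the Granite family would get its own entry point in the arch switch, so that build_llama stays free of Granite-specific conditionals:

// Hypothetical dispatch: route Granite-family models to a dedicated builder.
switch (model.arch) {
    case LLM_ARCH_LLAMA:
        llm.build_llama();
        break;
    case LLM_ARCH_GRANITE:
    case LLM_ARCH_GRANITE_MOE:
        // scale factors, shared experts, etc. live here instead of in build_llama
        llm.build_granite();
        break;
    // ...
}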
That makes total sense. I juggle a lot of different frameworks, so the opinion about keeping the architecture names aligned is mostly just for mental clarity on my part (and for anyone else comparing across formats/engines). I like the idea of a dedicated build function so we don't keep clogging up build_llama.
Now that the full Granite 4 architecture is public, I can state that this shared-MoE architecture is really just a building block for the full Granite 4, and we don't plan to release models with it as a standalone architecture. Given that, I'd be totally happy not merging this at all and bringing in the shared expert as part of the granitemoehybrid architecture.
gguf-py/gguf/constants.py (Outdated)
EXAONE = auto()
GRANITE = auto()
GRANITE_MOE = auto()
GRANITE_MOE_SHARED = auto()
I don't think a new arch is needed here. Some other archs also have experts/shared experts, and they are controlled via the n_ff_shexp hparam.
The hparam and architecture plumbing should be correct, but the implementation of the shared experts seems to still be broken. Branch: GraniteMoEShared Signed-off-by: Gabe Goodhart <[email protected]>
I had misread that the shared experts take the inputs _before_ the standard MoE layer and was feeding the output of the MoE to the shared experts. Branch: GraniteMoEShared Signed-off-by: Gabe Goodhart <[email protected]>
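In other words, the corrected dataflow feeds the same pre-FFN hidden state to both the routed experts and the shared expert and sums the two outputs, rather than chaining them. A rough sketch of that pattern, modeled on how other shared-expert models are built in llama.cpp (the exact helper signatures are assumptions):

// Both paths consume the same input cur; the shared expert is not
// chained after the routed MoE output.
ggml_tensor * moe_out = build_moe_ffn(cur, /* routed-expert args ... */);
ggml_tensor * shexp_out = build_ffn(cur,
        model.layers[il].ffn_up_shexp,   NULL, NULL,
        model.layers[il].ffn_gate_shexp, NULL, NULL,
        model.layers[il].ffn_down_shexp, NULL, NULL,
        NULL, LLM_FFN_SILU, LLM_FFN_PAR, il);
// The layer output is the sum of the two branches.
cur = ggml_add(ctx0, moe_out, shexp_out);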
This is a cleaner way that will allow more flexibility in architecture strings going forward. Branch: GraniteMoEShared Signed-off-by: Gabe Goodhart <[email protected]>
Force-pushed from 97de56d to 52d2ed6
This helps de-clutter the llama-family graph construction and allows granite to diverge further (in preparation for Granite 4). NOTE: I removed the granite scale factors from llm_build_deci because they appear to only be there as copy-paste from llm_build_llama. The HF config does not seem to set those values: https://huggingface.co/Deci/DeciLM-7B/blob/main/config.json Branch: GraniteMoEShared Signed-off-by: Gabe Goodhart <[email protected]>
@ngxson @ggerganov I've rebased this on
This should not have been reachable, but it warns on some compilers. Branch: GraniteMoEShared Signed-off-by: Gabe Goodhart <[email protected]>
Force-pushed from 0753a7f to 3d79214
I agree with @ngxson that adding a separate arch for MoE-shared is a bit redundant, but it's OK either way. The separate build function for Granite models is good refactoring. I think we are ready to merge. @gabe-l-hart Are the models out?
@ggerganov @ngxson Now that I think more about it, I agree that we should not use a separate architecture name for this. We're not currently planning to release models with this architecture by itself, but we will be using it for the attention layers in the Granite 4.0 models, which are a hybrid architecture. I'll consolidate the changes to remove the extra enum later today.
@ggerganov @ngxson I've now removed the extra enum.
if (!inp_pos) {
    inp_pos = build_inp_pos();
}
@gabe-l-hart This should be moved to the beginning of the function, before the loop over the layers, to avoid creating the tensor for each layer.
Thanks for pointing this out. My thought with putting it there was that it would be lazily initialized on the first layer if use_rope is true, and then not re-initialized on subsequent loop iterations because of the if check. Is there something I'm missing about how this tensor is used below that would cause the pointer to be nullptr on later loop iterations?
Regardless, there's no harm in putting it back at the top with a use_rope check, so I can get a small PR in place to do that, or just lump the change in with my Granite 4 branch.
I somehow got confused - it won't create the tensor for each layer. Still, it's better to move it before the loop for consistency with the other build functions.
Makes sense! Quick PR to fix it: #13538
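For reference, the fix amounts to hoisting the tensor creation above the per-layer loop, mirroring the other build functions (a hypothetical sketch):

// Build inp_pos once, before iterating over the layers.
ggml_tensor * inp_pos = use_rope ? build_inp_pos() : nullptr;

for (int il = 0; il < n_layer; ++il) {
    // ... per-layer graph construction uses inp_pos when use_rope is true
}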
Description
This PR adds support for the GraniteMoEShared architecture, matching the implementation in transformers. The model is an iteration on top of GraniteMoE and adds a shared expert to each MoE layer.
NOTE: There is not a public model with this architecture for testing yet, but it is a key building block for the just-released Granite 4 architecture.