
mark llama4 as not supported with fa2 #37416


Merged
1 commit merged into huggingface:main on Apr 10, 2025

Conversation

winglian (Contributor)

What does this PR do?

While FA2 does "work" with Llama-4, according to @ArthurZucker it still needs kernel support. It's probably best to mark it as not supported; otherwise folks might burn cycles wondering why their runs aren't truly converging when training (or running inference).
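
For context, changes like this are typically expressed through the model's attention-support class attributes. A minimal sketch, assuming the usual PreTrainedModel conventions (not the verbatim diff):

from transformers import PreTrainedModel

class Llama4PreTrainedModel(PreTrainedModel):
    # Backend support flags checked by from_pretrained(..., attn_implementation=...):
    # with the FA2 flag off, requesting "flash_attention_2" raises instead of silently
    # loading a backend that lacks the needed kernel support.
    _supports_flash_attn_2 = False
    _supports_sdpa = True
    _supports_flex_attn = True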

For example, using Scout with the prefix "Roses are red,", the implementations complete it as follows (a reproduction sketch follows the list):

  • FA2: ['<|begin_of_text|>Roses are red, of the1 in']
  • flex: ['<|begin_of_text|>Roses are red, violets are blue']
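
A minimal reproduction sketch along those lines (illustrative only; the model id and prompt are from above, and it assumes enough GPU memory for Scout and transformers' standard attn_implementation strings):

import torch
from transformers import AutoTokenizer, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)

for impl in ("flex_attention", "flash_attention_2"):
    model = Llama4ForConditionalGeneration.from_pretrained(
        model_id,
        attn_implementation=impl,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    inputs = tok("Roses are red,", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    print(impl, tok.batch_decode(out))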

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

github-actions bot marked this pull request as draft April 10, 2025 09:43

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the Ready for review button (at the bottom of the PR page). This will assign reviewers and trigger CI.

winglian marked this pull request as ready for review April 10, 2025 09:44
@ArthurZucker added the "for patch" label (Tag issues / labels that should be included in the next patch) Apr 10, 2025
ArthurZucker (Collaborator) left a comment


Thanks!

ArthurZucker merged commit 0ddad2d into huggingface:main Apr 10, 2025
11 of 20 checks passed
acphile commented Apr 16, 2025

Hi, team. In my case, however, I did not encounter the generation issue when using flash_attention_2. I load the model via:

import torch
from transformers import Llama4Config, Llama4ForConditionalGeneration

# Request FA2 on the text backbone through its sub-config rather than the
# top-level attn_implementation argument.
config = Llama4Config.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")
config.text_config._attn_implementation = "flash_attention_2"
model = Llama4ForConditionalGeneration.from_pretrained(
    "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    config=config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

It does correctly generate "violets are blue" for "Roses are red,".

But I do find that the vision attention part does not support flash_attention_2.
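
A workaround one might try, mirroring the text_config line above, is pinning the vision tower to a different backend. This is an untested sketch that assumes the vision sub-config exposes the same private attribute:

config = Llama4Config.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")
config.text_config._attn_implementation = "flash_attention_2"  # FA2 on the text backbone
config.vision_config._attn_implementation = "sdpa"             # keep the vision tower on SDPA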

So I wonder what is the root cause of the wrong generation outputs mentioned here? @ArthurZucker @winglian

cyr0930 pushed a commit to cyr0930/transformers that referenced this pull request Apr 18, 2025