



@riunyfir commented Nov 2, 2025

Description:

This PR fixes a bug where `init_chat_model` fails when used with Hugging Face models. The issue was that `ChatHuggingFace` requires an `llm` object (such as `HuggingFaceEndpoint`), but the code was attempting to pass `model_id` directly.

The fix:

  • Creates a `HuggingFaceEndpoint` instance using the model name as `repo_id`
  • Properly separates `ChatHuggingFace`-specific parameters (`tokenizer`, `system_message`, `custom_get_token_ids`, `verbose`, `metadata`, `tags`) from `HuggingFaceEndpoint` parameters (`temperature`, `max_new_tokens`, etc.)
  • Passes the `HuggingFaceEndpoint` instance to `ChatHuggingFace` as the required `llm` parameter

This change ensures that `init_chat_model("huggingface:microsoft/Phi-3-mini-4k-instruct", temperature=0.7)` works correctly by creating the underlying LLM wrapper that `ChatHuggingFace` expects.
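The parameter-separation step described above can be sketched as plain kwarg partitioning. This is a hedged illustration, not the PR's actual code: the real fix constructs `HuggingFaceEndpoint` and `ChatHuggingFace` from `langchain_huggingface`, while the helper name `split_hf_kwargs` and the exact parameter set below are assumptions drawn from the bullet list.

```python
# Sketch of the kwarg split between ChatHuggingFace and HuggingFaceEndpoint.
# Parameter names are taken from the PR description; the helper itself is
# illustrative, not the merged implementation.

# Parameters the PR description routes to ChatHuggingFace.
CHAT_PARAMS = {
    "tokenizer", "system_message", "custom_get_token_ids",
    "verbose", "metadata", "tags",
}

def split_hf_kwargs(kwargs):
    """Split kwargs into (chat_kwargs, endpoint_kwargs).

    Keys in CHAT_PARAMS go to ChatHuggingFace; everything else
    (temperature, max_new_tokens, ...) goes to HuggingFaceEndpoint.
    """
    chat_kwargs = {k: v for k, v in kwargs.items() if k in CHAT_PARAMS}
    endpoint_kwargs = {k: v for k, v in kwargs.items() if k not in CHAT_PARAMS}
    return chat_kwargs, endpoint_kwargs
```

With this split, the fix would roughly amount to `ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id=model, **endpoint_kwargs), **chat_kwargs)`.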

Issue:

N/A (bug fix without associated issue)

Dependencies:

No new dependencies. This fix uses existing `langchain_huggingface` package imports.

github-actions bot added the `langchain` label Nov 2, 2025
@ccurme (Collaborator) left a comment


Thanks for this. Is it possible to do this without dropping support for `HuggingFacePipeline` models?

