
Conversation

stellasia
Contributor

@stellasia stellasia commented Aug 26, 2025

Description

Restore LangChain chat models compatibility in the LLMInterface:

  • No breaking change.
  • The input parameter of all invoke methods now accepts Union[str, list[LLMMessage]], where LLMMessage is a TypedDict with role/content keys.
  • These inputs, together with system_instructions and message_history, are translated into a single list[LLMMessage] that is passed down to the corresponding _invoke method.
  • Subclasses now only need to implement the private _invoke methods.
    • Advantage: no need to repeat the parameter conversion or the rate limit decorators.
  • Updated unit tests:
    • Many tests from each implementation are removed because the logic has been moved elsewhere (either in utils or the base class)
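The normalization flow described above can be sketched roughly as follows. This is a minimal illustration, not the library's actual code: the class and method names (BaseLLM, EchoLLM) and the exact signature are assumptions based on the description; only the role/content TypedDict shape and the invoke/_invoke split come from the PR text.

```python
from typing import TypedDict, Union


class LLMMessage(TypedDict):
    """Assumed shape of LLMMessage: a TypedDict with role/content keys."""
    role: str
    content: str


class BaseLLM:
    """Sketch of the base class handling parameter conversion once."""

    def invoke(
        self,
        input: Union[str, list[LLMMessage]],
        message_history: Union[list[LLMMessage], None] = None,
        system_instruction: Union[str, None] = None,
    ) -> str:
        # Translate every input variant into a single list[LLMMessage]
        messages: list[LLMMessage] = []
        if system_instruction is not None:
            messages.append({"role": "system", "content": system_instruction})
        if message_history:
            messages.extend(message_history)
        if isinstance(input, str):
            messages.append({"role": "user", "content": input})
        else:
            messages.extend(input)
        # Subclasses only see the normalized message list
        return self._invoke(messages)

    def _invoke(self, messages: list[LLMMessage]) -> str:
        raise NotImplementedError


class EchoLLM(BaseLLM):
    """Toy subclass: no conversion logic needed, just _invoke."""

    def _invoke(self, messages: list[LLMMessage]) -> str:
        return messages[-1]["content"]
```

With this split, each provider implementation stays small, and the conversion (and any rate-limit decoration) lives in one place in the base class.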

Type of Change

  • New feature
  • Bug fix
  • Breaking change
  • Documentation update
  • Project configuration change

Complexity

Complexity: Low (many changed lines, but ultimately just refactoring)

How Has This Been Tested?

  • Unit tests
  • E2E tests
  • Manual tests

Checklist

The following requirements should have been met (depending on the changes in the branch):

  • Documentation has been updated
  • Unit tests have been updated
  • E2E tests have been updated
  • Examples have been updated
  • New files have copyright header
  • CLA (https://neo4j.com/developer/cla/) has been signed
  • CHANGELOG.md updated if appropriate

@stellasia stellasia force-pushed the feature/improved-llm-interface branch 2 times, most recently from 3475e0b to 2c4f4e5 Compare August 28, 2025 16:51
res: LLMResponse = llm.invoke("text")
print(res.content)

# If rate_limit_handler and async_rate_limit_handler decorators are used and you want to use a custom rate limit handler
# Type variables for function signatures used in rate limit handlers

The comments above on how to customise a rate limit handler are still valid, no?

@stellasia stellasia force-pushed the feature/improved-llm-interface branch from c229aa3 to d9c0f21 Compare September 18, 2025 11:24