
Conversation


@TKaluza commented Nov 8, 2025

This pull request adds support for handling predicted outputs from Mistral models and integrates them following the approach used in #2098. The change ensures predicted outputs are parsed and validated using a Pydantic model. Closes #3336.

What I changed

  • Implement parsing and integration of Mistral predicted outputs into the codebase (following the pattern from Add support for predicted outputs in OpenAIModelSettings #2098).
  • Always use the Pydantic model variant from the Mistral SDK for runtime validation (see mistral-client:predicted).
  • Add a test that uses a cassette recording for a minimal example demonstrating the end-to-end behavior.
  • Add a unit test for the static method that converts predicted outputs into the Mistral representation (see the sketch below).
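
For context, a minimal sketch of what such a conversion could look like; the standalone function name `_map_prediction` and the import path are assumptions, not the PR's actual code:

```python
from mistralai.models import Prediction as MistralPrediction  # import path assumed


def _map_prediction(
    prediction: str | MistralPrediction | dict,
) -> MistralPrediction:
    """Normalize a user-supplied prediction into the Mistral SDK's Pydantic model."""
    if isinstance(prediction, MistralPrediction):
        return prediction
    if isinstance(prediction, str):
        # Plain text is wrapped in the SDK's prediction model.
        return MistralPrediction(content=prediction)
    # TypedDict / plain dict values are validated through Pydantic.
    return MistralPrediction.model_validate(prediction)
```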

Hope that works for you!

@TKaluza TKaluza marked this pull request as draft November 9, 2025 13:02
@TKaluza TKaluza marked this pull request as ready for review November 9, 2025 13:44
"""Settings used for a Mistral model request."""

# ALL FIELDS MUST BE `mistral_` PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.
mistral_prediction: str | MistralPrediction | MistralPredictionTypedDict | None
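
For illustration, a hedged usage sketch (the agent wiring and model name are assumptions, not part of this diff):

```python
from pydantic_ai import Agent
from pydantic_ai.models.mistral import MistralModelSettings

# Supplying the expected output lets the model fast-path tokens that match it.
settings = MistralModelSettings(mistral_prediction='def add(a, b):\n    return a + b')
agent = Agent('mistral:mistral-large-latest', model_settings=settings)
result = agent.run_sync('Add type hints to this function.')
```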
Collaborator commented:

Could we support only `str`? It looks like the types don't have any additional fields, and `None` is unnecessary as the key can just be omitted from the dict.

Also if you're up for updating the OpenAI equivalent to support str as well, that'd be great :)
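
If only `str` is supported, the mapping collapses to a single constructor call. A minimal sketch, assuming the SDK's `Prediction` model defaults its `type` field to `'content'`:

```python
from mistralai.models import Prediction as MistralPrediction  # import path assumed

def _map_prediction(prediction: str) -> MistralPrediction:
    # No isinstance branching or error handling needed once only str is accepted.
    return MistralPrediction(content=prediction)
```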

TKaluza (author) replied:

This is fine with me. It would be the easiest-to-use variant for the user and is "convertible" to the other models' representations. I will update this, adjust the OpenAI equivalent to support `str` as well, and update the documentation.
Any information about the software architecture would be greatly appreciated; if I have overlooked some documentation, please point me to it. I wasn't sure whether the preferred data type for the Mistral SDK should be the Pydantic model or the TypedDict/dict, so I went with Pydantic.
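
On the Pydantic-model-vs-TypedDict question: `model_validate` accepts either shape, so both normalize to the same object. A sketch, with field names assumed from the SDK:

```python
from mistralai.models import Prediction as MistralPrediction  # import path assumed

from_dict = MistralPrediction.model_validate({'type': 'content', 'content': 'expected text'})
from_model = MistralPrediction(content='expected text')
assert from_dict == from_model  # assuming 'type' defaults to 'content'
```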

Q1: At what point would the prediction be added to the general ModelSettings, once at least two or three SDKs support it?


As far as I know, only OpenAI and Mistral support it right now.

Q2: Another approach, especially regarding usage, is prefill support. I wanted to link it here, but I agree with what you wrote, @DouweM, in #2825.

```python
        return MistralPrediction.model_validate(prediction)
    else:
        raise RuntimeError(
            f'Unsupported prediction type: {type(prediction)} for MistralModelSettings. Expected str, dict, or MistralPrediction.'
        )
```
Collaborator commented:

With the suggestion above this can be simplified a lot and we won't need this error anymore. As a note for the future, though: we don't need errors like this; we can assume the user is type-checking their code.

@DouweM DouweM self-assigned this Nov 10, 2025
@TKaluza TKaluza marked this pull request as draft November 15, 2025 09:57


Development

Successfully merging this pull request may close these issues.

Add Support for predicted_outputs in MistralModelSettings
