@mi12-root
Contributor
This adds structured output streaming via `Partial`, along with the tool loop.

Also adds a `Partial` associated type to the `Generable` protocol, defaulting to `Self`.

All implementations currently throw "not implemented" errors.
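A minimal sketch of the protocol change described above; the protocol body shown here is illustrative, not the actual API surface:

```swift
// Sketch only: the associated type defaults to Self, so existing
// conformances that never declare a Partial keep compiling unchanged.
protocol Generable {
  associatedtype Partial = Self

  // Other protocol requirements elided.
}
```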
`@Generable` now emits a nested subtype for streaming use cases:

```swift
@Generable
struct T {
  // Fields here.
}
```

now expands to:

```swift
struct T {
  // Fields here.
}

extension T: Generable {
  struct Partial {
    // Partial fields.
  }

  // Other generated members.
}
```

The rules for generating the partial subtype are as follows:

- `T?` => `T.Partial`
- otherwise, `T` => `T.Partial`

This should help avoid unwanted automatic defaults.
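Applying those rules to a concrete type might look like the following. The type and field names are illustrative, and the field types are assumed to conform to `Generable` (so `String.Partial` resolves via the `Self` default):

```swift
@Generable
struct Person {
  var name: String
  var nickname: String?
}

// Hypothetical expansion following the rules above:
extension Person: Generable {
  struct Partial {
    var name: String.Partial
    var nickname: String.Partial  // `String?` maps to `String.Partial`
  }
}
```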
Currently none of the backends supports streaming, so the streaming tests are marked disabled.
Tool use is still not supported.
When a message is truncated, OpenAI returns an `incomplete` event rather than a `completed` event.
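A stream consumer therefore has to treat both events as terminal. A sketch of that handling; the event enum here is hypothetical, not the actual backend type:

```swift
// Hypothetical event type for the OpenAI streaming backend.
enum ResponseEvent {
  case delta(String)
  case completed
  case incomplete  // emitted instead of `completed` when the message is truncated
}

// Both terminal events end the stream; only their meaning differs.
func isTerminal(_ event: ResponseEvent) -> Bool {
  switch event {
  case .completed, .incomplete: return true
  case .delta: return false
  }
}
```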
- Unify generateResponse and generateResponseStream implementations
- Both methods now use the same underlying streaming mechanism
- Reduce code duplication and improve consistency
- Maintain same API contract and error handling
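One common way to achieve this unification, and roughly what the list above describes, is to implement the non-streaming call by draining the stream. The function names and signatures below are illustrative:

```swift
// Sketch: backend-specific streaming elided behind this stub.
func generateResponseStream(prompt: String) -> AsyncThrowingStream<String, Error> {
  AsyncThrowingStream { continuation in
    continuation.yield("partial output")
    continuation.finish()
  }
}

// The non-streaming path collects the chunks the streaming path
// yields, so both share one underlying mechanism and error handling.
func generateResponse(prompt: String) async throws -> String {
  var result = ""
  for try await chunk in generateResponseStream(prompt: prompt) {
    result += chunk
  }
  return result
}
```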
Removed the `sending` keyword from all `AsyncThrowingStream` return types
across the streaming API. This affects:

- LLM protocol methods (replyStream functions)
- All backend implementations (OpenAI, Apple, MLX)
- Test implementations (FakeLLM, CrashingLLM)
- Documentation examples

The streaming functionality remains unchanged; only the function
signatures have been updated to drop the `sending` modifier.
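The signature change looks like this; the parameter list is illustrative:

```swift
// Before:
// func replyStream(to prompt: String) -> sending AsyncThrowingStream<String, Error>

// After:
func replyStream(to prompt: String) -> AsyncThrowingStream<String, Error>
```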
@mi12-root mi12-root merged commit df0b659 into main Sep 28, 2025
2 checks passed
@mi12-root mi12-root deleted the streaming branch September 28, 2025 16:00