Conversation

@shuyangli
Member

No description provided.

@shuyangli shuyangli changed the title from "Remove existing StoredInference PyO3 type" to "[Stacked] Remove existing StoredInference PyO3 type" on Nov 13, 2025
@shuyangli shuyangli force-pushed the sl/remove-stored-inference-pyo3 branch 4 times, most recently from 890f2dc to ab29765, on November 13, 2025 at 22:03
@shuyangli shuyangli force-pushed the sl/create-python-list-get-inferences-api branch from 900ea55 to 3f095b0 on November 13, 2025 at 22:03
@shuyangli shuyangli force-pushed the sl/remove-stored-inference-pyo3 branch from ab29765 to 3efd6b8 on November 13, 2025 at 22:23
@shuyangli shuyangli marked this pull request as ready for review November 13, 2025 22:45

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Comment on lines 386 to +390

  } else {
-     deserialize_from_pyobj(py, obj)
+     let wire: StoredInference = deserialize_from_pyobj(py, obj)?;
+     let storage = match wire.to_storage(config) {
+         Ok(s) => s,
+         Err(e) => return Err(tensorzero_core_error(py, &e.to_string())?),

P1: Preserve datapoint deserialization in render_samples

The new deserialize_from_stored_sample branch now treats every non-Datapoint object as a StoredInference, deserializing it with StoredInference and calling to_storage. Previously, arbitrary Python objects matching the StoredSample schema (including manually constructed datapoints) were deserialized as a StoredSampleItem. After this change, passing a dataclass or dict for a stored datapoint to experimental_render_samples will fail with "Failed to deserialize JSON to StoredInference" instead of being accepted. This breaks the documented ability to render datapoints that were created in Python without going through the Rust bindings.

Useful? React with 👍 / 👎.
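
One way to keep that behavior, offered here only as a sketch, is to try the broader stored-sample schema first and fall back to the inference wire type. The snippet below illustrates that fallback-deserialization pattern with plain serde/serde_json rather than the PyO3 helpers used in this PR; DatapointWire, StoredInferenceWire, StoredSampleWire, and their fields are hypothetical stand-ins, not the actual TensorZero types.

```rust
use serde::Deserialize;

// Hypothetical stand-ins for the real wire types; the fields are illustrative only.
#[derive(Debug, Deserialize)]
struct DatapointWire {
    dataset_name: String,
    input: serde_json::Value,
}

#[derive(Debug, Deserialize)]
struct StoredInferenceWire {
    function_name: String,
    input: serde_json::Value,
}

// An untagged enum makes serde try each variant in order, so a payload that
// matches the datapoint schema is still accepted instead of being forced
// through the inference path and failing.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum StoredSampleWire {
    Datapoint(DatapointWire),
    Inference(StoredInferenceWire),
}

fn main() -> Result<(), serde_json::Error> {
    // A manually constructed datapoint-shaped payload...
    let datapoint_json = r#"{ "dataset_name": "demo", "input": { "q": "hi" } }"#;
    // ...and an inference-shaped payload both deserialize successfully.
    let inference_json = r#"{ "function_name": "answer", "input": { "q": "hi" } }"#;

    let a: StoredSampleWire = serde_json::from_str(datapoint_json)?;
    let b: StoredSampleWire = serde_json::from_str(inference_json)?;
    println!("{a:?}");
    println!("{b:?}");
    Ok(())
}
```

In the bindings themselves, the analogous change would be attempting the existing StoredSampleItem deserialization before the new StoredInference path, though which schema should take precedence is ultimately a call for the PR author.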

@shuyangli shuyangli force-pushed the sl/create-python-list-get-inferences-api branch 4 times, most recently from 4c5e1e4 to cd26846, on November 14, 2025 at 15:55
@shuyangli shuyangli force-pushed the sl/create-python-list-get-inferences-api branch 2 times, most recently from 3ceea3b to e4349f9, on November 14, 2025 at 21:53
@shuyangli shuyangli force-pushed the sl/remove-stored-inference-pyo3 branch from 3efd6b8 to 70bba41 on November 14, 2025 at 21:59
@shuyangli shuyangli force-pushed the sl/create-python-list-get-inferences-api branch from e4349f9 to c8955bf on November 15, 2025 at 00:55
anndvision and others added 10 commits November 15, 2025 15:09
* add limit and offset support

* pass values as parameters

* explicit reverse chronological ordering

* refine docstrings

* update test
* Improve error information in rate limiting (fix #3556)

* Fix

* Fix

* Fix
…ools (#4599)

* wip

* implemented migration 0041 to add tool columns

* updated migration to fix CH issue and add provider_tool column

* updated migration for ModelInference and dynamic_provider_tools

* ModelInference -> BatchModelInference

* fixed documentation

* renamed Tool -> ClientSideFunctionTool

* refactored tool.rs (except tests) to new write pattern

* fixed tests in tool file

* mostly refactored function.rs

* everything compiles

* added tool deserializers

* almost all tests fixed

* tool_config -> tool_params

* unit tests pass

* temporarily removed rollback instructions

* built bindings

* re-enabled the rollback; added a sleep

* for real re-enable the rollback

* try a longer sleep

* removed sleep

* fixed migration 0041 by modifying EpisodeByID views to use corrected syntax

* removed unnecessary function in eval helper tests

* removed stray comment

* cleaned up stray comments

* wip

* all dataset tests pass

* I think all e2e tests pass

* clippies

* removed stray comment

* fixed some outstanding clickhouse tests

* fixed remaining clickhouse tests

* fixed issue with legacy endpoint

* Regenerate ModelInferenceCache fixtures

* removed stray comment

* removed unused serializer

* fixed one more stray query

* fixed issues with tests that I created

* wip

* added additional documentation of tool related types

* made provider_tools mandatory across codebase

* added a comment for why this is a custom deserializer

* cleaned up test helper

* fixed remaining PR issues

* fixed test for union all

* fixed all unit tests

* built bindings

* removed new_for_test and fixed UI type issue

* Regenerate ModelInferenceCache fixtures

* added pyo3 getters to ClientSideFunctionTool

* fixed tests that broke with PR comments

* wip

* fixed Python tests

* fixed Python tests

* fixed a couple more failing client tests

* fixed batch tests by adding one more deserializer case

* added some more unit tests

* fixed bad test assertion

* fixed bad test assertion

* batch tests should pass now

* wip on mocking

* wip on mocking

* tests run but fail

* wip on macros

* full coverage but definitely broken

* wip

* some more of these passed

* some more of these passed

* fixed a couple more

* fixed a couple more

* fixed a couple more

* fixed a couple more

* all mock tests pass

* clippies pass

* removed stray config.toml alias

* reverted changes to openai.rs

* reverted changes to test checking functions

* removed unnecessary toml file

* updated CI settings to use mocked tests in merge queue and live ones daily

* removed extra field from model, fixed clippies

* fixed clippies

* set up config for provider types for mock batch URL

* wip

* cleaned up batch URL mock injection for gcp

* injected batch API base to OpenAI config construction

* removed all unnecessary loose assertions for mock batch tests

* fixed all tests for batch + mock

* removed stray &s

* cfged out a step

* skip extra fields for bindings

* fixed lockfile

* skip loading fixtures in mock batch tests

* updated internal tool handling to allow sending all with a limited list of allowed tools

* changed GCP vertex preparation of tools to named function

* updated GCP vertex provider to send all but then specify allowed tools

* updated google AI studio to send but limit tools

* updated groq to send all tools with allowed

* implemented all allowed tools in prod

* updated all providers

* added unit tests for anthropic and strict tools available

* unit tests pass

* fixed responses api

* AllAllowedTools -> OnlyAllowedTools

* AllowedToolsChoice::Explicit

* fireworks works

* stuff works besides vllm / sgl

* added handling for sglang and vllm

* fixed issue with python bindings

* all tests passing

* fixed clippies

* removed extra schemas

* removed nextest changes

* removed nextest changes

* cleaned up imports & tests

* cleaned up clippies

* removed stray change

* cleaned up diff some more

* fixed generated types and typechecked

* fixed PR comments

* fixed more PR comments

---------

Co-authored-by: TensorZero Bot <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Claude <[email protected]>
Aaron1011 and others added 5 commits November 16, 2025 15:18
* Use fresh ClickHouse db for test_count_datasets

This prevents concurrent test executions from affecting the
final count

* Add 'clickhouse' to the test name to give it a longer timeout
* Clean up docs/tutorial

* Clean up docs/tutorial
@shuyangli shuyangli force-pushed the sl/remove-stored-inference-pyo3 branch from 70bba41 to eadc19c on November 16, 2025 at 16:11