feat(chat_async): Adds tool_mode to choose between sequential or concurrent tool calling
#488
Conversation
Also applies to `stream_async()`
R/chat.R
Outdated
#' best for interactive applications, especially when a tool may involve
#' an interactive user interface. Concurrent mode is best for automated
#' scripts or non-interactive applications.
chat_async = function(..., tool_mode = c("sequential", "concurrent")) {
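For context, a hypothetical call using the new argument might look like this (a sketch assuming ellmer's `Chat` API; the provider and prompts are illustrative and not from this PR):

```r
library(ellmer)

chat <- chat_openai()  # illustrative provider; tools registered elsewhere

# Sequential: run requested tool calls one at a time, e.g. when a tool
# drives an interactive user interface.
p1 <- chat$chat_async(
  "What's the weather in Oslo?",
  tool_mode = "sequential"
)

# Concurrent: start all requested tool calls at once, suited to
# automated, non-interactive scripts.
p2 <- chat$chat_async(
  "Compare the weather in Oslo and Lima",
  tool_mode = "concurrent"
)

# Both return promises; per the PR, the same argument applies to
# $stream_async().
```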
Do you think we should default to concurrent or sequential tool calling in the async chat/stream methods?
- concurrent: status quo, makes sense for non-interactive use cases
- sequential: best when the chat is powering a user interface, e.g. shinychat
Ultimately, it's unlikely to matter much for most tools unless they're async or correctly set up to use true parallel processing. "sequential" seems like a good default, but if you've gone through the trouble of setting up future or mirai, it'd be nicer for async tool calling to just work than to also need to change this argument value.
Go with your gut. I think shinychat is going to be 99% of the users of this function 😄
Thanks! Okay, my gut says "concurrent". "sequential" is the weirder use-case, and the distinction only becomes relevant at the most extreme end of the async spectrum: async streaming with async tool functions that do actual async work.
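The two modes can be sketched conceptually with the promises package (hypothetical helper names, not ellmer's internals; assumes each tool call returns a promise):

```r
library(promises)

# Sequential mode: await each tool's result before starting the next.
run_sequential <- function(tool_calls) {
  Reduce(
    function(prev, call) {
      then(prev, function(acc) {
        then(call(), function(value) c(acc, list(value)))
      })
    },
    tool_calls,
    init = promise_resolve(list())
  )
}

# Concurrent mode: start every tool call immediately, then gather results.
run_concurrent <- function(tool_calls) {
  promise_all(.list = lapply(tool_calls, function(call) call()))
}
```

Unless a tool actually does asynchronous work (e.g. via later, future, or mirai), the two modes behave identically, which is the point made above about the distinction only mattering at the extreme end of the async spectrum.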
* main: (26 commits)
  - Drop `completed` (tidyverse#495)
  - Add PortkeyAI support (tidyverse#471)
  - feat: Adds `stream = "content"` option to `$stream()` and `$stream_async()` (tidyverse#494)
  - feat(Chat): `tool_request` and `tool_result` callbacks (tidyverse#493)
  - feat: Add `tool_reject()` (tidyverse#490)
  - fix: Tweak simple tools test to avoid Claude's empty strings (tidyverse#492)
  - feat(chat_async): Adds `tool_mode` to choose between sequential or concurrent tool calling (tidyverse#488)
  - And azure :|
  - Update snapshot tests; fix bad bedrock model choice
  - chore: Remove trailing whitespace from NEWS (tidyverse#489)
  - Add (some) functions for listing models (tidyverse#454)
  - Move `chat$extract_data_parallel()` to `parallel_chat_structured()` (tidyverse#486)
  - refactor: Use generators for tool invocation (tidyverse#487)
  - refactor: Remove private invoke_tools methods (tidyverse#485)
  - Automatically convert tool inputs (tidyverse#463)
  - Allow `$extract_data_parallel()` to return tokens + cost (tidyverse#449)
  - Start work on programming vignette (tidyverse#458)
  - Revise type coercion (tidyverse#484)
  - Tweak `chat_openai_test()` (tidyverse#483)
  - refactor(tools): `match_tools()` and `tool_results_as_turn()` (tidyverse#480)
  - ...
Also applies to `stream_async()`