feat: experimental inline buffer chat #142
base: main
Conversation
This is a whole new feature that lets you ask opencode for quick inline edits. It is not perfect, and probably never will be. The format used for communication and replacement is largely inspired by Aider chat. I tried to make the LLM emit JSON for this, but it was too error-prone and not precise enough. Instead, the LLM responds with a SEARCH/REPLACE format, where SEARCH is the original code and REPLACE is the new code. Think of it as a streamlined version of a diff with no reliance on line numbers. I find myself using it quite often for quick tasks. Examples:
The #diff, #warn, etc. are a new way of adding context to prompts. The code is still a bit rough, but I figured it was time to ask for feedback. Let me know your thoughts on this.
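For illustration, an Aider-style SEARCH/REPLACE block typically looks like the following (the marker syntax shown here is Aider's; the exact markers this PR emits may differ):

```
<<<<<<< SEARCH
local function greet()
  print('hello')
end
=======
local function greet(name)
  print('hello, ' .. name)
end
>>>>>>> REPLACE
```

The SEARCH text must match the buffer contents exactly (or within the patcher's whitespace tolerance), which is what removes the need for line numbers.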
Pull request overview
This PR introduces an experimental inline buffer chat feature aimed at quick code edits. The feature allows users to trigger a quick AI-assisted edit using the <leader>o/ keymap, which works with either the current cursor position or a visual selection. The AI responds using a SEARCH/REPLACE block format inspired by Aider, which is then parsed and applied to the buffer.
Key changes:
- New quick chat functionality with spinner UI for visual feedback during processing
- SEARCH/REPLACE block parser for structured code modifications
- Context system refactoring to support instance-based overrides
- Promise.finally method and improved async handling
- Quick context syntax support (e.g., #buffer #git_diff) for selective context inclusion
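As a rough illustration of the quick context syntax above, here is a minimal, hypothetical sketch of splitting #-prefixed context tokens out of a prompt. This is not the plugin's actual parse_quick_context_args implementation, just one way such parsing could work:

```lua
-- Hypothetical sketch: collect leading #tokens, return them plus the
-- remaining prompt text. Underscores are allowed in token names.
local function parse_quick_context(input)
  local contexts = {}
  local rest = input or ''
  while true do
    local token, tail = rest:match('^#([%w_]+)%s*(.*)$')
    if not token then
      break
    end
    contexts[token] = true
    rest = tail
  end
  return contexts, rest
end

local ctx, prompt = parse_quick_context('#buffer #git_diff fix the loop')
-- ctx.buffer == true, ctx.git_diff == true, prompt == 'fix the loop'
```

Tokens only count as context directives while they lead the input; a `#` appearing later in the prompt is left untouched.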
Reviewed changes
Copilot reviewed 20 out of 20 changed files in this pull request and generated 28 comments.
| File | Description |
|---|---|
| lua/opencode/quick_chat.lua | Core quick chat implementation with session management, context generation, and response processing |
| lua/opencode/quick_chat/spinner.lua | Animated spinner UI component displayed at cursor during quick chat processing |
| lua/opencode/quick_chat/search_replace.lua | Parser and applicator for SEARCH/REPLACE block format used in AI responses |
| lua/opencode/util.lua | Added parse_quick_context_args and get_visual_range utility functions |
| lua/opencode/context/base.lua | Extracted ContextInstance base class supporting config overrides |
| lua/opencode/context/json_formatter.lua | JSON formatter for traditional API message parts |
| lua/opencode/context/plain_text_formatter.lua | Plain text formatter for quick chat LLM consumption |
| lua/opencode/context.lua | Refactored as facade delegating to extracted context modules |
| lua/opencode/promise.lua | Added finally method and improved async arg handling |
| lua/opencode/api.lua | Added quick_chat API function with range support |
| lua/opencode/config.lua | Added quick_chat configuration section and keymap |
| lua/opencode/types.lua | Added type definitions for quick chat components |
| lua/opencode/init.lua | Integrated quick chat setup |
| tests/unit/util_spec.lua | Tests for parse_quick_context_args function |
| tests/unit/search_replace_spec.lua | Comprehensive tests for SEARCH/REPLACE parsing and application |
| tests/unit/context_spec.lua | Tests for context instance config override |
| README.md | Documentation for quick chat feature and updated configuration |
| test.lua | Test/scratch file with unrelated FizzBuzz/Fibonacci code (likely accidental) |
| lua/opencode/ui/completion/engines/base.lua | Minor refactor to static methods |
| lua/opencode/context.lua.backup | Backup file from refactoring (should not be in repo) |
Interesting! I will take a look today or tomorrow.
I've played around with it and, I agree, it's surprisingly convenient and works surprisingly well. I only had one time where it left a code fence in after the replacement. Given that it'll likely never be perfect, I wonder if it would make sense to have a way to view the "raw" results from opencode? That might help with bug reports and help refine prompts for different models to make sure they return data in the required form. We could also mark it as experimental initially until we get more feedback.
@cameronr Yes, there is a way: you can set both quick chat settings to true so you can inspect the session in the opencode panel. There are still some issues with the wrong context being sent; the context code is quite messy in this PR and I am in the process of cleaning it up. Good idea on marking it experimental, I will add notes about it in the README.
Force-pushed 8455457 to 350b69d
Pull request overview
Copilot reviewed 21 out of 22 changed files in this pull request and generated 8 comments.
This seems like a pretty significant architectural shift for this plugin: instead of using the ACP protocol to let the backend handle the core ACP tool implementation, we're shifting those responsibilities to the frontend. Are we sure we want to move in this direction? It might be worth considering implementations that rely on Opencode's implementation of the ACP protocol to do the actual edits. One way I think this can be achieved (inspired also by Aider's comment prompts) is to write the user's prompt at the cursor position as a comment.
@aweis89 This is a fair point. I looked at the ACP protocol and it seemed pretty limited compared to the opencode HTTP server, but I only reviewed it from the perspective of the chat window. It does not support all slash commands, does not support session listing, etc. I didn't know ACP could do inline edits. The main issue I had with inline edits is that they can be performed in an unsaved buffer, while opencode reads files via its own tools, so it was not aware of unsaved changes. I will have to look again at ACP.
Tbh this is a problem for non-inline edits as well (albeit to a lesser extent), because ideally any open buffer would display the changes made by the AI agent without the user having to manually reload the file. The sidekick plugin, for example, handles this by using file watchers and auto-reloading files on external edits. It would probably be ideal to find a solution for file reloading that works both for inline edits and for regular file modifications.
Personally I think "edit visual selection", where the user only wants the LLM to modify the highlighted portion, is a very niche feature which I haven't seen in any other plugins, and I'd question whether it's worth the complexity this introduces. Generally, when a user highlights text, it's more that they want it sent as part of the context so the LLM knows what the prompt pertains to; we can let the LLM decide what to edit based on the prompt, imho. This is how text selection works in sidekick and other plugins I've played with as well. I think if a user highlights text with an inline prompt, we should just add the highlighted text, plus metadata like filename, line numbers, and cursor position, to the prompt, and then let the LLM decide what to do. Of course this is all just one user's opinion, so take it with a grain of salt!
For this part, the plugin already knows when files are edited since the opencode server sends an event, so we reload the file. Inline edits are done at the buffer level, so we don't need to reload the file. As for the other points, I see them and will consider them in future versions of the idea. The behavior here is not set in stone and will be released as an experimental feature to gather user input. Once you get the hang of it, quick edits on the cursor or a selection are really useful even with their flaws. It was a feature I used all the time in avante.nvim and wanted to replicate. Thanks for the feedback!
Force-pushed cd7b41c to 6f678e9
I greatly simplified the process and implemented a much simpler way of doing the replacement.
Force-pushed 9ecd416 to 9020c78
Pull request overview
Copilot reviewed 18 out of 19 changed files in this pull request and generated 15 comments.
Force-pushed d468dab to 280a97d
Nice! I'm traveling for the next few weeks but will look at it in early January.
- git_diff: the current git diff
- current buffer: the current buffer contents instead of relying on reading the file from disk (can chat with an unsaved buffer)
Co-authored-by: Copilot <[email protected]>
- Refactored context system to use static modules (ChatContext, QuickChatContext, BaseContext) for better modularity and testability.
- Removed legacy context instance class and JSON/plain-text formatters; each context type now has its own implementation.
- Updated quick chat formatting, increased flexibility for selection and diagnostics context, and improved whitespace-tolerant patching.
- Improved README: clarified quick chat, moved acknowledgements, added links and instructions for prompt guard.
- Enhanced test coverage, especially for flexible whitespace patch application.
Co-authored-by: Copilot <[email protected]>
- Simplified whitespace normalization and Lua pattern handling.
- Improved parsing logic for SEARCH/REPLACE blocks with better error handling.
Adds the currently selected range or line number to the Quick Chat input UI prompt, improving user clarity on the chat scope.
Remove SEARCH/REPLACE block parsing in favor of direct code output mode for simpler implementation and improved user experience
- Refactor regex for code fence and inline code removal
- Update raw code insertion instructions for clarity and consistency
- Fix display label length calculation in file completion
Co-authored-by: Copilot <[email protected]>
Track when current_file is sent in context to improve status updates, highlights, and delta computation logic. Refactor context read/write to access through get_context();
Force-pushed dae0a66 to db1bb05
…iles
- Deleted legacy context.lua.backup and unrelated test.lua/test.txt
- Streamlined project by removing outdated or unnecessary files
Force-pushed aff58fc to 16b5017
…ltering
- Enhance diagnostics to handle multiple selection ranges simultaneously
- Add visual filter indicator in context bar for scoped diagnostic display
- Improve chat context to automatically use stored selections for diagnostic filtering
Force-pushed c3b83ad to 4e1b968
Pull request overview
Copilot reviewed 26 out of 26 changed files in this pull request and generated 15 comments.
```lua
    return false, 'Quick chat message cannot be empty'
  end
  ...
  return true
```
Copilot AI · Dec 24, 2025
The function validate_quick_chat_prerequisites always returns a boolean for the first return value, but the error handling code checks for both valid being false AND error_msg being nil or empty. However, when validation fails (lines 241-245), an error message is always provided. This means the or 'Unknown error' fallback on line 352 will never be used, but it's good defensive programming.
However, line 248 returns true without a second return value, which means error_msg will be nil when validation succeeds. This is correct but could be more explicit for clarity.
Suggested change:

```diff
- return true
+ return true, nil
```
```lua
local start_line = math.floor(range.start) - 1 -- Convert to 0-indexed integer
local end_line = math.floor(range.stop) - 1 -- Convert to 0-indexed integer
vim.api.nvim_buf_set_lines(buf, start_line, end_line + 1, false, lines)
```
Copilot AI · Dec 24, 2025
The apply_raw_code_response function uses math.floor on lines 167-168 to convert range values to integers, but the range values are expected to already be integers based on the function signature and usage. If the range values can be floats, this conversion is necessary; otherwise, it's redundant. Consider documenting why this conversion is needed or verifying that range values are always integers at the source.
```lua
local test_name = input and input ~= '' and ('parses "' .. input .. '"') or 'handles empty/nil input'
...
it(test_name, function()
  parse_and_verify(input, expected_prompt, context_checks)
end)
end
```
Copilot AI · Dec 24, 2025
The test description on line 342 says "handles empty/nil input" but the actual test only verifies the returned prompt and config structure. It doesn't explicitly test that the function handles nil input gracefully in all edge cases. The test should verify that both empty string and nil inputs produce the expected default config with enabled = true.
Co-authored-by: Copilot <[email protected]>
Add 200ms debounced context loading to prevent excessive reloads during rapid buffer/diagnostic changes and add guard to skip loading when no active session exists

This PR is the beginning of a very experimental inline buffer chat aimed at quick edits.
You can start a quick chat with the default keymap <leader>o/ and enter a small edit to make. This is still experimental and would really benefit from help with testing and polishing.