
Conversation

@sudo-tee
Owner

@sudo-tee sudo-tee commented Dec 17, 2025

This PR is the beginning of a very experimental inline buffer chat aimed at quick edits.

You can start a quick chat with the default keymap <leader>o/ and describe a small edit to make.

[screenshots]

This is still experimental and could really use some help with testing and polishing.

@sudo-tee
Owner Author

@cameronr

This is a whole new feature that lets you ask opencode for quick inline edits. It is not perfect, and probably never will be.

The format used for communication and replacement is largely inspired by Aider chat.

I tried to get the LLM to emit JSON for this, but it was too error-prone and not precise enough.

The LLM will respond with a format like so:

    <<<<<<< SEARCH
    function hello() {
     console.log("hello")
    }
    =======
    function hello() {
      console.log("hello");
    }
    >>>>>>> REPLACE

Where SEARCH is the original code and REPLACE is the new one. Think of it as a really streamlined version of a diff with no reliance on line numbers.
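
For illustration, here is a minimal Lua sketch (not the plugin's actual parser) of how one such block could be extracted and applied to a source string:

    -- Illustrative sketch only: extract one SEARCH/REPLACE block from the
    -- LLM response and apply it to the source text.
    local function apply_block(source, response)
      local search, replace = response:match(
        '<<<<<<< SEARCH\n(.-)\n=======\n(.-)\n>>>>>>> REPLACE'
      )
      if not search then
        return nil, 'no SEARCH/REPLACE block found'
      end
      -- Plain find (fourth argument true) so the code is matched
      -- literally, not as a Lua pattern.
      local s, e = source:find(search, 1, true)
      if not s then
        return nil, 'SEARCH text not found in source'
      end
      return source:sub(1, s - 1) .. replace .. source:sub(e + 1)
    end

The real implementation also has to cope with indentation drift and stray code fences, which is where most of the fragility comes from.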

But I find myself using it quite often for quick tasks

Examples:

  • Transform to a Lua array
  • Add Lua annotations
  • Write a conventional commit message for my changes #diff
  • Fix these warnings #warn
  • Complete this function

The #diff, #warn, etc. tags are a new way of adding context to prompts; they should also work with Opencode run.
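
Roughly, the tag parsing works like this (an illustrative sketch; in this PR the real logic lives in util.parse_quick_context_args):

    -- Illustrative sketch only: split "#tag" tokens out of the prompt and
    -- return the cleaned prompt plus the set of requested context tags.
    local function split_context_tags(input)
      local tags, words = {}, {}
      for word in input:gmatch('%S+') do
        local tag = word:match('^#([%w_]+)$')
        if tag then
          tags[tag] = true -- e.g. tags.diff, tags.warn
        else
          table.insert(words, word)
        end
      end
      return table.concat(words, ' '), tags
    end

    -- "Fix these warnings #warn" -> "Fix these warnings", { warn = true }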

You can also use Opencode quick_chat my message, or '<,'>Opencode quick_chat my message on a visual selection, or the keymap <leader>o/.

Code is still a bit rough, but I figured it was time to ask for feedback.

Let me know your thoughts on this.

Contributor

Copilot AI left a comment


Pull request overview

This PR introduces an experimental inline buffer chat feature aimed at quick code edits. The feature allows users to trigger a quick AI-assisted edit using the <leader>o/ keymap, which works with either the current cursor position or a visual selection. The AI responds using a SEARCH/REPLACE block format inspired by Aider, which is then parsed and applied to the buffer.

Key changes:

  • New quick chat functionality with spinner UI for visual feedback during processing
  • SEARCH/REPLACE block parser for structured code modifications
  • Context system refactoring to support instance-based overrides
  • Promise.finally method and improved async handling
  • Quick context syntax support (e.g., #buffer #git_diff) for selective context inclusion

Reviewed changes

Copilot reviewed 20 out of 20 changed files in this pull request and generated 28 comments.

Show a summary per file
  • lua/opencode/quick_chat.lua: Core quick chat implementation with session management, context generation, and response processing
  • lua/opencode/quick_chat/spinner.lua: Animated spinner UI component displayed at the cursor during quick chat processing
  • lua/opencode/quick_chat/search_replace.lua: Parser and applicator for the SEARCH/REPLACE block format used in AI responses
  • lua/opencode/util.lua: Added parse_quick_context_args and get_visual_range utility functions
  • lua/opencode/context/base.lua: Extracted ContextInstance base class supporting config overrides
  • lua/opencode/context/json_formatter.lua: JSON formatter for traditional API message parts
  • lua/opencode/context/plain_text_formatter.lua: Plain text formatter for quick chat LLM consumption
  • lua/opencode/context.lua: Refactored as a facade delegating to extracted context modules
  • lua/opencode/promise.lua: Added finally method and improved async arg handling
  • lua/opencode/api.lua: Added quick_chat API function with range support
  • lua/opencode/config.lua: Added quick_chat configuration section and keymap
  • lua/opencode/types.lua: Added type definitions for quick chat components
  • lua/opencode/init.lua: Integrated quick chat setup
  • tests/unit/util_spec.lua: Tests for the parse_quick_context_args function
  • tests/unit/search_replace_spec.lua: Comprehensive tests for SEARCH/REPLACE parsing and application
  • tests/unit/context_spec.lua: Tests for context instance config overrides
  • README.md: Documentation for the quick chat feature and updated configuration
  • test.lua: Test/scratch file with unrelated FizzBuzz/Fibonacci code (likely accidental)
  • lua/opencode/ui/completion/engines/base.lua: Minor refactor to static methods
  • lua/opencode/context.lua.backup: Backup file from refactoring (should not be in repo)


@cameronr
Collaborator

Interesting! I will take a look today or tomorrow.

@cameronr
Collaborator

I've played around with it and, I agree, it's surprisingly convenient and works surprisingly well. I only hit one case where it left a code fence in after the replacement. Given that it'll likely never be perfect, I wonder if it would make sense to have a way to view the "raw" results from opencode? That might help with bug reports and with refining prompts for different models to make sure they return data in the required form.

We could also mark it as experimental initially until we get more feedback.

@sudo-tee
Owner Author

sudo-tee commented Dec 18, 2025

@cameronr yes, there is a way:
[screenshot]

You can set both quick chat settings to true, so you can inspect the session in the opencode panel.
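
For example (the option names here are illustrative; check the README for the exact keys):

    -- Illustrative sketch only: the exact option names may differ.
    require('opencode').setup({
      quick_chat = {
        use_persistent_session = true, -- hypothetical: keep the quick chat in a real session
        show_in_chat_panel = true,     -- hypothetical: inspect the raw exchange in the panel
      },
    })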

There are still some issues with the wrong context being sent. The context code for this PR is quite messy; I am in the process of cleaning it up.

Good idea on marking it experimental; I will add notes about it in the README.

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 21 out of 22 changed files in this pull request and generated 8 comments.



@aweis89
Contributor

aweis89 commented Dec 19, 2025

This seems like a pretty significant architectural shift for this plugin: instead of using the ACP protocol and letting the backend handle the core tool implementation, we're shifting those responsibilities to the frontend. Are we sure we want to move in this direction?

It might be worth considering implementations that rely on Opencode's implementation of the ACP protocol to do the actual edits. One way I think this can be achieved (also inspired by Aider's comment prompts) is to write the user's prompt at the cursor position as a comment like # AI! <USER PROMPT>, then send a prompt telling the agent to read the current file, follow any instructions in the AI comments, and remove the comment prompt once complete. I tried this in a plugin I wrote a while ago, and it worked really well for inline edits while avoiding the need to do the edits on the plugin UI side.
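
A minimal sketch of the buffer side of that idea (assuming the buffer's commentstring is set; not code from any existing plugin):

    -- Illustrative sketch: write the user's prompt at the cursor as an
    -- "AI!" comment the agent is later told to look for and remove.
    local function insert_ai_comment(prompt)
      local cs = vim.bo.commentstring -- e.g. "# %s" or "// %s"
      if cs == nil or cs == '' then
        cs = '# %s'
      end
      local line = cs:format('AI! ' .. prompt)
      local row = vim.api.nvim_win_get_cursor(0)[1]
      -- Insert the comment on a new line below the cursor (0-indexed API).
      vim.api.nvim_buf_set_lines(0, row, row, false, { line })
    end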

@sudo-tee
Owner Author

@aweis89 This is a fair point.

I looked at the ACP protocol and it seemed pretty limited compared to the opencode HTTP server, but I only reviewed it from the perspective of the chat window. It does not support all slash commands, does not support session listing, etc.

I didn't know ACP could do inline edits. The main issue I had is that an inline edit can be performed in an unsaved buffer, while opencode reads files via its own tools, so it was not aware of unsaved changes.

As for the # AI! comments, the idea is interesting, but it would not work for editing a visual selection the way this PR does.

I will have to look again at ACP.

@aweis89
Contributor

aweis89 commented Dec 19, 2025

I didn't know ACP could do inline edits. The main issue I had is that an inline edit can be performed in an unsaved buffer, while opencode reads files via its own tools, so it was not aware of unsaved changes.

Tbh this is a problem for non-inline edits as well (albeit to a lesser extent), because ideally any open buffer would display the changes made by the AI agent without the user having to manually reload the file. The sidekick plugin, for example, handles this by using file watchers and then auto-reloading files on external edits. It would probably be ideal to find a solution for file reloading that works both for inline edits and for regular file modifications.
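
For reference, the simplest Neovim-side version of that, without libuv file watchers, is just re-checking timestamps on common events (a sketch, not sidekick's actual mechanism):

    -- Minimal auto-reload sketch: re-check file timestamps whenever Neovim
    -- regains focus or a buffer is entered, picking up external edits.
    vim.o.autoread = true
    vim.api.nvim_create_autocmd({ 'FocusGained', 'BufEnter', 'CursorHold' }, {
      callback = function()
        -- :checktime is not allowed in the command-line window.
        if vim.fn.getcmdwintype() == '' then
          vim.cmd('checktime')
        end
      end,
    })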

As for the # AI! comments, the idea is interesting, but it would not work for editing a visual selection the way this PR does.

Personally, I think "edit visual selection", where the user only wants the LLM to modify the highlighted portion, is a very niche feature which I haven't seen in any other plugin, and I'd question whether it's worth the complexity it introduces. Generally, when a user highlights text, it's more that they want it sent as part of the context so the model knows what the prompt pertains to; we can let the LLM decide what to edit based on the prompt, imho. This is how text selection works in sidekick and other plugins I've played with as well. If a user highlights text with an inline prompt, I think we should just add the highlighted text, plus metadata like the filename, line numbers, and cursor position, to the prompt, and then let the LLM decide what to do.

The main idea of the # AI! <PROMPT> comment is really just so the LLM knows that the prompt pertains to this specific line. But this isn't really needed when the user is highlighting text, since we can just send the highlighted text and its metadata to the LLM. (Just sending the cursor position, line number, and filename is often not sufficient, as LLMs are often lacking when it comes to line counting.)

Of course, this is all just one user's opinion, so take it with a grain of salt!

@sudo-tee
Owner Author

sudo-tee commented Dec 19, 2025

I didn't know ACP could do inline edits. The main issue I had is that an inline edit can be performed in an unsaved buffer, while opencode reads files via its own tools, so it was not aware of unsaved changes.

Tbh this is a problem for non-inline edits as well (albeit to a lesser extent), because ideally any open buffer would display the changes made by the AI agent without the user having to manually reload the file. The sidekick plugin, for example, handles this by using file watchers and then auto-reloading files on external edits. It would probably be ideal to find a solution for file reloading that works both for inline edits and for regular file modifications.

For this part, the plugin already knows when files are edited, since the opencode server sends an event, so we reload the file. Inline edits are done at the buffer level, so no reload is needed.

As for the other points, I see where you're coming from and will consider them in future iterations of the idea.

The behavior here is not set in stone; it will be released as an experimental feature to gather user input. Once you get the hang of it, quick edits on the cursor or a selection are really useful even with their flaws. It was a feature I used all the time in avante.nvim and wanted to replicate.

Thanks for the feedback though

@sudo-tee sudo-tee force-pushed the feat/inline-buffer-chat branch from cd7b41c to 6f678e9 on December 22, 2025 15:30
@sudo-tee
Owner Author

@cameronr

I greatly simplified the process and implemented a much simpler way of doing the replacement.

It is inspired by Codecompanion: instead of trying to make the LLM respect a specific format, it now simply asks for raw code and replaces either the current line or the selection.
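
Conceptually, the replacement boils down to something like this (a simplified sketch, not the exact code in the PR):

    -- Simplified sketch: take the model's raw code response and overwrite
    -- the current line or the visual selection with it.
    local function apply_raw_response(buf, response, start_line, end_line)
      -- Strip a surrounding code fence if the model added one anyway.
      response = response:gsub('^```%w*\n', ''):gsub('\n```%s*$', '')
      local lines = vim.split(response, '\n', { plain = true })
      -- start_line/end_line are 1-indexed; nvim_buf_set_lines is 0-indexed
      -- and end-exclusive.
      vim.api.nvim_buf_set_lines(buf, start_line - 1, end_line, false, lines)
    end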

Copy link
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 18 out of 19 changed files in this pull request and generated 15 comments.



@sudo-tee sudo-tee force-pushed the feat/inline-buffer-chat branch 3 times, most recently from d468dab to 280a97d on December 22, 2025 18:49
@sudo-tee sudo-tee marked this pull request as ready for review December 22, 2025 18:55
@cameronr
Collaborator

Nice! I'm traveling for the next few weeks but will look at it in early January.

- git_diff: the current
- current buffer: the current buffer instead of relying on the file reading (can chat with unsaved buffer)
sudo-tee and others added 19 commits December 23, 2025 14:24
Co-authored-by: Copilot <[email protected]>
- Refactored context system to use static modules (ChatContext, QuickChatContext, BaseContext) for better modularity and testability.
- Removed legacy context instance class and JSON/plain-text formatters; now each context type has its own implementation.
- Updated quick chat formatting, increased flexibility for selection and diagnostics context, and improved whitespace-tolerant patching.
- Improved README: clarified quick chat, moved acknowledgements, added links and instructions for prompt guard.
- Enhanced test coverage—especially for flexible whitespace patch application.
Co-authored-by: Copilot <[email protected]>
- Simplified whitespace normalization and Lua pattern handling.
- Improved parsing logic for SEARCH/REPLACE blocks with better error handling.
Adds the currently selected range or line number to the Quick Chat input UI prompt, improving user clarity on the chat scope.
Remove SEARCH/REPLACE block parsing in favor of direct code output mode for simpler implementation and improved user experience
- Refactor regex for code fence and inline code removal
- Update raw code insertion instructions for clarity and consistency
- Fix display label length calculation in file completion
Track when current_file is sent in context to improve status updates, highlights, and delta computation logic.
Refactor context read/write to access through get_context();
@sudo-tee sudo-tee force-pushed the feat/inline-buffer-chat branch from dae0a66 to db1bb05 on December 23, 2025 19:39
…iles

- Deleted legacy context.lua.backup and unrelated test.lua/test.txt
- Streamlined project by removing outdated or unnecessary files
@sudo-tee sudo-tee force-pushed the feat/inline-buffer-chat branch from aff58fc to 16b5017 on December 24, 2025 11:44
…ltering

- Enhance diagnostics to handle multiple selection ranges simultaneously
- Add visual filter indicator in context bar for scoped diagnostic display
- Improve chat context to automatically use stored selections for diagnostic filtering
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 26 out of 26 changed files in this pull request and generated 15 comments.



return false, 'Quick chat message cannot be empty'
end

return true

Copilot AI Dec 24, 2025


The function validate_quick_chat_prerequisites always returns a boolean for the first return value, but the error handling code checks for both valid being false AND error_msg being nil or empty. However, when validation fails (lines 241-245), an error message is always provided. This means the or 'Unknown error' fallback on line 352 will never be used, but it's good defensive programming.

However, line 248 returns true without a second return value, which means error_msg will be nil when validation succeeds. This is correct but could be more explicit for clarity.

Suggested change:

    - return true
    + return true, nil

Comment on lines +167 to +169
local start_line = math.floor(range.start) - 1 -- Convert to 0-indexed integer
local end_line = math.floor(range.stop) - 1 -- Convert to 0-indexed integer
vim.api.nvim_buf_set_lines(buf, start_line, end_line + 1, false, lines)

Copilot AI Dec 24, 2025


The apply_raw_code_response function uses math.floor on lines 167-168 to convert range values to integers, but the range values are expected to already be integers based on the function signature and usage. If the range values can be floats, this conversion is necessary; otherwise, it's redundant. Consider documenting why this conversion is needed or verifying that range values are always integers at the source.

Comment on lines +342 to +347
local test_name = input and input ~= '' and ('parses "' .. input .. '"') or 'handles empty/nil input'

it(test_name, function()
parse_and_verify(input, expected_prompt, context_checks)
end)
end

Copilot AI Dec 24, 2025


The test description on line 342 says "handles empty/nil input" but the actual test only verifies the returned prompt and config structure. It doesn't explicitly test that the function handles nil input gracefully in all edge cases. The test should verify that both empty string and nil inputs produce the expected default config with enabled = true.

sudo-tee and others added 2 commits December 24, 2025 10:14
Add 200ms debounced context loading to prevent excessive reloads during rapid buffer/diagnostic changes and add guard to skip loading when no active session exists
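
The debounce itself is the usual timer pattern (a sketch, assuming vim.uv; use vim.loop on older Neovim):

    -- Generic debounce sketch: restart the timer on every call, so the
    -- wrapped function only runs once things have been quiet for `ms`.
    local function debounce(fn, ms)
      local timer = vim.uv.new_timer()
      return function(...)
        local args = { ... }
        timer:stop()
        timer:start(ms, 0, vim.schedule_wrap(function()
          fn(unpack(args))
        end))
      end
    end

    local reload_context = debounce(function() --[[ reload context ]] end, 200)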