chore(wren-ai-service): update sql diagnosis #1956
Conversation
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests
Tip 👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.
Please see the documentation for more information. Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
```
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
wren-ai-service/src/pipelines/generation/sql_diagnosis.py (1)
85-89: Fix return-shape/type mismatch between generate and post_process.
generate_sql_diagnosis returns a tuple (model_reply, generator_name) but post_process expects a dict with "replies". This will raise at runtime (tuple has no .get). Also post_process’ return type is annotated as str but returns a dict. Return only the raw generator result and align the annotation.
Apply:
```diff
@@
 async def generate_sql_diagnosis(
     prompt: dict,
     generator: Any,
     generator_name: str,
 ) -> dict:
-    return await generator(prompt=prompt.get("prompt")), generator_name
+    # trace_cost reads generator_name from args; return only the raw model response
+    return await generator(prompt=prompt.get("prompt"))
@@
 async def post_process(
     generate_sql_diagnosis: dict,
-) -> str:
+) -> dict:
     return orjson.loads(generate_sql_diagnosis.get("replies")[0])
```

Also applies to: 91-96
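For context, a minimal standalone sketch of the failure mode described above; the reply payload shape here is illustrative only, not taken from the pipeline:

```python
import orjson

def post_process(generate_sql_diagnosis: dict) -> dict:
    # Mirrors the suggested contract: expects a dict carrying raw replies
    return orjson.loads(generate_sql_diagnosis.get("replies")[0])

reply = {"replies": ['{"reasoning": "missing join key"}']}

print(post_process(reply))            # {'reasoning': 'missing join key'}
# post_process((reply, "generator"))  # would raise: 'tuple' object has no attribute 'get'
```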
🧹 Nitpick comments (2)
wren-ai-service/src/pipelines/generation/sql_diagnosis.py (2)
105-113: Tighten JSON output contract (strict schema).
Enable strict JSON schema to reduce non-JSON or extra-field outputs.
Apply:
```diff
 SQL_DIAGNOSIS_MODEL_KWARGS = {
     "response_format": {
         "type": "json_schema",
+        "strict": True,
         "json_schema": {
             "name": "sql_diagnosis_result",
             "schema": SqlDiagnosisResult.model_json_schema(),
         },
     }
 }
```

Verify your LLM provider supports the strict flag in response_format.
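As a side note, a small sketch of how a Pydantic model both supplies that schema and can validate the parsed reply; the field set below is an assumption for illustration, the real SqlDiagnosisResult lives in sql_diagnosis.py:

```python
from pydantic import BaseModel

class SqlDiagnosisResult(BaseModel):  # assumed minimal field set for illustration
    reasoning: str

# This is what response_format["json_schema"]["schema"] receives
schema = SqlDiagnosisResult.model_json_schema()
print(schema["properties"])  # {'reasoning': {'title': 'Reasoning', 'type': 'string'}}

# Validating the parsed reply catches non-conforming output even without strict mode
parsed = SqlDiagnosisResult.model_validate({"reasoning": "GROUP BY column is missing"})
print(parsed.reasoning)
```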
30-31: Minor prompt wording polish.
Small clarity/grammar tweak; no behavior change.
```diff
-4. Reasoning should be in the language same as the language user provided in the INPUTS section.
-5. Reasoning should be concise and to the point and within 50 words.
+4. Reasoning must be in the same language as the user's input in the INPUTS section.
+5. Keep it concise (≤ 50 words).
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- wren-ai-service/src/pipelines/generation/sql_diagnosis.py (1 hunks)
- wren-ai-service/src/web/v1/services/ask.py (0 hunks)
- wren-ai-service/src/web/v1/services/ask_feedback.py (1 hunks)
💤 Files with no reviewable changes (1)
- wren-ai-service/src/web/v1/services/ask.py
🧰 Additional context used
🧬 Code graph analysis (1)
wren-ai-service/src/web/v1/services/ask_feedback.py (4)
- wren-ai-service/src/pipelines/generation/sql_diagnosis.py (1)
  - run (138-158)
- wren-ai-service/src/pipelines/generation/sql_correction.py (1)
  - run (157-187)
- wren-ai-service/src/pipelines/generation/utils/sql.py (1)
  - run (29-69)
- wren-ai-service/src/web/v1/services/ask.py (1)
  - AskResult (49-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: pytest
- GitHub Check: pytest
- GitHub Check: Analyze (go)
🔇 Additional comments (3)
wren-ai-service/src/pipelines/generation/sql_diagnosis.py (2)
37-38: Final answer JSON alignment looks good.
Model and prompt now expose only reasoning; consistent with flow changes.
137-158: Make language optional and default to English to avoid caller breakage.
ask_feedback.run doesn't pass language; the current signature requires it and will raise. Change the signature to accept Optional[str] with a default and send a safe default into the pipeline inputs. File: wren-ai-service/src/pipelines/generation/sql_diagnosis.py (run method, lines 137-158).
Apply:
```diff
@@
-from typing import Any, List
+from typing import Any, List, Optional
@@
-    async def run(
+    async def run(
         self,
         contexts: List[Document],
         original_sql: str,
         invalid_sql: str,
         error_message: str,
-        language: str,
+        language: Optional[str] = None,
     ):
@@
-        inputs={
+        inputs={
             "documents": contexts,
             "original_sql": original_sql,
             "invalid_sql": invalid_sql,
             "error_message": error_message,
-            "language": language,
+            "language": language or "English",
             **self._components,
         },
```

Also consider propagating the client language from the incoming request when available.
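A runnable sketch of the Optional-with-fallback pattern, detached from the pipeline wiring; the function body and messages are illustrative only:

```python
import asyncio
from typing import Optional

async def run(error_message: str, language: Optional[str] = None) -> dict:
    # A missing language falls back to English instead of raising at call time
    return {"error_message": error_message, "language": language or "English"}

async def main():
    print(await run("syntax error near 'FROM'"))            # language -> "English"
    print(await run("syntax error near 'FROM'", "繁體中文"))  # explicit language preserved

asyncio.run(main())
```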
wren-ai-service/src/web/v1/services/ask_feedback.py (1)
233-248: LGTM on result handling.
Valid correction is surfaced as AskResult; failures propagate invalid_sql and error_message appropriately.
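For illustration only, a sketch of that success/failure split; AskResult's actual fields live in ask.py, so the sql field and the failure payload keys below are assumptions:

```python
from typing import Optional, Union
from pydantic import BaseModel

class AskResult(BaseModel):  # hypothetical shape; the real model is defined in ask.py
    sql: str

def handle_correction(
    corrected_sql: Optional[str], invalid_sql: str, error_message: str
) -> Union[AskResult, dict]:
    if corrected_sql:
        # valid correction is surfaced as an AskResult
        return AskResult(sql=corrected_sql)
    # failure path propagates the invalid SQL and the error message
    return {"invalid_sql": invalid_sql, "error_message": error_message}

print(handle_correction("SELECT 1", "SELEC 1", "syntax error"))
print(handle_correction(None, "SELEC 1", "syntax error"))
```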
Summary by CodeRabbit