feat: more llm providers #3118
Conversation
Walkthrough

The changes introduce support for two new LLM providers, Anthropic and Google AI, across backend and frontend. This includes new enum values, API service classes, configuration options, UI styling updates, enhanced documentation, improved JSON extraction logic, and updated error handling with new message codes and translations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant WebApp
    participant Backend
    participant AnthropicService
    participant GoogleAIService
    User->>WebApp: Selects LLM provider (Anthropic/Google AI)
    WebApp->>Backend: Sends request with provider type
    Backend->>Backend: getProviderService(providerType)
    alt providerType == ANTHROPIC
        Backend->>AnthropicService: Process request
        AnthropicService-->>Backend: Return result
    else providerType == GOOGLE_AI
        Backend->>GoogleAIService: Process request
        GoogleAIService-->>Backend: Return result
    end
    Backend-->>WebApp: Return processed result
    WebApp-->>User: Display result
```
📜 Recent review details

Configuration used: CodeRabbit UI

📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (2)
⏰ Context from checks skipped due to timeout of 90000ms (4)
✨ Finishing Touches
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 4
🧹 Nitpick comments (2)
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (1)

43-48: Consider adding error handling for API failures.

The current implementation lacks the comprehensive error handling present in `OpenaiApiService` (lines 75-82). Consider adding similar error handling for content filtering and other API-specific errors that Google AI might return.

```diff
 val response: ResponseEntity<ResponseBody> =
-  restTemplate.exchange<ResponseBody>(
-    "${config.apiUrl}/v1beta/models/${config.model}:generateContent?key=${config.apiKey}",
-    HttpMethod.POST,
-    request,
-  )
+  try {
+    restTemplate.exchange<ResponseBody>(
+      "${config.apiUrl}/v1beta/models/${config.model}:generateContent?key=${config.apiKey}",
+      HttpMethod.POST,
+      request,
+    )
+  } catch (e: HttpClientErrorException) {
+    // Add Google AI specific error handling here
+    throw e
+  }
```

ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (1)
104-105: Fix misleading comment reference.

The comment mentions "AzureCognitive JSON response objects" but this is an Anthropic API service. Update the comment to accurately reflect the Anthropic API.

```diff
 /**
- * Data structure for mapping the AzureCognitive JSON response objects.
+ * Data structure for mapping the Anthropic API JSON request and response objects.
  */
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- backend/data/src/main/kotlin/io/tolgee/configuration/tolgee/machineTranslation/LlmProperties.kt (2 hunks)
- backend/data/src/main/kotlin/io/tolgee/model/enums/LlmProviderType.kt (1 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (2 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (1 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt (1 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/service/LlmProviderService.kt (2 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/service/prompt/PromptServiceEeImpl.kt (1 hunks)
- webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/LlmProviderForm.tsx (3 hunks)
- webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/llmProvidersConfig.ts (2 hunks)
- webapp/src/service/apiSchema.generated.ts (2 hunks)
- webapp/src/translationTools/useLlmProviderTranslation.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/LlmProviderForm.tsx (1)
- webapp/src/component/common/LabelHint.tsx (1): LabelHint (17-26)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Frontend static check 🪲
- GitHub Check: Build backend 🏗️
🔇 Additional comments (18)
backend/data/src/main/kotlin/io/tolgee/model/enums/LlmProviderType.kt (1)
7-8: LGTM! Clean enum extension for new LLM providers.

The new enum values follow the established naming convention and are properly positioned within the enum.
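For orientation, here is a minimal sketch of what the extended enum plausibly looks like. Only `ANTHROPIC` and `GOOGLE_AI` are confirmed by this review; the other members are assumptions inferred from the service classes mentioned elsewhere in the PR.

```kotlin
// Hypothetical reconstruction; member order and pre-existing values are assumed.
enum class LlmProviderType {
  OPENAI, // assumed pre-existing (OpenaiApiService is referenced in this review)
  OLLAMA, // assumed pre-existing (OllamaApiService is referenced in this review)
  ANTHROPIC, // new in this PR
  GOOGLE_AI, // new in this PR
}
```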
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt (1)
196-196: Good type safety improvement.

Making `content` nullable aligns with the OpenAI API specification where content can be absent in certain response scenarios. The existing null-handling logic at line 86 already accounts for this case.
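To illustrate the pattern being praised, here is a sketch of a nullable response field with the corresponding call-site guard; the class and field names are invented for the example, not the actual `OpenaiApiService` types.

```kotlin
// Invented response-mapping type: `content` is nullable because the OpenAI
// API can omit it, e.g. for tool calls or filtered content.
data class ChoiceMessage(
  val role: String,
  val content: String?, // previously non-null; now reflects the API contract
)

// Call-site guard in the same spirit as the null handling the review cites.
fun requireContent(message: ChoiceMessage): String =
  message.content ?: error("LLM provider returned an empty response")
```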
webapp/src/translationTools/useLlmProviderTranslation.ts (1)

18-21: LGTM! Consistent translation key implementation.

The new translation cases follow the established naming pattern and maintain the exhaustive switch structure with proper typing.
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (2)
23-23: Well-structured service implementation.

The `GoogleAiApiService` follows the established pattern and correctly extends `AbstractLlmApiService`.
45-45: ❓ Verification inconclusive

Verify the beta API version usage.

The implementation uses the `v1beta` API version. Ensure this is the intended version and consider the stability implications of using a beta API in production.
🌐 Web query:
What is the current stable version of the Google AI Generative API and are there any known limitations with the v1beta version?
💡 Result:
As of June 2025, Google's AI Generative API utilizes the Gemini 2.5 Pro model, which is considered the first "long-term stable release" in the Gemini series. This model offers enhanced coding capabilities and improved performance over its predecessors. (techradar.com)
Regarding the v1beta version, specific known limitations are not detailed in the available sources. However, it's common for beta versions to have issues such as performance regressions, limited features, or stability concerns. For instance, earlier versions of Gemini Pro faced challenges with coding capabilities and response coherence, which have been addressed in the latest stable release. (techradar.com)
For the most accurate and up-to-date information on known limitations, it's advisable to consult the official Google AI documentation or support channels.
Citations:
- 1: https://www.techradar.com/computing/artificial-intelligence/google-upgrades-gemini-2-5-pros-already-formidable-coding-abilities?utm_source=openai
- 2: https://www.techradar.com/computing/artificial-intelligence/google-upgrades-gemini-2-5-pros-already-formidable-coding-abilities?utm_source=openai
Verify usage of the v1beta Google AI Generative API endpoint

It looks like you’re calling, in ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (line 45):

```kotlin
"${config.apiUrl}/v1beta/models/${config.model}:generateContent?key=${config.apiKey}"
```

As of June 2025, Google’s Gemini 2.5 Pro model is the first long-term stable release, but the API path remains `v1beta`. Beta endpoints can have performance regressions, limited feature sets, or stability issues. Please:

- Confirm via Google’s official Generative AI docs whether a production-grade `/v1` endpoint is available.
- If so, update the path to `/v1/models/{config.model}:generateContent`.
- Otherwise, ensure you’re OK with any beta-stage limitations in your production environment (a configurable-version sketch follows below).
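One way to act on this suggestion is to make the version segment configurable instead of hard-coding `v1beta`. The sketch below is illustrative only; the config class and property names are assumptions, not Tolgee's actual configuration model.

```kotlin
// Hypothetical config shape for illustration only.
data class GoogleAiConfig(
  val apiUrl: String,
  val model: String,
  val apiKey: String,
  // Defaults to current behavior; operators could switch to a stable
  // /v1 path once Google publishes one.
  val apiVersion: String = "v1beta",
)

fun generateContentUrl(config: GoogleAiConfig): String =
  "${config.apiUrl}/${config.apiVersion}/models/${config.model}" +
    ":generateContent?key=${config.apiKey}"
```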
webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/LlmProviderForm.tsx (3)
12-12: Good addition of theme import for consistent styling.

The `useTheme` hook import enables proper theming for the new divider element.

95-100: Excellent visual separation with proper theming.

The horizontal divider effectively separates the dynamic provider configuration fields from the priority setting, improving the form's visual organization. The use of `theme.palette.divider` ensures consistency with the Material-UI theme.

101-133: Well-implemented priority field with proper accessibility.

The priority select field implementation is excellent:
- Uses `LabelHint` for contextual help
- Maintains proper data-cy attributes for testing
- Follows the established pattern for select fields
- Provides clear translation keys for all options
ee/backend/app/src/main/kotlin/io/tolgee/ee/service/LlmProviderService.kt (2)
48-49: LGTM! New LLM provider services properly injected.

The addition of `AnthropicApiService` and `GoogleAiApiService` as constructor parameters follows the established pattern and maintains consistency with the existing service injection approach.
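Combined with the `getProviderService(providerType)` step shown in the sequence diagram above, the wiring presumably resembles the following outline. The stand-in types exist only to keep the sketch self-contained; the real classes live in the Tolgee codebase and have richer contracts.

```kotlin
// Stand-ins for illustration.
abstract class AbstractLlmApiService
class OpenaiApiService : AbstractLlmApiService()
class AnthropicApiService : AbstractLlmApiService()
class GoogleAiApiService : AbstractLlmApiService()
enum class LlmProviderType { OPENAI, ANTHROPIC, GOOGLE_AI }

class LlmProviderService(
  private val openaiApiService: OpenaiApiService,
  private val anthropicApiService: AnthropicApiService, // new in this PR
  private val googleAiApiService: GoogleAiApiService,   // new in this PR
) {
  // Exhaustive when-dispatch: adding an enum value without a matching
  // branch fails compilation, keeping the enum and services in sync.
  fun getProviderService(type: LlmProviderType): AbstractLlmApiService =
    when (type) {
      LlmProviderType.OPENAI -> openaiApiService
      LlmProviderType.ANTHROPIC -> anthropicApiService
      LlmProviderType.GOOGLE_AI -> googleAiApiService
    }
}
```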
204-205: ✅ Verification successful

Verify the new API service classes exist and are properly implemented.

The enum handling looks correct, but ensure that the `AnthropicApiService` and `GoogleAiApiService` classes are properly implemented and available for dependency injection.

🏁 Script executed:

```bash
#!/bin/bash
# Description: Verify that the new API service classes exist and are properly annotated as Spring components

# Check if AnthropicApiService exists and is a Spring component
echo "Searching for AnthropicApiService..."
fd "AnthropicApiService.kt" --exec cat {} \; | head -25

echo -e "\n\nSearching for GoogleAiApiService..."
fd "GoogleAiApiService.kt" --exec cat {} \; | head -25

echo -e "\n\nVerifying both services extend AbstractLlmApiService..."
rg "class.*ApiService.*extends|class.*ApiService.*:" --type kotlin -A 2
```

Length of output: 3999
Verified: AnthropicApiService and GoogleAiApiService are properly implemented and available for injection
- Both classes exist under ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm
- Annotated with `@Component` and `@Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)`
- Extend `AbstractLlmApiService` and implement `Logging`, matching the pattern of other LLM services

No further changes required.
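A skeletal declaration matching the verified pattern might look like this; the stand-in supertypes are placeholders for the real Tolgee definitions.

```kotlin
import org.springframework.beans.factory.config.ConfigurableBeanFactory
import org.springframework.context.annotation.Scope
import org.springframework.stereotype.Component

// Placeholders so the skeleton stands alone; real definitions are in Tolgee.
abstract class AbstractLlmApiService
interface Logging

// Singleton Spring component, as the verification script confirmed.
@Component
@Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)
class GoogleAiApiService : AbstractLlmApiService(), Logging
```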
backend/data/src/main/kotlin/io/tolgee/configuration/tolgee/machineTranslation/LlmProperties.kt (2)
10-12: Excellent documentation enhancement!

The class-level `@DocProperty` annotation with a clear display name improves the configuration interface clarity.

23-39: Comprehensive and helpful provider documentation.

The detailed description with YAML example and documentation link significantly improves the developer experience. The example configuration is realistic and demonstrates proper syntax.
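As a rough illustration of the documented-configuration pattern being praised here: the annotation parameters, description text, and YAML fragment below are all invented for the example, not taken from the PR.

```kotlin
// Stand-in annotation; Tolgee's real @DocProperty has its own definition.
annotation class DocProperty(
  val displayName: String = "",
  val description: String = "",
)

@DocProperty(
  displayName = "LLM Providers",
  description = "Configure server-side LLM providers, for example:\n" +
    "tolgee:\n" +
    "  llm:\n" +
    "    providers:\n" +
    "      - type: ANTHROPIC\n" +
    "        api-key: <your key>",
)
class LlmProperties {
  var enabled: Boolean = true
}
```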
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (2)
22-22: LGTM! Appropriate class rename.

The rename from `ClaudeApiService` to `AnthropicApiService` better reflects the provider name and aligns with the new enum value `ANTHROPIC`.

97-97: Improved user message for JSON output.

The updated message is more explicit and informative about the JSON requirement and machine parsing expectation.
webapp/src/service/apiSchema.generated.ts (2)
3301-3301: Ensure new LLM provider types align with backend enum and generation process

This union in `LlmProviderModel.type` now includes `"ANTHROPIC"` and `"GOOGLE_AI"`. Confirm these literals exactly match the backend `LlmProviderType` enum values and update the API schema source (e.g. OpenAPI/GraphQL) so this generated file can be safely regenerated rather than manually patched.

3312-3312: Validate provider type literals in simple model schema

The `type` union in `LlmProviderSimpleModel` was similarly extended. Ensure that the upstream schema definition reflects these additions and that running the codegen will persist them in this file.

Also applies to: 3317-3317
webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/llmProvidersConfig.ts (1)
40-42: LGTM: Good default API URL addition

Adding the default API URL for OpenAI improves the user experience by pre-filling commonly used values.
ee/backend/app/src/main/kotlin/io/tolgee/ee/service/prompt/PromptServiceEeImpl.kt (1)
208-208: LGTM: Improved JSON extraction strategy

The change from direct `parseJsonSafely` to `extractJsonFromResponse` is a good improvement for handling varied response formats from different LLM providers. This aligns well with the addition of new providers like Anthropic and Google AI.
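The review does not show `extractJsonFromResponse` itself, but lenient extraction of this kind typically trims fences and surrounding prose before parsing; the following is purely a sketch of the idea, not Tolgee's implementation.

```kotlin
// Sketch: pull the outermost JSON object out of an LLM reply that may wrap
// it in markdown fences or explanatory text.
fun extractJsonFromResponse(response: String): String? {
  val start = response.indexOf('{')
  val end = response.lastIndexOf('}')
  // Taking the outermost brace-delimited span skips leading prose,
  // trailing commentary, and any fence characters around the payload.
  return if (start in 0 until end) response.substring(start, end + 1) else null
}
```

The extracted span would then be handed to a safe parser (such as the `parseJsonSafely` step mentioned above) so malformed output still fails gracefully.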
Actionable comments posted: 0
🧹 Nitpick comments (2)
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OllamaApiService.kt (2)
49-49: Approve the standardized error handling, but consider the debugging impact.

The change to use `LLM_PROVIDER_EMPTY_RESPONSE` provides more specific error messaging and aligns with other LLM service classes. However, removing the response context (`response.body.toString()`) may make debugging more difficult when investigating API issues.

Consider adding debug logging to preserve troubleshooting information:

```diff
 return PromptResult(
   response.body?.message?.content
-    ?: throw BadRequestException(Message.LLM_PROVIDER_EMPTY_RESPONSE),
+    ?: run {
+      logger().debug("Ollama API returned empty response: ${response.body}")
+      throw BadRequestException(Message.LLM_PROVIDER_EMPTY_RESPONSE)
+    },
   usage = null,
 )
```
79-79: Fix the incorrect comment.

The comment references "AzureCognitive" but this service is for Ollama API integration.

```diff
 /**
- * Data structure for mapping the AzureCognitive JSON response objects.
+ * Data structure for mapping the Ollama API JSON request/response objects.
  */
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- backend/data/src/main/kotlin/io/tolgee/constants/Message.kt (1 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (3 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (2 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OllamaApiService.kt (1 hunks)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt (2 hunks)
- webapp/src/translationTools/useErrorTranslation.ts (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- webapp/src/translationTools/useErrorTranslation.ts
- backend/data/src/main/kotlin/io/tolgee/constants/Message.kt
🚧 Files skipped from review as they are similar to previous changes (3)
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt
- ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Frontend static check 🪲
- GitHub Check: Build backend 🏗️
- GitHub Check: Build frontend 🏗️
- GitHub Check: Ktlint 🪲
Actionable comments posted: 0
🧹 Nitpick comments (2)
webapp/src/component/layout/TopBanner/useAnnouncement.tsx (2)
86-104: Parameter-name style is inconsistent with earlier banners

Previous announcements use kebab-case placeholders (`'link-delivery'`, `'link-webhooks'`). The new strings use camelCase (`linkPlayground`, `linkProviders`). Mixing styles makes translation keys harder to skim and increases localisation mistakes.

```diff
-  params={{
-    linkPlayground: (
+  params={{
+    'link-playground': (
 …
-    linkProviders: (
+    'link-providers': (
```

Consider aligning the placeholder naming scheme for consistency across all announcements.
105-106: Silently swallowing unknown announcement types may hide bugs

Returning `null` keeps the UI from crashing, but it also masks server / enum desynchronisation. A lightweight compromise is to log the unexpected value in non-production builds:

```diff
 default:
-  return assertUnreachableReturnNull(value);
+  if (process.env.NODE_ENV !== 'production') {
+    // eslint-disable-next-line no-console
+    console.error('Unhandled announcement type:', value);
+  }
+  return null;
```

This preserves resilience while still surfacing issues during development.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/data/src/main/kotlin/io/tolgee/model/enums/announcement/Announcement.kt (1 hunks)
- webapp/src/component/layout/TopBanner/useAnnouncement.tsx (2 hunks)
- webapp/src/fixtures/assertUnreachable.ts (1 hunks)
- webapp/src/service/apiSchema.generated.ts (5 hunks)
✅ Files skipped from review due to trivial changes (2)
- webapp/src/fixtures/assertUnreachable.ts
- backend/data/src/main/kotlin/io/tolgee/model/enums/announcement/Announcement.kt
🚧 Files skipped from review as they are similar to previous changes (1)
- webapp/src/service/apiSchema.generated.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
webapp/src/component/layout/TopBanner/useAnnouncement.tsx (2)
- webapp/src/component/layout/TopBanner/Announcement.tsx (1): Announcement (36-62)
- webapp/src/fixtures/assertUnreachable.ts (1): assertUnreachableReturnNull (5-7)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Build backend 🏗️
- GitHub Check: Frontend static check 🪲
- GitHub Check: Build frontend 🏗️
🔇 Additional comments (1)
webapp/src/component/layout/TopBanner/useAnnouncement.tsx (1)
2-2: Import change looks fine

The switch now intentionally resolves to `null` instead of throwing, so swapping `assertUnreachable` for the `…ReturnNull` variant is consistent. No issues here.
# [3.122.0](v3.121.1...v3.122.0) (2025-06-11)

### Features

* more llm providers ([#3118](#3118)) ([ea1507b](ea1507b))
tolgee/documentation#939