
Conversation

stepan662 (Contributor) commented Jun 10, 2025

tolgee/documentation#939

Summary by CodeRabbit

  • New Features
    • Added support for two new LLM providers: Anthropic and Google AI, including configuration options and UI integration.
  • Enhancements
    • Improved configuration forms with default values and helpful hints for provider setup.
    • Enhanced robustness of JSON extraction from LLM responses.
    • Updated provider selection and translation to include new providers.
    • Improved documentation for LLM provider configuration.
    • Added new feature flag and error code related to LLM providers and playground.
  • UI Improvements
    • Added a visual divider above the priority selector in the LLM provider configuration form.
    • Added announcement banner with links to AI playground and LLM providers documentation.
  • Bug Fixes
    • Improved handling of potentially missing content in LLM responses.

coderabbitai bot (Contributor) commented Jun 10, 2025

Walkthrough

The changes introduce support for two new LLM providers, Anthropic and Google AI, across backend and frontend. This includes new enum values, API service classes, configuration options, UI styling updates, enhanced documentation, improved JSON extraction logic, and updated error handling with new message codes and translations.

Changes

| File(s) | Change Summary |
| --- | --- |
| backend/data/src/main/kotlin/io/tolgee/configuration/tolgee/machineTranslation/LlmProperties.kt | Added @DocProperty annotations with detailed descriptions and example YAML for LLM providers. |
| backend/data/src/main/kotlin/io/tolgee/model/enums/LlmProviderType.kt | Added ANTHROPIC and GOOGLE_AI enum constants to LlmProviderType. |
| ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt | Renamed class from ClaudeApiService to AnthropicApiService, removed response_format logic and nested class, updated JSON output prompt and error handling. |
| ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt | Renamed class from GeminiApiService to GoogleAiApiService, removed unused import and totalTokenCount property from nested class, updated error handling. |
| ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt | Made ResponseMessage.content nullable and refined empty content error handling. |
| ee/backend/app/src/main/kotlin/io/tolgee/ee/service/LlmProviderService.kt | Added AnthropicApiService and GoogleAiApiService dependencies; extended getProviderService to handle new provider types. |
| ee/backend/app/src/main/kotlin/io/tolgee/ee/service/prompt/PromptServiceEeImpl.kt | Added extractJsonFromResponse method with multi-strategy JSON extraction; replaced direct JSON parsing with this method. |
| ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OllamaApiService.kt | Updated error handling for empty response content with a specific empty response error message. |
| backend/data/src/main/kotlin/io/tolgee/constants/Message.kt | Added new enum constant LLM_PROVIDER_EMPTY_RESPONSE. |
| webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/LlmProviderForm.tsx | Added a themed divider and container styling above the priority select field. |
| webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/llmProvidersConfig.ts | Added default values and hints; introduced ANTHROPIC and GOOGLE_AI providers with configuration metadata. |
| webapp/src/service/apiSchema.generated.ts | Extended union types to include "ANTHROPIC" and "GOOGLE_AI" for LLM provider types; added new feature and error code literals. |
| webapp/src/translationTools/useLlmProviderTranslation.ts | Added translation keys for new provider types 'ANTHROPIC' and 'GOOGLE_AI'. |
| webapp/src/translationTools/useErrorTranslation.ts | Added translation key for new error code 'llm_provider_empty_response'. |
| backend/data/src/main/kotlin/io/tolgee/model/enums/announcement/Announcement.kt | Added new enum constant FEATURE_LLM_PROVIDERS_AND_PLAYGROUND with expiration timestamp. |
| webapp/src/component/layout/TopBanner/useAnnouncement.tsx | Added handling for new announcement feature with banner linking to documentation; replaced unreachable assertion with one returning null. |
| webapp/src/fixtures/assertUnreachable.ts | Added new function assertUnreachableReturnNull returning null for unreachable code paths. |
| webapp/src/ee/billing/Subscriptions/cloud/useCloudPlans.tsx, webapp/src/ee/billing/Subscriptions/selfHosted/PlansSelfHostedList.tsx | Reordered features in Enterprise plan, removed and re-added AI_PROMPT_CUSTOMIZATION, and added new feature GLOSSARY. |
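The extractJsonFromResponse strategy described for PromptServiceEeImpl can be sketched like this. This is a hypothetical TypeScript illustration only; the actual implementation is Kotlin, and its exact fallback order may differ:

```typescript
function extractJsonFromResponse(raw: string): unknown {
  // Strategy 1: the entire response is already valid JSON.
  try {
    return JSON.parse(raw);
  } catch {
    // fall through to the next strategy
  }
  // Strategy 2: JSON wrapped in a fenced ```json code block.
  const fenceRe = new RegExp('`{3}(?:json)?\\s*([\\s\\S]*?)`{3}');
  const fenced = raw.match(fenceRe);
  if (fenced) {
    try {
      return JSON.parse(fenced[1]);
    } catch {
      // fall through
    }
  }
  // Strategy 3: widest substring from the first '{' to the last '}'.
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start !== -1 && end > start) {
    return JSON.parse(raw.slice(start, end + 1));
  }
  throw new Error('No parseable JSON found in LLM response');
}
```

The layered fallbacks matter because providers such as Anthropic and Google AI do not all support a strict JSON response mode, so the model may wrap its answer in prose or a code fence.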

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant WebApp
    participant Backend
    participant AnthropicService
    participant GoogleAIService

    User->>WebApp: Selects LLM provider (Anthropic/Google AI)
    WebApp->>Backend: Sends request with provider type
    Backend->>Backend: getProviderService(providerType)
    alt providerType == ANTHROPIC
        Backend->>AnthropicService: Process request
        AnthropicService-->>Backend: Return result
    else providerType == GOOGLE_AI
        Backend->>GoogleAIService: Process request
        GoogleAIService-->>Backend: Return result
    end
    Backend-->>WebApp: Return processed result
    WebApp-->>User: Display result
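The dispatch step in the diagram can be sketched as follows. This TypeScript fragment is illustrative only: the real getProviderService is Kotlin in LlmProviderService, and the service shape here is a stand-in:

```typescript
// Provider types mirror the LlmProviderType enum values from the PR.
type LlmProviderType = 'OPENAI' | 'OLLAMA' | 'ANTHROPIC' | 'GOOGLE_AI';

// Minimal stand-in for the injected API service beans.
interface LlmApiService {
  name: string;
}

// Resolve the service responsible for a given provider type.
function getProviderService(
  type: LlmProviderType,
  services: Record<LlmProviderType, LlmApiService>
): LlmApiService {
  const service = services[type];
  if (!service) {
    throw new Error(`Unknown LLM provider type: ${type}`);
  }
  return service;
}
```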

Poem

In fields of code, two new friends appear—
Anthropic and Google AI, we cheer!
With docs improved and JSON clear,
Our forms look sharp, the path sincere.
Providers grow, the future bright,
Rabbits code through day and night!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7f90c9c and 647bf54.

📒 Files selected for processing (2)
  • webapp/src/ee/billing/Subscriptions/cloud/useCloudPlans.tsx (1 hunks)
  • webapp/src/ee/billing/Subscriptions/selfHosted/PlansSelfHostedList.tsx (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • webapp/src/ee/billing/Subscriptions/cloud/useCloudPlans.tsx
  • webapp/src/ee/billing/Subscriptions/selfHosted/PlansSelfHostedList.tsx
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Ktlint 🪲
  • GitHub Check: Build backend 🏗️
  • GitHub Check: Build frontend 🏗️
  • GitHub Check: Frontend static check 🪲

stepan662 (Contributor, Author) commented:

@coderabbitai review

coderabbitai bot commented Jun 10, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (2)
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (1)

43-48: Consider adding error handling for API failures.

The current implementation lacks the comprehensive error handling present in OpenaiApiService (lines 75-82). Consider adding similar error handling for content filtering and other API-specific errors that Google AI might return.

 val response: ResponseEntity<ResponseBody> =
-      restTemplate.exchange<ResponseBody>(
-        "${config.apiUrl}/v1beta/models/${config.model}:generateContent?key=${config.apiKey}",
-        HttpMethod.POST,
-        request,
-      )
+      try {
+        restTemplate.exchange<ResponseBody>(
+          "${config.apiUrl}/v1beta/models/${config.model}:generateContent?key=${config.apiKey}",
+          HttpMethod.POST,
+          request,
+        )
+      } catch (e: HttpClientErrorException) {
+        // Add Google AI specific error handling here
+        throw e
+      }
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (1)

104-105: Fix misleading comment reference.

The comment mentions "AzureCognitive JSON response objects" but this is an Anthropic API service. Update the comment to accurately reflect the Anthropic API.

  /**
-   * Data structure for mapping the AzureCognitive JSON response objects.
+   * Data structure for mapping the Anthropic API JSON request and response objects.
   */
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 624d55d and ccaac4b.

📒 Files selected for processing (11)
  • backend/data/src/main/kotlin/io/tolgee/configuration/tolgee/machineTranslation/LlmProperties.kt (2 hunks)
  • backend/data/src/main/kotlin/io/tolgee/model/enums/LlmProviderType.kt (1 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (2 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (1 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt (1 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/service/LlmProviderService.kt (2 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/service/prompt/PromptServiceEeImpl.kt (1 hunks)
  • webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/LlmProviderForm.tsx (3 hunks)
  • webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/llmProvidersConfig.ts (2 hunks)
  • webapp/src/service/apiSchema.generated.ts (2 hunks)
  • webapp/src/translationTools/useLlmProviderTranslation.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/LlmProviderForm.tsx (1)
webapp/src/component/common/LabelHint.tsx (1)
  • LabelHint (17-26)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Frontend static check 🪲
  • GitHub Check: Build backend 🏗️
🔇 Additional comments (18)
backend/data/src/main/kotlin/io/tolgee/model/enums/LlmProviderType.kt (1)

7-8: LGTM! Clean enum extension for new LLM providers.

The new enum values follow the established naming convention and are properly positioned within the enum.

ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt (1)

196-196: Good type safety improvement.

Making content nullable aligns with the OpenAI API specification where content can be absent in certain response scenarios. The existing null-handling logic at line 86 already accounts for this case.
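As a hedged illustration of this pattern, a TypeScript analogue of the nullable-content check might look like the following. The real code is Kotlin in OpenaiApiService; the error key mirrors LLM_PROVIDER_EMPTY_RESPONSE, and the interface here is simplified:

```typescript
interface ResponseMessage {
  // May be absent or null, e.g. for tool-call responses.
  content?: string | null;
}

// Extract the text content, rejecting absent or empty responses
// with the provider-agnostic empty-response error.
function requireContent(message: ResponseMessage): string {
  if (message.content == null || message.content === '') {
    throw new Error('llm_provider_empty_response');
  }
  return message.content;
}
```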

webapp/src/translationTools/useLlmProviderTranslation.ts (1)

18-21: LGTM! Consistent translation key implementation.

The new translation cases follow the established naming pattern and maintain the exhaustive switch structure with proper typing.

ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (2)

23-23: Well-structured service implementation.

The GoogleAiApiService follows the established pattern and correctly extends AbstractLlmApiService.


45-45:

❓ Verification inconclusive

Verify the beta API version usage.

The implementation uses the v1beta API version. Ensure this is the intended version and consider the stability implications of using a beta API in production.


🌐 Web query:

What is the current stable version of the Google AI Generative API and are there any known limitations with the v1beta version?

💡 Result:

As of June 2025, Google's AI Generative API utilizes the Gemini 2.5 Pro model, which is considered the first "long-term stable release" in the Gemini series. This model offers enhanced coding capabilities and improved performance over its predecessors. (techradar.com)

Regarding the v1beta version, specific known limitations are not detailed in the available sources. However, it's common for beta versions to have issues such as performance regressions, limited features, or stability concerns. For instance, earlier versions of Gemini Pro faced challenges with coding capabilities and response coherence, which have been addressed in the latest stable release. (techradar.com)

For the most accurate and up-to-date information on known limitations, it's advisable to consult the official Google AI documentation or support channels.


Verify usage of the v1beta Google AI Generative API endpoint

It looks like you’re calling:
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (line 45)

"${config.apiUrl}/v1beta/models/${config.model}:generateContent?key=${config.apiKey}"

As of June 2025, Google’s Gemini 2.5 Pro model is the first long-term stable release, but the API path remains v1beta. Beta endpoints can have performance regressions, limited feature sets, or stability issues.

Please:

  • Confirm via Google’s official Generative AI docs whether a production-grade /v1 endpoint is available.
  • If so, update the path to /v1/models/{config.model}:generateContent.
  • Otherwise, ensure you’re OK with any beta-stage limitations in your production environment.
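If the endpoint version is ever made configurable, the URL construction could be factored roughly like this. Parameter names are illustrative, not the actual Tolgee config fields:

```typescript
// Build the generateContent URL for a chosen Google AI API version,
// defaulting to v1beta to match the current behavior.
function generateContentUrl(
  apiUrl: string,
  model: string,
  apiKey: string,
  apiVersion: 'v1' | 'v1beta' = 'v1beta'
): string {
  return `${apiUrl}/${apiVersion}/models/${model}:generateContent?key=${apiKey}`;
}
```

Centralizing the version in one place would make a later switch to the stable /v1 path a one-line change.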
webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/LlmProviderForm.tsx (3)

12-12: Good addition of theme import for consistent styling.

The useTheme hook import enables proper theming for the new divider element.


95-100: Excellent visual separation with proper theming.

The horizontal divider effectively separates the dynamic provider configuration fields from the priority setting, improving the form's visual organization. The use of theme.palette.divider ensures consistency with the Material-UI theme.


101-133: Well-implemented priority field with proper accessibility.

The priority select field implementation is excellent:

  • Uses LabelHint for contextual help
  • Maintains proper data-cy attributes for testing
  • Follows the established pattern for select fields
  • Provides clear translation keys for all options
ee/backend/app/src/main/kotlin/io/tolgee/ee/service/LlmProviderService.kt (2)

48-49: LGTM! New LLM provider services properly injected.

The addition of AnthropicApiService and GoogleAiApiService as constructor parameters follows the established pattern and maintains consistency with the existing service injection approach.


204-205:

✅ Verification successful

Verify the new API service classes exist and are properly implemented.

The enum handling looks correct, but ensure that the AnthropicApiService and GoogleAiApiService classes are properly implemented and available for dependency injection.


🏁 Script executed:

#!/bin/bash
# Description: Verify that the new API service classes exist and are properly annotated as Spring components

# Check if AnthropicApiService exists and is a Spring component
echo "Searching for AnthropicApiService..."
fd "AnthropicApiService.kt" --exec cat {} \; | head -25

echo -e "\n\nSearching for GoogleAiApiService..."
fd "GoogleAiApiService.kt" --exec cat {} \; | head -25

echo -e "\n\nVerifying both services extend AbstractLlmApiService..."
rg "class.*ApiService.*extends|class.*ApiService.*:" --type kotlin -A 2

Length of output: 3999


Verified: AnthropicApiService and GoogleAiApiService are properly implemented and available for injection

  • Both classes exist under ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm
  • Annotated with @Component and @Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)
  • Extend AbstractLlmApiService and implement Logging, matching the pattern of other LLM services

No further changes required.

backend/data/src/main/kotlin/io/tolgee/configuration/tolgee/machineTranslation/LlmProperties.kt (2)

10-12: Excellent documentation enhancement!

The class-level @DocProperty annotation with a clear display name improves the configuration interface clarity.


23-39: Comprehensive and helpful provider documentation.

The detailed description with YAML example and documentation link significantly improves the developer experience. The example configuration is realistic and demonstrates proper syntax.

ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (2)

22-22: LGTM! Appropriate class rename.

The rename from ClaudeApiService to AnthropicApiService better reflects the provider name and aligns with the new enum value ANTHROPIC.


97-97: Improved user message for JSON output.

The updated message is more explicit and informative about the JSON requirement and machine parsing expectation.

webapp/src/service/apiSchema.generated.ts (2)

3301-3301: Ensure new LLM provider types align with backend enum and generation process
This union in LlmProviderModel.type now includes "ANTHROPIC" and "GOOGLE_AI". Confirm these literals exactly match the backend LlmProviderType enum values and update the API schema source (e.g. OpenAPI/GraphQL) so this generated file can be safely regenerated rather than manually patched.


3312-3312: Validate provider type literals in simple model schema
The type union in LlmProviderSimpleModel was similarly extended. Ensure that the upstream schema definition reflects these additions and that running the codegen will persist them in this file.

Also applies to: 3317-3317

webapp/src/ee/llm/OrganizationLLMProviders/LlmProviderEdit/llmProvidersConfig.ts (1)

40-42: LGTM: Good default API URL addition

Adding the default API URL for OpenAI improves the user experience by pre-filling commonly used values.

ee/backend/app/src/main/kotlin/io/tolgee/ee/service/prompt/PromptServiceEeImpl.kt (1)

208-208: LGTM: Improved JSON extraction strategy

The change from direct parseJsonSafely to extractJsonFromResponse is a good improvement for handling varied response formats from different LLM providers. This aligns well with the addition of new providers like Anthropic and Google AI.

stepan662 marked this pull request as ready for review June 10, 2025 13:06
coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OllamaApiService.kt (2)

49-49: Approve the standardized error handling, but consider the debugging impact.

The change to use LLM_PROVIDER_EMPTY_RESPONSE provides more specific error messaging and aligns with other LLM service classes. However, removing the response context (response.body.toString()) may make debugging more difficult when investigating API issues.

Consider adding debug logging to preserve troubleshooting information:

 return PromptResult(
   response.body?.message?.content
-    ?: throw BadRequestException(Message.LLM_PROVIDER_EMPTY_RESPONSE),
+    ?: run {
+      logger().debug("Ollama API returned empty response: ${response.body}")
+      throw BadRequestException(Message.LLM_PROVIDER_EMPTY_RESPONSE)
+    },
   usage = null,
 )

79-79: Fix the incorrect comment.

The comment references "AzureCognitive" but this service is for Ollama API integration.

 /**
-  * Data structure for mapping the AzureCognitive JSON response objects.
+  * Data structure for mapping the Ollama API JSON request/response objects.
  */
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 93b5982 and 2c75827.

📒 Files selected for processing (6)
  • backend/data/src/main/kotlin/io/tolgee/constants/Message.kt (1 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt (3 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt (2 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OllamaApiService.kt (1 hunks)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt (2 hunks)
  • webapp/src/translationTools/useErrorTranslation.ts (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • webapp/src/translationTools/useErrorTranslation.ts
  • backend/data/src/main/kotlin/io/tolgee/constants/Message.kt
🚧 Files skipped from review as they are similar to previous changes (3)
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/GoogleAiApiService.kt
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/AnthropicApiService.kt
  • ee/backend/app/src/main/kotlin/io/tolgee/ee/component/llm/OpenaiApiService.kt
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Frontend static check 🪲
  • GitHub Check: Build backend 🏗️
  • GitHub Check: Build frontend 🏗️
  • GitHub Check: Ktlint 🪲

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
webapp/src/component/layout/TopBanner/useAnnouncement.tsx (2)

86-104: Parameter-name style is inconsistent with earlier banners

Previous announcements use kebab-case placeholders ('link-delivery', 'link-webhooks').
The new strings use camelCase (linkPlayground, linkProviders). Mixing styles makes translation keys harder to skim and increases localisation mistakes.

-                params={{
-                  linkPlayground: (
+                params={{
+                  'link-playground': (
-                  linkProviders: (
+                  'link-providers': (

Consider aligning the placeholder naming scheme for consistency across all announcements.


105-106: Silently swallowing unknown announcement types may hide bugs

Returning null keeps the UI from crashing, but it also masks server / enum desynchronisation. A lightweight compromise is to log the unexpected value in non-production builds:

-      default:
-        return assertUnreachableReturnNull(value);
+      default:
+        if (process.env.NODE_ENV !== 'production') {
+          // eslint-disable-next-line no-console
+          console.error('Unhandled announcement type:', value);
+        }
+        return null;

This preserves resilience while still surfacing issues during development.
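The helper pair under discussion can be sketched as follows, based on the described behavior of webapp/src/fixtures/assertUnreachable.ts; the Announcement union here is a stand-in for the real enum:

```typescript
// Lenient variant: tolerate values outside the known union at runtime.
function assertUnreachableReturnNull(_value: never): null {
  return null;
}

// Stand-in for the real announcement enum.
type Announcement = 'FEATURE_A' | 'FEATURE_B';

function getBanner(value: Announcement): string | null {
  switch (value) {
    case 'FEATURE_A':
      return 'Banner A';
    case 'FEATURE_B':
      return 'Banner B';
    default:
      // Unknown values (e.g. sent by a newer server) render no banner
      // instead of crashing the layout; the compiler still enforces
      // exhaustiveness because `value` must narrow to `never` here.
      return assertUnreachableReturnNull(value);
  }
}
```

The compile-time exhaustiveness check is preserved either way; only the runtime behavior for out-of-union values differs between the throwing and null-returning variants.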

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2c75827 and 7f90c9c.

📒 Files selected for processing (4)
  • backend/data/src/main/kotlin/io/tolgee/model/enums/announcement/Announcement.kt (1 hunks)
  • webapp/src/component/layout/TopBanner/useAnnouncement.tsx (2 hunks)
  • webapp/src/fixtures/assertUnreachable.ts (1 hunks)
  • webapp/src/service/apiSchema.generated.ts (5 hunks)
✅ Files skipped from review due to trivial changes (2)
  • webapp/src/fixtures/assertUnreachable.ts
  • backend/data/src/main/kotlin/io/tolgee/model/enums/announcement/Announcement.kt
🚧 Files skipped from review as they are similar to previous changes (1)
  • webapp/src/service/apiSchema.generated.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
webapp/src/component/layout/TopBanner/useAnnouncement.tsx (2)
webapp/src/component/layout/TopBanner/Announcement.tsx (1)
  • Announcement (36-62)
webapp/src/fixtures/assertUnreachable.ts (1)
  • assertUnreachableReturnNull (5-7)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Build backend 🏗️
  • GitHub Check: Frontend static check 🪲
  • GitHub Check: Build frontend 🏗️
🔇 Additional comments (1)
webapp/src/component/layout/TopBanner/useAnnouncement.tsx (1)

2-2: Import change looks fine

The switch now intentionally resolves to null instead of throwing, so swapping assertUnreachable for the …ReturnNull variant is consistent. No issues here.

stepan662 merged commit ea1507b into main Jun 11, 2025
38 checks passed
stepan662 deleted the stepangranat/more-llm-providers branch June 11, 2025 08:56
TolgeeMachine added a commit that referenced this pull request Jun 11, 2025
# [3.122.0](v3.121.1...v3.122.0) (2025-06-11)

### Features

* more llm providers ([#3118](#3118)) ([ea1507b](ea1507b))