Feature request: Custom OpenAI-compatible Image Generation endpoint (LiteLLM / OpenAI proxy) support #381

@freinold

Description

Problem

Presenton supports a custom OpenAI-compatible base URL for LLMs (LLM="custom" + CUSTOM_LLM_URL), but image generation is handled via separate image providers (OpenAI direct, ComfyUI, stock providers). There is currently no way to point image generation to an OpenAI-compatible /v1/images/* endpoint (e.g., LiteLLM proxy, vLLM, Azure/OpenAI-compatible gateway), which makes unified routing/governance (logging, rate limiting, policy, tenancy) difficult.

Requested capability
Allow configuring an OpenAI-compatible Image API base URL + key independently from the text LLM base URL.


Recommended implementation (rough)

1) Add configuration knobs

Introduce env vars (names are suggestions; align with current conventions):

  • IMAGE_PROVIDER=openai_compatible (new provider)
  • OPENAI_COMPAT_IMAGE_BASE_URL=https://… (e.g., LiteLLM proxy)
  • OPENAI_COMPAT_IMAGE_API_KEY=…
  • OPENAI_COMPAT_IMAGE_MODEL=gpt-image-1|dall-e-3|… (model string passed through)
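Put together, a deployment pointing image generation at a LiteLLM proxy might look like this (a sketch using the suggested variable names above; the hostname and key are placeholders):

```shell
# Hypothetical .env fragment; variable names follow the suggestions in this issue
IMAGE_PROVIDER=openai_compatible
OPENAI_COMPAT_IMAGE_BASE_URL=https://litellm.internal.example.com/v1
OPENAI_COMPAT_IMAGE_API_KEY=sk-placeholder
OPENAI_COMPAT_IMAGE_MODEL=gpt-image-1
```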

2) Implement a provider adapter (small surface)

Add an OpenAICompatibleImageProvider that calls the OpenAI Images API endpoints:

  • Prefer supporting both:
    • POST /v1/images/generations (classic)
    • POST /v1/images (newer variants some gateways expose)
  • Accept Presenton’s internal request shape (prompt, size, quality, n, etc.) and pass through supported params; drop/ignore unsupported ones gracefully.
  • Return a normalized internal response (URLs or base64) consistent with existing code paths.
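A minimal sketch of what that adapter could look like. Class name, method names, and the SUPPORTED_PARAMS set are all suggestions, not existing Presenton code; only the request/response shape follows the OpenAI Images API. The stdlib is used so the sketch stays self-contained.

```python
import json
import urllib.request


class OpenAICompatibleImageProvider:
    # Params the OpenAI Images API commonly accepts; anything else is dropped.
    SUPPORTED_PARAMS = {"prompt", "model", "n", "size", "quality", "response_format"}

    def __init__(self, base_url: str, api_key: str, model: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key
        self.model = model

    def _filter_params(self, params: dict) -> dict:
        """Pass through supported params; silently drop unsupported ones."""
        return {k: v for k, v in params.items() if k in self.SUPPORTED_PARAMS}

    @staticmethod
    def _normalize_response(payload: dict) -> list:
        """Map an OpenAI-style {"data": [...]} payload to one internal shape,
        whether the gateway returned URLs or base64 images."""
        images = []
        for item in payload.get("data", []):
            if "url" in item:
                images.append({"kind": "url", "value": item["url"]})
            elif "b64_json" in item:
                images.append({"kind": "base64", "value": item["b64_json"]})
        return images

    def generate(self, prompt: str, **params) -> list:
        body = self._filter_params({"prompt": prompt, "model": self.model, **params})
        req = urllib.request.Request(
            f"{self.base_url}/images/generations",  # classic endpoint variant
            data=json.dumps(body).encode(),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:  # raises on non-2xx status
            return self._normalize_response(json.load(resp))
```

The two pure helpers (_filter_params, _normalize_response) keep the HTTP call thin and make the success-URL / base64 / error cases easy to unit-test with mocked responses.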

3) Route selection in one place

Wherever Presenton currently branches on image provider, add:

  • if IMAGE_PROVIDER == "openai_compatible": use OpenAICompatibleImageProvider(...)
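The branch could be sketched as below; the function name and the other provider identifiers are placeholders standing in for wherever Presenton currently does this dispatch:

```python
import os


def select_image_provider(env=os.environ) -> str:
    """Hypothetical dispatch; returns the chosen provider identifier.
    In real code the "openai_compatible" branch would construct
    OpenAICompatibleImageProvider from the OPENAI_COMPAT_IMAGE_* vars."""
    provider = env.get("IMAGE_PROVIDER", "openai")
    if provider == "openai_compatible":
        return "openai_compatible"
    if provider in ("openai", "comfyui", "stock"):
        return provider  # existing providers, unchanged
    raise ValueError(f"Unknown IMAGE_PROVIDER: {provider!r}")
```

Keeping the new branch alongside the existing ones means no other call site has to change.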

4) Keep it safe and predictable

  • Validate config at startup (fail fast if base URL/key missing).
  • Add minimal unit tests with mocked HTTP responses (at least: success URL response, base64 response, non-200 error handling).
  • Document that the endpoint must be OpenAI-compatible and expose the Images routes.
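The fail-fast validation from the first bullet could look like this (a sketch; the function name is hypothetical and the variable names follow the suggestions above):

```python
REQUIRED_VARS = ("OPENAI_COMPAT_IMAGE_BASE_URL", "OPENAI_COMPAT_IMAGE_API_KEY")


def validate_image_config(env: dict) -> None:
    """Raise at startup if the openai_compatible provider is selected
    but incompletely configured."""
    if env.get("IMAGE_PROVIDER") != "openai_compatible":
        return  # other providers keep their existing validation
    missing = [v for v in REQUIRED_VARS if not env.get(v)]
    if missing:
        raise RuntimeError(
            f"IMAGE_PROVIDER=openai_compatible requires: {', '.join(missing)}"
        )
    if not env["OPENAI_COMPAT_IMAGE_BASE_URL"].startswith(("http://", "https://")):
        raise RuntimeError("OPENAI_COMPAT_IMAGE_BASE_URL must be an http(s) URL")
```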

Why this is worth it

  • Enables LiteLLM and other proxies as a single control plane (auth, quota, audit, caching, policy).
  • Works with self-hosted OpenAI-compatible gateways.
  • Keeps Presenton’s UX consistent while expanding deployment options.

Offer

I’m happy to provide a PR for this if one of the core maintainers can advise on:

  • where you’d like the provider interface to live / naming conventions, and
  • which image endpoint variant you prefer as canonical (/v1/images vs /v1/images/generations).
