Conversation

@roshanasingh4
Contributor

Fixes #1004.

Problem

  • MemorySearch indexing can run in watch/interval mode via fire-and-forget sync() calls. When remote embeddings fail (e.g. OpenAI 429 insufficient_quota), the rejected promise can become an unhandled rejection and the gateway exits.
  • Embeddings auth was hardcoded to provider openai, so working openai-codex:* OAuth profiles (used for chat) were never attempted as a fallback when OPENAI_API_KEY is out of quota.
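As a rough illustration of the failure mode (the class layout and handler below are stand-ins, not the project's actual code): when a watch handler fires `sync()` without awaiting or catching it, any embeddings error becomes an unhandled rejection, and Node's default policy terminates the process.

```ts
// Hypothetical sketch of the bug, not the project's actual code.
import { watch } from "node:fs";

class MemorySearch {
  async sync(): Promise<void> {
    // Remote embedding call; rejects on e.g. OpenAI 429 insufficient_quota.
    throw new Error("429 insufficient_quota");
  }
}

const index = new MemorySearch();

// Fire-and-forget: the returned promise is dropped, so a rejection becomes
// an unhandledRejection and the gateway process exits.
watch("./memory", () => {
  void index.sync();
});
```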

Fix

  • Catch and log failures from watch/interval-triggered sync so the gateway stays up even when embeddings fail.
  • Allow agents.defaults.memorySearch.provider = "openai-codex".
  • When the provider is openai and the embeddings endpoint returns 429 insufficient_quota, retry once using openai-codex credentials (if available).
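A minimal sketch of what these two changes could look like; the helper names (`embedWithOpenAI`, `embedWithCodexOAuth`, `isInsufficientQuota`, `scheduleSync`) are hypothetical, and only the `agents.defaults.memorySearch.provider` key and the 429 `insufficient_quota` condition come from this PR.

```ts
// Hypothetical sketch: catch watch/interval-triggered sync failures and
// fall back to openai-codex credentials on 429 insufficient_quota.
// Config (per the PR): agents.defaults.memorySearch.provider may be
// "openai" or "openai-codex".

type EmbedFn = (texts: string[]) => Promise<number[][]>;

// Illustrative stand-ins for the real embedding clients.
declare const embedWithOpenAI: EmbedFn;
declare const embedWithCodexOAuth: EmbedFn;

function isInsufficientQuota(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return msg.includes("429") && msg.includes("insufficient_quota");
}

async function embed(texts: string[]): Promise<number[][]> {
  try {
    return await embedWithOpenAI(texts);
  } catch (err) {
    // Retry once with openai-codex OAuth credentials, if available.
    if (isInsufficientQuota(err)) {
      return embedWithCodexOAuth(texts);
    }
    throw err;
  }
}

// Interval/watch-triggered syncs must never become unhandled rejections:
// log the failure and keep the gateway running.
function scheduleSync(sync: () => Promise<void>, intervalMs: number): NodeJS.Timeout {
  return setInterval(() => {
    sync().catch((err) => {
      console.error("memorySearch sync failed; will retry next interval", err);
    });
  }, intervalMs);
}
```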

Tests

  • Add regression test that 429 insufficient_quota triggers a retry with openai-codex.
  • Add regression test ensuring watch-triggered sync failures do not raise unhandledRejection.
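The PR only states that these tests exist; a hedged sketch of the unhandledRejection check could look like the following (vitest is an assumption, as is the stand-in watch handler).

```ts
// Hypothetical vitest-style regression test: a failing watch-triggered sync
// must not surface as an unhandledRejection.
import { describe, expect, it } from "vitest";

describe("memorySearch watch sync", () => {
  it("does not raise unhandledRejection when sync fails", async () => {
    const rejections: unknown[] = [];
    const onRejection = (reason: unknown) => rejections.push(reason);
    process.on("unhandledRejection", onRejection);

    try {
      const failingSync = () => Promise.reject(new Error("429 insufficient_quota"));
      // Stand-in for the watch handler under test: it must attach a catch
      // handler instead of dropping the promise.
      const onWatchEvent = () => {
        failingSync().catch(() => {
          /* logged and swallowed */
        });
      };

      onWatchEvent();
      // Let the event loop turn so any unhandled rejection would surface.
      await new Promise((resolve) => setImmediate(resolve));

      expect(rejections).toHaveLength(0);
    } finally {
      process.off("unhandledRejection", onRejection);
    }
  });
});
```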

…back

- Catch watch/interval sync failures to avoid unhandled rejections that crash the gateway.
- Allow memorySearch provider openai-codex and retry embeddings with openai-codex credentials when OpenAI returns 429 insufficient_quota.
- Add regression tests for quota fallback and unhandled rejection protection.
@steipete steipete self-assigned this Jan 17, 2026
@steipete
Contributor

Does this work for you?

• Hi Peter. Clear signal.

Test result (live, just now)

Meaning

  • The Codex OAuth token (profile openai-codex:codex-cli) cannot call embeddings here.
  • So provider: "openai-codex" for memory embeddings fails; the 429 fallback in this PR won't save you unless the token has the model.request scope.

State

  • Used local branch test/codex-embeddings-oauth, now back on main.

If you want embeddings to work, use OPENAI_API_KEY or a provider key that has embeddings scope.
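To check locally whether a given credential can call embeddings at all, a quick probe against the standard OpenAI embeddings endpoint is enough; the token source below (`OPENAI_API_KEY`) is just an example, and a 401 (missing model.request scope) or 429 insufficient_quota will show up in the response body.

```ts
// Quick probe: does this token have embeddings access?
async function probeEmbeddings(token: string): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "text-embedding-3-small",
      input: "embedding scope check",
    }),
  });
  // 200 => the key can embed; 401/403/429 bodies explain why it cannot.
  console.log(res.status, await res.text());
}

probeEmbeddings(process.env.OPENAI_API_KEY ?? "").catch(console.error);
```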

@steipete steipete added the question Further information is requested label Jan 17, 2026
@steipete
Contributor

Thanks! I cherry-picked only the memory sync error handling + regression test + changelog entry.

Not merged: openai-codex embeddings provider/fallback (Codex OAuth token returns 401 missing model.request scope).

Landed in: 1a4fc8d

@steipete
Contributor

Roshan, please from now on always write the amount of testing you did for a PR. If you only point to an issue and don't test it, it takes me 5x as long to review the PR as it would to fix the issue myself.

@roshanasingh4
Contributor Author

Roshan, please from now on always write the amount of testing you did for a PR. If you only point to an issue and don't test it, it takes me 5x as long to review the PR as it would to fix the issue myself.

Noted

@steipete
Contributor

I wasted loads of time with that openai-codex crap. That's a hallucination in the issue, and I assumed you tested this.

@steipete steipete closed this Jan 17, 2026
@roshanasingh4
Contributor Author

I wasted loads of time with that openai-codex crap. That's a hallucination in the issue, and I assumed you tested this.

Sincere apologies, Peter. I will be more careful moving forward.

Development

Successfully merging this pull request may close these issues.

Crash: memorySearch embeddings 429 with no failover to openai-codex OAuth
