Improve cron/payouts/aggregate-due-commissions #3003
Conversation
⚠️ Rate limit exceeded

@steven-tey has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 12 minutes and 36 seconds before requesting another review. After the wait time has elapsed, a review can be re-triggered; we recommend spacing out commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Paid plans have higher rate limits than the trial, open-source, and free plans; in all cases, further reviews are re-allowed after a brief timeout. Please see our FAQ for further information.

📒 Files selected for processing (3)
Walkthrough

Introduces a batched internal payouts aggregation handler (used for both GET and POST) that verifies Vercel/QStash signatures, groups due commissions by program and partner, creates/updates payouts, marks commissions processed, and schedules subsequent batches via QStash when more work remains. The old single-pass cron route was removed and the vercel cron entry updated.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Vercel
    participant Handler
    participant DB as Database
    participant QStash
    rect rgb(240,248,255)
        Note over Vercel,Handler: Initial cron trigger (GET)
        Vercel->>Handler: GET /api/cron/payouts/aggregate-due-commissions
        Handler->>Handler: verify Vercel signature
        Handler->>DB: fetch due/pending commissions (limit BATCH_SIZE, skip)
        DB-->>Handler: commission batch
    end
    rect rgb(240,255,240)
        Note over Handler,DB: Per-program processing
        Handler->>Handler: group by program & partner
        Handler->>Handler: compute periodStart/periodEnd per group
        Handler->>DB: create/update payouts, set commissions.payoutId, mark processed
        DB-->>Handler: update confirmations
    end
    rect rgb(255,248,230)
        Note over Handler,QStash: Batch continuation
        alt batch size == BATCH_SIZE
            Handler->>QStash: schedule POST with skip += BATCH_SIZE (signed)
            QStash-->>Handler: scheduled
            Handler->>Vercel: 200 with scheduling summary
        else batch size < BATCH_SIZE
            Handler->>Vercel: 200 completion summary
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1)
208-232: Possible double-processing and payout overcount under concurrency.

Two runners can select the same commissions and both increment the payout amount, even if one updateMany raced and affected 0 rows. Guard the update and only increment by the actually updated set, ideally inside a transaction.
```diff
-  // update the commissions to have the payoutId
-  await prisma.commission.updateMany({
-    where: {
-      id: { in: commissions.map((c) => c.id) },
-    },
-    data: {
-      status: "processed",
-      payoutId: payoutToUse.id,
-    },
-  });
-
-  // if we're reusing a pending payout, we need to update the amount
-  if (existingPendingPayouts.find((p) => p.id === payoutToUse.id)) {
-    await prisma.payout.update({
-      where: {
-        id: payoutToUse.id,
-      },
-      data: {
-        amount: {
-          increment: totalEarnings,
-        },
-        periodEnd,
-      },
-    });
-  }
+  // Claim commissions defensively and increment by the actually updated set.
+  const ids = commissions.map((c) => c.id);
+  const updated = await prisma.commission.updateMany({
+    where: {
+      id: { in: ids },
+      status: "pending",
+      payoutId: null,
+    },
+    data: {
+      status: "processed",
+      payoutId: payoutToUse.id,
+    },
+  });
+
+  // If reusing an existing pending payout, increment only by what we truly claimed.
+  if (existingPendingPayouts.some((p) => p.id === payoutToUse.id) && updated.count > 0) {
+    const { _sum } = await prisma.commission.aggregate({
+      where: { id: { in: ids }, payoutId: payoutToUse.id },
+      _sum: { earnings: true },
+    });
+    const incrementBy = Number(_sum.earnings ?? 0);
+    if (incrementBy > 0) {
+      await prisma.payout.update({
+        where: { id: payoutToUse.id },
+        data: { amount: { increment: incrementBy }, periodEnd },
+      });
+    }
+  }
```

Optional: wrap per-partner work in `prisma.$transaction` to tighten race windows.
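The failure mode can be illustrated with a toy in-memory model (plain TypeScript, no Prisma; `claimCommissions` and the data are hypothetical stand-ins for the guarded `updateMany`). Because the guard filters on `status` and `payoutId`, a racing worker claims zero rows, so earnings are counted exactly once:

```typescript
type Commission = {
  id: string;
  status: "pending" | "processed";
  payoutId: string | null;
  earnings: number;
};

// Stand-in for a guarded prisma.commission.updateMany: only rows that are
// still pending and unassigned are claimed, and the caller learns exactly
// which rows it won.
function claimCommissions(
  rows: Commission[],
  ids: string[],
  payoutId: string,
): Commission[] {
  const claimed = rows.filter(
    (r) => ids.includes(r.id) && r.status === "pending" && r.payoutId === null,
  );
  for (const r of claimed) {
    r.status = "processed";
    r.payoutId = payoutId;
  }
  return claimed;
}

const rows: Commission[] = [
  { id: "c1", status: "pending", payoutId: null, earnings: 100 },
  { id: "c2", status: "pending", payoutId: null, earnings: 250 },
];

// Two workers race over the same id set.
let payoutAmount = 0;
const wonByA = claimCommissions(rows, ["c1", "c2"], "po_A");
payoutAmount += wonByA.reduce((sum, r) => sum + r.earnings, 0);
const wonByB = claimCommissions(rows, ["c1", "c2"], "po_B");
payoutAmount += wonByB.reduce((sum, r) => sum + r.earnings, 0);

console.log(wonByA.length, wonByB.length, payoutAmount); // 2 0 350
```

Without the status/payoutId conditions in the filter, worker B would claim the same rows and the payout amount would double-count to 700.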
🧹 Nitpick comments (4)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (4)
171-176: Redundant sort; DB already returns createdAt asc.

`dueCommissionsForProgram` is ordered by createdAt asc. Removing this sort saves work.

```diff
- // sort the commissions by createdAt
- const sortedCommissions = commissions.sort(
-   (a, b) => a.createdAt.getTime() - b.createdAt.getTime(),
- );
+ const sortedCommissions = commissions; // already ordered by DB
```
13-16: BATCH_SIZE tuning and configurability.

5000 may risk timeouts for large programs. Consider an env-configurable batch size (e.g., DUE_COMMISSIONS_BATCH_SIZE) with a conservative default (e.g., 500–1000).

```diff
-const BATCH_SIZE = 5000;
+const BATCH_SIZE = Number(process.env.DUE_COMMISSIONS_BATCH_SIZE ?? 1000);
```
37-48: Query performance and determinism.

Add a deterministic order on both grouping keys to ensure stable page boundaries and leverage an index on (status, payoutId, programId, partnerId). If not present, add it in a follow-up migration.

```diff
- orderBy: {
-   partnerId: "asc",
- },
+ orderBy: [{ programId: "asc" }, { partnerId: "asc" }],
```

Follow-up: ensure an index like commission(status, payoutId, programId, partnerId, createdAt).
244-255: Optional: Add a deduplication ID to prevent duplicate next-batch messages.

QStash's TypeScript SDK supports `deduplicationId` in `publishJSON` calls. The codebase already uses this pattern in similar batch-processing scenarios (e.g., `ab-test-scheduler.ts`, `process-domain-renewal-failure.ts`). Consider setting `deduplicationId: skip` (or similar) to avoid duplicate batch enqueues on retries.
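A minimal sketch of what deduplication buys here, using an in-memory queue rather than the real QStash client (the `publishJSON` function and the deduplication-ID format below are hypothetical):

```typescript
// Toy dedup queue: a second publish with the same deduplicationId is
// dropped, which is the behaviour that makes retried batch enqueues safe.
const seenIds = new Set<string>();
const queue: object[] = [];

function publishJSON(opts: { body: object; deduplicationId: string }): boolean {
  if (seenIds.has(opts.deduplicationId)) return false; // duplicate dropped
  seenIds.add(opts.deduplicationId);
  queue.push(opts.body);
  return true;
}

// A retry re-sends the same next-batch message with the same dedup key.
const first = publishJSON({
  body: { skip: 5000 },
  deduplicationId: "aggregate-due-commissions:5000",
});
const retry = publishJSON({
  body: { skip: 5000 },
  deduplicationId: "aggregate-due-commissions:5000",
});

console.log(first, retry, queue.length); // true false 1
```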
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (9 hunks)
- apps/web/vercel.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (2)
apps/web/lib/cron/verify-vercel.ts (1)
- verifyVercelSignature (3-20)

apps/web/lib/api/errors.ts (1)
- handleAndReturnErrorResponse (175-178)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (2)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1)
281-281: GET/POST alias export looks good.

Unified handler reduces divergence between methods. LGTM.
apps/web/vercel.json (1)
47-50: Code changes verified as correct; ready to merge.

The new cron entry `/api/cron/payouts/aggregate-due-commissions` with an hourly schedule is properly added to vercel.json. No lingering old `/api/cron/payouts` entries exist. CRON_SECRET is correctly referenced and validated in `apps/web/lib/cron/verify-vercel.ts`.

Note: Ensure CRON_SECRET is configured in the Vercel project environment (this is an operational deployment step outside code review scope).
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)
209-217: Guard commission updates against races.

Add `status: "pending"` and `payoutId: null` to the WHERE to avoid reprocessing/overwrites if another worker already processed them.

Apply:

```diff
- await prisma.commission.updateMany({
-   where: {
-     id: { in: commissions.map((c) => c.id) },
-   },
-   data: {
-     status: "processed",
-     payoutId: payoutToUse.id,
-   },
- });
+ await prisma.commission.updateMany({
+   where: {
+     id: { in: commissions.map((c) => c.id) },
+     status: "pending",
+     payoutId: null,
+   },
+   data: {
+     status: "processed",
+     payoutId: payoutToUse.id,
+   },
+ });
```
221-231: Make payout amount updates idempotent (recompute, don't increment).

Increment can double-count under concurrency. Recalculate from commissions linked to the payout, then set the amount.

Apply:

```diff
- await prisma.payout.update({
-   where: {
-     id: payoutToUse.id,
-   },
-   data: {
-     amount: {
-       increment: totalEarnings,
-     },
-     periodEnd,
-   },
- });
+ const sum = await prisma.commission.aggregate({
+   where: { payoutId: payoutToUse.id, status: "processed" },
+   _sum: { earnings: true },
+ });
+ await prisma.payout.update({
+   where: { id: payoutToUse.id },
+   data: {
+     amount: sum._sum.earnings ?? 0,
+     periodEnd,
+   },
+ });
```
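The difference between the two strategies shows up in a small self-contained model (plain TypeScript; the map and set below are hypothetical stand-ins for the payout's linked commissions). A retried delivery double-counts under increment but is harmless under recompute-and-set:

```typescript
// Earnings per commission, and the set of commissions linked to one payout.
const earningsByCommission = new Map<string, number>([
  ["c1", 100],
  ["c2", 250],
]);
const linkedToPayout = new Set<string>();

// Linking commissions to the payout is naturally idempotent (a Set).
function linkBatch(ids: string[]): void {
  for (const id of ids) linkedToPayout.add(id);
}

// Increment strategy: every delivery adds the batch total again.
const batchTotal = 350;
let incremented = 0;
linkBatch(["c1", "c2"]);
incremented += batchTotal;
linkBatch(["c1", "c2"]); // a retry delivers the same batch twice
incremented += batchTotal;

// Recompute strategy: derive the amount from linked commissions each time.
const recomputed = [...linkedToPayout].reduce(
  (sum, id) => sum + (earningsByCommission.get(id) ?? 0),
  0,
);

console.log(incremented, recomputed); // 700 350
```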
156-206: Wrap per-partner work in a transaction and use upsert/unique constraint for "one pending payout per (program, partner)".

Two workers can create duplicate pending payouts or diverge amounts. Use `prisma.$transaction` per-partner and `upsert` against a unique composite (e.g., a unique index on (programId, partnerId, status) for status = 'pending').

Example sketch:

```ts
await prisma.$transaction(async (tx) => {
  const payout = await tx.payout.upsert({
    where: {
      programId_partnerId_status: { programId, partnerId, status: "pending" },
    },
    create: {
      id: createId({ prefix: "po_" }),
      programId,
      partnerId,
      periodStart,
      periodEnd,
      amount: 0,
      description: `Dub Partners payout (${program.name})`,
    },
    update: { periodEnd },
  });

  await tx.commission.updateMany({
    where: {
      id: { in: commissions.map((c) => c.id) },
      status: "pending",
      payoutId: null,
    },
    data: { status: "processed", payoutId: payout.id },
  });

  const sum = await tx.commission.aggregate({
    where: { payoutId: payout.id, status: "processed" },
    _sum: { earnings: true },
  });

  await tx.payout.update({
    where: { id: payout.id },
    data: { amount: sum._sum.earnings ?? 0 },
  });
});
```

Note: add the composite unique index in your Prisma schema to enable the `where` on upsert.

Also applies to: 208-235
♻️ Duplicate comments (4)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (4)
21-35: Verify QStash signature before parsing/using POST body (and don't trust client-supplied pagination).

Parse/use nothing until verification succeeds. Also, ignore client-provided "skip" entirely.

Apply:

```diff
- } else if (req.method === "POST") {
-   const rawBody = await req.text();
-   const jsonBody = z
-     .object({
-       skip: z.number(),
-     })
-     .parse(JSON.parse(rawBody));
-   skip = jsonBody.skip;
-   await verifyQstashSignature({
-     req,
-     rawBody,
-   });
- }
+ } else if (req.method === "POST") {
+   const rawBody = await req.text();
+   await verifyQstashSignature({ req, rawBody });
+   // Ignore any client-provided pagination controls.
+ }
```
19-20: Offset pagination while mutating the dataset will drop pairs; remove skip and use deterministic order.

Always drain the first page repeatedly; do not use `skip`.

Apply:

```diff
- let skip: number = 0;
+ // No offset pagination; always rescan from the beginning.
@@
-   skip,
-   take: BATCH_SIZE,
-   orderBy: {
-     partnerId: "asc",
-   },
+   take: BATCH_SIZE,
+   // Deterministic order across batches
+   orderBy: [{ programId: "asc" }, { partnerId: "asc" }],
```

Also applies to: 43-47
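Why offset pagination drops rows here can be shown with a toy simulation (plain TypeScript; `page` is a hypothetical stand-in for a `WHERE status = 'pending'` query with skip/take). Processed rows leave the filtered result set, so advancing `skip` jumps over rows that shifted into earlier positions, while repeatedly draining the first page processes everything:

```typescript
type Row = { id: number; status: "pending" | "processed" };

const makeRows = (): Row[] =>
  Array.from({ length: 10 }, (_, i): Row => ({ id: i, status: "pending" }));

// Stand-in for: WHERE status = "pending" ORDER BY id, with skip/take.
const page = (rows: Row[], skip: number, take: number): Row[] =>
  rows.filter((r) => r.status === "pending").slice(skip, skip + take);

// Offset strategy: advancing skip over a shrinking result set skips rows.
const a = makeRows();
let skip = 0;
let batch = page(a, skip, 4);
while (batch.length > 0) {
  batch.forEach((r) => (r.status = "processed"));
  skip += 4; // rows 4..7 shifted into positions 0..3, so they get jumped over
  batch = page(a, skip, 4);
}
const dropped = a.filter((r) => r.status === "pending").length;

// Drain strategy: always re-read the first page until it comes back empty.
const b = makeRows();
let first = page(b, 0, 4);
while (first.length > 0) {
  first.forEach((r) => (r.status = "processed"));
  first = page(b, 0, 4);
}
const remaining = b.filter((r) => r.status === "pending").length;

console.log(dropped, remaining); // 4 0
```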
244-265: Schedule next batch without offset; always start from first page.

Processed pairs are no longer pending; rescanning page 1 is correct.

Apply:

```diff
- if (groupedCommissions.length === BATCH_SIZE) {
-   const qstashResponse = await qstash.publishJSON({
-     url: `${APP_DOMAIN_WITH_NGROK}/api/cron/payouts/aggregate-due-commissions`,
-     body: {
-       skip: skip + BATCH_SIZE,
-     },
-   });
+ if (groupedCommissions.length === BATCH_SIZE) {
+   const qstashResponse = await qstash.publishJSON({
+     url: `${APP_DOMAIN_WITH_NGROK}/api/cron/payouts/aggregate-due-commissions`,
+     body: {}, // always rescan from the beginning
+   });
```
272-276: Safer error logging for unknown values.

Avoid accessing `error.message` directly.

Apply:

```diff
- await log({
-   message: `Error aggregating commissions into payouts: ${error.message}`,
+ const msg = error instanceof Error ? error.message : String(error);
+ await log({
+   message: `Error aggregating commissions into payouts: ${msg}`,
    type: "errors",
    mention: true,
  });
```
🧹 Nitpick comments (2)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (2)
56-58: Prefer centralized logging over console.log.

Use your `log`/`logAndRespond` helper for consistency and structured logs.

Also applies to: 132-136, 166-171, 239-241
3-9: Remove unused imports after dropping client-provided pagination.

If `z` is no longer used, remove it to keep the file clean.

Apply:

```diff
 import { qstash } from "@/lib/cron";
 import { verifyQstashSignature } from "@/lib/cron/verify-qstash";
 import { verifyVercelSignature } from "@/lib/cron/verify-vercel";
 import { prisma } from "@dub/prisma";
 import { APP_DOMAIN_WITH_NGROK, log } from "@dub/utils";
-import { z } from "zod";
 import { logAndRespond } from "../../utils";
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (9 hunks)
- apps/web/vercel.json (1 hunks)
🔇 Additional comments (2)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (2)
13-14: Validate BATCH_SIZE against DB and memory limits.

5000 partner-program pairs can expand to many commissions per loop. Confirm this is safe for your largest tenants.

178-181: No changes required: earnings is Int, not Decimal.

The `Commission.earnings` field is defined as `Int` in the Prisma schema, not `Decimal`. Using the `+` operator for summing integers is safe and correct. The review comment's concern does not apply.

Likely an incorrect or invalid review comment.
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 3
♻️ Duplicate comments (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1)
253-257: Safer error logging for unknown error values.

Mirrors a prior suggestion; please guard access to `message`.

```diff
- await log({
-   message: `Error aggregating due commissions into payouts: ${error.message}`,
+ const msg = error instanceof Error ? error.message : String(error);
+ await log({
+   message: `Error aggregating due commissions into payouts: ${msg}`,
    type: "errors",
    mention: true,
  });
```
🧹 Nitpick comments (3)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)
90-94: Deterministic ordering across batches: add a tiebreaker.

createdAt can collide; add `id` asc as a secondary key.

```diff
- orderBy:
-   {
-     createdAt: "asc",
-   },
+ orderBy: [{ createdAt: "asc" }, { id: "asc" }],
```
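A quick illustration of the tiebreaker in plain TypeScript (the rows are hypothetical): without the secondary `id` key, the relative order of the two rows sharing a `createdAt` is unspecified, so page boundaries can shift between batches.

```typescript
type Row = { id: string; createdAt: number };

const rows: Row[] = [
  { id: "b", createdAt: 100 },
  { id: "a", createdAt: 100 }, // createdAt collision
  { id: "c", createdAt: 50 },
];

// Equivalent of orderBy: [{ createdAt: "asc" }, { id: "asc" }]
const sorted = [...rows].sort(
  (x, y) => x.createdAt - y.createdAt || x.id.localeCompare(y.id),
);

console.log(sorted.map((r) => r.id)); // [ 'c', 'a', 'b' ]
```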
12-12: Right-size and externalize batch size.

10k may exceed timeout/memory under load; make it env-configurable with a safer default (e.g., 1000).

```diff
-const BATCH_SIZE = 10000;
+const BATCH_SIZE = Number(process.env.PAYOUTS_BATCH_SIZE ?? "1000");
```
52-55: Prefer structured logging over console.log.

Use the existing `log` helper with consistent metadata for easier tracing/search.

Also applies to: 96-101, 107-110, 142-146, 218-221, 224-227, 233-235
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)

apps/web/lib/cron/verify-vercel.ts (1)
- verifyVercelSignature (3-20)

apps/web/lib/api/create-id.ts (1)
- createId (66-71)

apps/web/lib/api/errors.ts (1)
- handleAndReturnErrorResponse (175-178)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (3)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)
21-26: Good: verify QStash before parsing/using body.

Order is correct; the body is not trusted.

229-231: Good: re-queue next batch without offset/skip.

Prevents dropped records after mutations.
56-95: Add a composite index for Payout queries on the hot path.

The Payout query at lines 130-140 filters across `programId`, `partnerId`, and `status` together. Currently these are three separate single-column indexes; a composite index would optimize this pattern:

```prisma
@@index([programId, partnerId, status])
```

For Commission, the existing index `@@index([programId, createdAt, status, amount, earnings])` already covers the filtered/ordered fields (status, programId, createdAt), though with a different column order that may be suboptimal.
@coderabbitai full review

✅ Actions performed: Full review triggered.
Summary by CodeRabbit

- Refactor
- Chores