
Conversation

@steven-tey (Collaborator) commented Oct 25, 2025

Summary by CodeRabbit

  • Refactor

    • Commission aggregation moved to batched, resumable processing for more reliable, scalable payout creation.
    • GET/POST unified under a single handler for consistent behavior and improved error handling and logging.
    • Improved logging and per-period metrics for better operational visibility.
    • Added request signature verification to strengthen API security.
  • Chores

    • Updated scheduled task to run the new aggregated payout job.

vercel bot (Contributor) commented Oct 25, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| dub | Ready | Preview | Oct 25, 2025 9:48pm |

coderabbitai bot (Contributor) commented Oct 25, 2025

Warning

Rate limit exceeded

@steven-tey has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 12 minutes and 36 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between badaf48 and a9ece82.

📒 Files selected for processing (3)
  • apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1 hunks)
  • apps/web/app/(ee)/api/cron/payouts/route.ts (0 hunks)
  • apps/web/vercel.json (1 hunks)

Walkthrough

Introduces a batched internal payouts aggregation handler (used for both GET and POST) that verifies Vercel/QStash signatures, groups due commissions by program and partner, creates/updates payouts, marks commissions processed, and schedules subsequent batches via QStash when more work remains. The old single-pass cron route was removed and the vercel.json cron entry was updated accordingly.
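
For orientation, here is a minimal sketch of the control flow this walkthrough describes. It is not the PR's actual route: the batch size, response bodies, and the "@/lib/api/errors" import path are assumptions, while the helper names and call shapes are taken from the diffs quoted later in this conversation.

import { qstash } from "@/lib/cron";
import { verifyQstashSignature } from "@/lib/cron/verify-qstash";
import { verifyVercelSignature } from "@/lib/cron/verify-vercel";
import { handleAndReturnErrorResponse } from "@/lib/api/errors";
import { prisma } from "@dub/prisma";
import { APP_DOMAIN_WITH_NGROK } from "@dub/utils";

const BATCH_SIZE = 1000; // illustrative value only

async function handler(req: Request) {
  try {
    // 1. Authenticate: Vercel cron signature on GET, QStash signature on POST.
    if (req.method === "GET") {
      await verifyVercelSignature(req);
    } else if (req.method === "POST") {
      const rawBody = await req.text();
      await verifyQstashSignature({ req, rawBody });
    }

    // 2. Fetch one batch of due commissions that are not yet attached to a payout.
    const dueCommissions = await prisma.commission.findMany({
      where: { status: "pending", payoutId: null },
      orderBy: [{ programId: "asc" }, { partnerId: "asc" }],
      take: BATCH_SIZE,
    });

    // 3. Group by (programId, partnerId), create/update payouts, and mark the
    //    claimed commissions as processed (omitted here; see the review diffs below).

    // 4. If the batch was full, queue the next batch via QStash and return early.
    if (dueCommissions.length === BATCH_SIZE) {
      await qstash.publishJSON({
        url: `${APP_DOMAIN_WITH_NGROK}/api/cron/payouts/aggregate-due-commissions`,
        body: {},
      });
      return new Response("Scheduled next batch.");
    }

    return new Response("Finished aggregating due commissions.");
  } catch (error) {
    return handleAndReturnErrorResponse(error);
  }
}

export { handler as GET, handler as POST };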

Changes

Cohort / File(s) Summary
New batched aggregation handler
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts
Adds a batched handler (BATCH_SIZE + skip pagination) that verifies Vercel GET and QStash POST signatures, gathers due commissions per program (including custom or created-before holdingPeriod), groups by (partnerId, programId), computes totals and periodEnd, creates/updates payouts, marks commissions processed, logs progress, and schedules next batch via QStash when needed. Exports handler as both GET and POST.
Removed legacy single-pass route
apps/web/app/(ee)/api/cron/payouts/route.ts
Deletes the previous cron endpoint that performed a single-pass aggregation of pending commissions into payouts (including its GET and dynamic exports).
Cron configuration
apps/web/vercel.json
Removes /api/cron/payouts cron entry and adds /api/cron/payouts/aggregate-due-commissions with the same schedule (0 * * * *), preserving surrounding order.
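
For reference, the resulting crons entry in apps/web/vercel.json presumably looks something like the sketch below; only the path and schedule come from this summary, and the surrounding entries are omitted.

{
  "crons": [
    {
      "path": "/api/cron/payouts/aggregate-due-commissions",
      "schedule": "0 * * * *"
    }
  ]
}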

Sequence Diagram(s)

sequenceDiagram
    participant Vercel
    participant Handler
    participant DB as Database
    participant QStash

    rect rgb(240,248,255)
    Note over Vercel,Handler: Initial cron trigger (GET)
    Vercel->>Handler: GET /api/cron/payouts/aggregate-due-commissions
    Handler->>Handler: verify Vercel signature
    Handler->>DB: fetch due/pending commissions (limit BATCH_SIZE, skip)
    DB-->>Handler: commission batch
    end

    rect rgb(240,255,240)
    Note over Handler,DB: Per-program processing
    Handler->>Handler: group by program & partner
    Handler->>Handler: compute periodStart/periodEnd per group
    Handler->>DB: create/update payouts, set commissions.payoutId, mark processed
    DB-->>Handler: update confirmations
    end

    rect rgb(255,248,230)
    Note over Handler,QStash: Batch continuation
    alt batch size == BATCH_SIZE
        Handler->>QStash: schedule POST with skip += BATCH_SIZE (signed)
        QStash-->>Handler: scheduled
        Handler->>Vercel: 200 with scheduling summary
    else batch size < BATCH_SIZE
        Handler->>Vercel: 200 completion summary
    end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Review signature verification differences (Vercel vs QStash) and raw-body parsing.
  • Validate skip arithmetic, batching termination, and QStash scheduling payload/signing.
  • Inspect DB operations for grouping, payout upsert/update, and commission state transitions for atomicity.
  • Check logging and error handling to avoid partial/duplicate processing.

Possibly related PRs

Poem

🐰 I hop through batches, one by one,
QStash hums softly — the next job begun,
Programs gather carrots, totals spun,
Timestamps close the period, payouts done,
The rabbit twitches a satisfied run.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The PR title "Improve cron/payouts/aggregate-due-commissions" is directly related to the main changes in the pull request. The changeset refactors and enhances the commission aggregation logic by introducing a new batched processing handler with recursive QStash scheduling, replacing the previous implementation. The title accurately identifies the specific component being improved (the aggregate-due-commissions endpoint) and clearly summarizes the primary intent without being vague or misleading. While the title doesn't elaborate on implementation details like batching or QStash scheduling, PR titles are appropriately concise and such specifics belong in descriptions and code review.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@steven-tey (Collaborator, Author):
@coderabbitai full review

coderabbitai bot (Contributor) commented Oct 25, 2025

✅ Actions performed

Full review triggered.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1)

208-232: Possible double-processing and payout overcount under concurrency.

Two runners can select the same commissions and both increment payout amount, even if one updateMany raced and affected 0 rows. Guard the update and only increment by the actually updated set, ideally inside a transaction.

-        // update the commissions to have the payoutId
-        await prisma.commission.updateMany({
-          where: {
-            id: { in: commissions.map((c) => c.id) },
-          },
-          data: {
-            status: "processed",
-            payoutId: payoutToUse.id,
-          },
-        });
-
-        // if we're reusing a pending payout, we need to update the amount
-        if (existingPendingPayouts.find((p) => p.id === payoutToUse.id)) {
-          await prisma.payout.update({
-            where: {
-              id: payoutToUse.id,
-            },
-            data: {
-              amount: {
-                increment: totalEarnings,
-              },
-              periodEnd,
-            },
-          });
-        }
+        // Claim commissions defensively and increment by the actually updated set.
+        const ids = commissions.map((c) => c.id);
+        const updated = await prisma.commission.updateMany({
+          where: {
+            id: { in: ids },
+            status: "pending",
+            payoutId: null,
+          },
+          data: {
+            status: "processed",
+            payoutId: payoutToUse.id,
+          },
+        });
+
+        // If reusing an existing pending payout, increment only by what we truly claimed.
+        if (existingPendingPayouts.some((p) => p.id === payoutToUse.id) && updated.count > 0) {
+          const { _sum } = await prisma.commission.aggregate({
+            where: { id: { in: ids }, payoutId: payoutToUse.id },
+            _sum: { earnings: true },
+          });
+          const incrementBy = Number(_sum.earnings ?? 0);
+          if (incrementBy > 0) {
+            await prisma.payout.update({
+              where: { id: payoutToUse.id },
+              data: { amount: { increment: incrementBy }, periodEnd },
+            });
+          }
+        }

Optional: wrap per-partner work in prisma.$transaction to tighten race windows.
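
To make that concrete, here is a compact sketch of the optional wrapping, reusing ids, payoutToUse, existingPendingPayouts, and periodEnd from the suggestion above; it is a shape to adapt, not the exact change.

await prisma.$transaction(async (tx) => {
  // Claim only commissions that are still pending and unassigned.
  const updated = await tx.commission.updateMany({
    where: { id: { in: ids }, status: "pending", payoutId: null },
    data: { status: "processed", payoutId: payoutToUse.id },
  });

  // When reusing a pending payout, increment only by what this run actually claimed.
  if (existingPendingPayouts.some((p) => p.id === payoutToUse.id) && updated.count > 0) {
    const { _sum } = await tx.commission.aggregate({
      where: { id: { in: ids }, payoutId: payoutToUse.id },
      _sum: { earnings: true },
    });

    await tx.payout.update({
      where: { id: payoutToUse.id },
      data: { amount: { increment: _sum.earnings ?? 0 }, periodEnd },
    });
  }
});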

🧹 Nitpick comments (4)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (4)

171-176: Redundant sort; DB already returns createdAt asc.

dueCommissionsForProgram is ordered by createdAt asc. Removing this sort saves work.

-        // sort the commissions by createdAt
-        const sortedCommissions = commissions.sort(
-          (a, b) => a.createdAt.getTime() - b.createdAt.getTime(),
-        );
+        const sortedCommissions = commissions; // already ordered by DB

13-16: BATCH_SIZE tuning and configurability.

5000 may risk timeouts for large programs. Consider env-configurable batch size (e.g., DUE_COMMISSIONS_BATCH_SIZE) with a conservative default (e.g., 500–1000).

-const BATCH_SIZE = 5000;
+const BATCH_SIZE = Number(process.env.DUE_COMMISSIONS_BATCH_SIZE ?? 1000);

37-48: Query performance and determinism.

Add deterministic order on both grouping keys to ensure stable page boundaries and leverage an index on (status, payoutId, programId, partnerId). If not present, add it in a follow-up migration.

-      orderBy: {
-        partnerId: "asc",
-      },
+      orderBy: [{ programId: "asc" }, { partnerId: "asc" }],

Follow-up: ensure an index like commission(status, payoutId, programId, partnerId, createdAt).
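
A hypothetical Prisma schema sketch of that follow-up; only the @@index line reflects the suggestion, and the rest of the Commission model is elided.

model Commission {
  // ...existing fields and relations elided...

  @@index([status, payoutId, programId, partnerId, createdAt])
}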


244-255: Optional: Add deduplication ID to prevent duplicate next-batch messages.

QStash's TypeScript SDK supports deduplicationId in publishJSON calls. The codebase already uses this pattern in similar batch-processing scenarios (e.g., ab-test-scheduler.ts, process-domain-renewal-failure.ts). Consider setting deduplicationId: skip (or similar) to avoid duplicate batch enqueues on retries.
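
A hedged sketch of that option, assuming the next-batch publish keeps its current shape; the exact ID format is illustrative, not prescribed by the SDK.

const qstashResponse = await qstash.publishJSON({
  url: `${APP_DOMAIN_WITH_NGROK}/api/cron/payouts/aggregate-due-commissions`,
  body: { skip: skip + BATCH_SIZE },
  // Retries of the same batch reuse the same ID, so QStash drops the duplicates.
  deduplicationId: `aggregate-due-commissions-${skip + BATCH_SIZE}`,
});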

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between badaf48 and 4515180.

📒 Files selected for processing (2)
  • apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (9 hunks)
  • apps/web/vercel.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (2)
apps/web/lib/cron/verify-vercel.ts (1)
  • verifyVercelSignature (3-20)
apps/web/lib/api/errors.ts (1)
  • handleAndReturnErrorResponse (175-178)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build
🔇 Additional comments (2)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1)

281-281: GET/POST alias export looks good.

Unified handler reduces divergence between methods. LGTM.

apps/web/vercel.json (1)

47-50: Code changes verified as correct; ready to merge.

The new cron entry /api/cron/payouts/aggregate-due-commissions with hourly schedule is properly added to vercel.json. No lingering old /api/cron/payouts entries exist. CRON_SECRET is correctly referenced and validated in apps/web/lib/cron/verify-vercel.ts.

Note: Ensure CRON_SECRET is configured in the Vercel project environment (this is an operational deployment step outside code review scope).

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)

209-217: Guard commission updates against races.

Add status: "pending" and payoutId: null to the WHERE to avoid reprocessing/overwrites if another worker already processed them.

Apply:

-        await prisma.commission.updateMany({
-          where: {
-            id: { in: commissions.map((c) => c.id) },
-          },
-          data: {
-            status: "processed",
-            payoutId: payoutToUse.id,
-          },
-        });
+        await prisma.commission.updateMany({
+          where: {
+            id: { in: commissions.map((c) => c.id) },
+            status: "pending",
+            payoutId: null,
+          },
+          data: {
+            status: "processed",
+            payoutId: payoutToUse.id,
+          },
+        });

221-231: Make payout amount updates idempotent (recompute, don’t increment).

Increment can double-count under concurrency. Recalculate from commissions linked to the payout, then set the amount.

Apply:

-          await prisma.payout.update({
-            where: {
-              id: payoutToUse.id,
-            },
-            data: {
-              amount: {
-                increment: totalEarnings,
-              },
-              periodEnd,
-            },
-          });
+          const sum = await prisma.commission.aggregate({
+            where: { payoutId: payoutToUse.id, status: "processed" },
+            _sum: { earnings: true },
+          });
+          await prisma.payout.update({
+            where: { id: payoutToUse.id },
+            data: {
+              amount: sum._sum.earnings ?? 0,
+              periodEnd,
+            },
+          });

156-206: Wrap per-partner work in a transaction and use upsert/unique constraint for “one pending payout per (program, partner)”.

Two workers can create duplicate pending payouts or diverge amounts. Use prisma.$transaction per-partner and upsert against a unique composite (e.g., unique index on (programId, partnerId, status) for status='pending').

Example sketch:

await prisma.$transaction(async (tx) => {
  const payout = await tx.payout.upsert({
    where: { programId_partnerId_status: { programId, partnerId, status: "pending" } },
    create: { id: createId({ prefix: "po_" }), programId, partnerId, periodStart, periodEnd, amount: 0, description: `Dub Partners payout (${program.name})` },
    update: { periodEnd },
  });

  await tx.commission.updateMany({
    where: { id: { in: commissions.map(c => c.id) }, status: "pending", payoutId: null },
    data: { status: "processed", payoutId: payout.id },
  });

  const sum = await tx.commission.aggregate({
    where: { payoutId: payout.id, status: "processed" },
    _sum: { earnings: true },
  });

  await tx.payout.update({ where: { id: payout.id }, data: { amount: sum._sum.earnings ?? 0 } });
});

Note: add the composite unique index in your Prisma schema to enable the where on upsert.

Also applies to: 208-235
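
A hypothetical Prisma sketch of that constraint; the generated compound-unique name matches the programId_partnerId_status key used in the upsert above.

model Payout {
  // ...existing fields elided...

  @@unique([programId, partnerId, status])
}

Note that a plain composite unique also caps non-pending statuses at one payout per (program, partner), so this only fits if the data model tolerates that; databases without partial unique indexes (e.g., MySQL) cannot scope the constraint to status = "pending" alone.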

♻️ Duplicate comments (4)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (4)

21-35: Verify QStash signature before parsing/using POST body (and don’t trust client-supplied pagination).

Parse/use nothing until verification succeeds. Also, ignore client-provided “skip” entirely.

Apply:

-    } else if (req.method === "POST") {
-      const rawBody = await req.text();
-      const jsonBody = z
-        .object({
-          skip: z.number(),
-        })
-        .parse(JSON.parse(rawBody));
-      skip = jsonBody.skip;
-      await verifyQstashSignature({
-        req,
-        rawBody,
-      });
-    }
+    } else if (req.method === "POST") {
+      const rawBody = await req.text();
+      await verifyQstashSignature({ req, rawBody });
+      // Ignore any client-provided pagination controls.
+    }

19-20: Offset pagination while mutating the dataset will drop pairs; remove skip and use deterministic order.

Always drain the first page repeatedly; do not use skip.

Apply:

-    let skip: number = 0;
+    // No offset pagination; always rescan from the beginning.

@@
-      skip,
-      take: BATCH_SIZE,
-      orderBy: {
-        partnerId: "asc",
-      },
+      take: BATCH_SIZE,
+      // Deterministic order across batches
+      orderBy: [{ programId: "asc" }, { partnerId: "asc" }],

Also applies to: 43-47


244-265: Schedule next batch without offset; always start from first page.

Processed pairs are no longer pending; rescanning page 1 is correct.

Apply:

-    if (groupedCommissions.length === BATCH_SIZE) {
-      const qstashResponse = await qstash.publishJSON({
-        url: `${APP_DOMAIN_WITH_NGROK}/api/cron/payouts/aggregate-due-commissions`,
-        body: {
-          skip: skip + BATCH_SIZE,
-        },
-      });
+    if (groupedCommissions.length === BATCH_SIZE) {
+      const qstashResponse = await qstash.publishJSON({
+        url: `${APP_DOMAIN_WITH_NGROK}/api/cron/payouts/aggregate-due-commissions`,
+        body: {}, // always rescan from the beginning
+      });

272-276: Safer error logging for unknown values.

Avoid accessing error.message directly.

Apply:

-    await log({
-      message: `Error aggregating commissions into payouts: ${error.message}`,
+    const msg = error instanceof Error ? error.message : String(error);
+    await log({
+      message: `Error aggregating commissions into payouts: ${msg}`,
       type: "errors",
       mention: true,
     });
🧹 Nitpick comments (2)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (2)

56-58: Prefer centralized logging over console.log.

Use your log/logAndRespond helper for consistency and structured logs.

Also applies to: 132-136, 166-171, 239-241
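
As an illustration rather than the PR's code, one of those console.log calls could become something like the following; the "cron" log type is an assumption about what the shared helper accepts.

await log({
  message: `aggregate-due-commissions: processed ${groupedCommissions.length} partner/program pairs in this batch`,
  type: "cron",
});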


3-9: Remove unused imports after dropping client-provided pagination.

If z is no longer used, remove it to keep the file clean.

Apply:

 import { qstash } from "@/lib/cron";
 import { verifyQstashSignature } from "@/lib/cron/verify-qstash";
 import { verifyVercelSignature } from "@/lib/cron/verify-vercel";
 import { prisma } from "@dub/prisma";
 import { APP_DOMAIN_WITH_NGROK, log } from "@dub/utils";
-import { z } from "zod";
 import { logAndRespond } from "../../utils";
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between badaf48 and 4515180.

📒 Files selected for processing (2)
  • apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (9 hunks)
  • apps/web/vercel.json (1 hunks)
🔇 Additional comments (2)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (2)

13-14: Validate BATCH_SIZE against DB and memory limits.

5000 partner-program pairs can expand to many commissions per loop. Confirm this is safe for your largest tenants.


178-181: No changes required—earnings is Int, not Decimal.

The Commission.earnings field is defined as Int in the Prisma schema, not Decimal. Using the + operator for summing integers is safe and correct. The review comment's concern does not apply.

Likely an incorrect or invalid review comment.

@steven-tey (Collaborator, Author):
@coderabbitai full review

coderabbitai bot (Contributor) commented Oct 25, 2025

✅ Actions performed

Full review triggered.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1)

253-257: Safer error logging for unknown error values.

Mirrors a prior suggestion; please guard access to message.

-    await log({
-      message: `Error aggregating due commissions into payouts: ${error.message}`,
+    const msg = error instanceof Error ? error.message : String(error);
+    await log({
+      message: `Error aggregating due commissions into payouts: ${msg}`,
       type: "errors",
       mention: true,
     });
🧹 Nitpick comments (3)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)

90-94: Deterministic ordering across batches: add a tiebreaker.

createdAt can collide; add id asc as secondary key.

-        orderBy:
-          {
-            createdAt: "asc",
-          },
+        orderBy: [{ createdAt: "asc" }, { id: "asc" }],

12-12: Right-size and externalize batch size.

10k may exceed timeout/memory under load; make it env-configurable with a safer default (e.g., 1000).

-const BATCH_SIZE = 10000;
+const BATCH_SIZE = Number(process.env.PAYOUTS_BATCH_SIZE ?? "1000");

52-55: Prefer structured logging over console.log.

Use the existing log helper with consistent metadata for easier tracing/search.

Also applies to: 96-101, 107-110, 142-146, 218-221, 224-227, 233-235

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4515180 and cd53048.

📒 Files selected for processing (1)
  • apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)
apps/web/lib/cron/verify-vercel.ts (1)
  • verifyVercelSignature (3-20)
apps/web/lib/api/create-id.ts (1)
  • createId (66-71)
apps/web/lib/api/errors.ts (1)
  • handleAndReturnErrorResponse (175-178)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build
🔇 Additional comments (3)
apps/web/app/(ee)/api/cron/payouts/aggregate-due-commissions/route.ts (3)

21-26: Good: verify QStash before parsing/using body.

Order is correct; body is not trusted.


229-231: Good: re-queue next batch without offset/skip.

Prevents dropped records after mutations.


56-95: Add composite index for Payout queries on hot path.

The Payout query at lines 130-140 filters across programId, partnerId, and status together. Currently these are three separate single-column indexes—a composite index would optimize this pattern:

@@index([programId, partnerId, status])

For Commission, the existing index @@index([programId, createdAt, status, amount, earnings]) at lines 56-95 already covers the filtered/ordered fields (status, programId, createdAt), though with a different column order that may be suboptimal.

@steven-tey (Collaborator, Author):
@coderabbitai full review

coderabbitai bot (Contributor) commented Oct 25, 2025

✅ Actions performed

Full review triggered.

@steven-tey merged commit 31bfdf5 into main on Oct 25, 2025 (7 checks passed).
@steven-tey deleted the improve-payouts-cron branch on October 25, 2025 at 21:57.
coderabbitai bot mentioned this pull request on Nov 5, 2025.
This was referenced on Nov 21, 2025.

Labels: None yet
Projects: None yet
2 participants