A light-weight job scheduling library for Node.js
Migrating from v5? See the Migration Guide for all breaking changes.
- ESM-only - Modern ES modules (Node.js 18+)
- Pluggable backend system - New `AgendaBackend` interface for storage and notifications
- Real-time notifications - Optional notification channels for instant job processing
- MongoDB 6 driver - Updated to latest MongoDB driver
- Persistent job logging - Optional structured logging of job lifecycle events to database
- Monorepo - Now includes `agenda`, `agendash`, and `agenda-rest` packages
- Complete rewrite in TypeScript (fully typed!)
- Pluggable backend - MongoDB by default, implement your own (see Custom Backend Driver)
- Real-time notifications - Use Redis, PostgreSQL LISTEN/NOTIFY, or custom pub/sub
- MongoDB 6 driver support
- `touch()` with optional progress parameter (0-100)
- `getRunningStats()` for monitoring
- Fork mode for sandboxed job execution
- Automatic connection handling
- Creates indexes automatically by default
- Minimal overhead. Agenda aims to keep its code base small.
- Mongo backed persistence layer.
- Promise-based API.
- Scheduling with configurable priority, concurrency, repeating and persistence of job results.
- Scheduling via cron or human readable syntax.
- Event backed job queue that you can hook into.
- Agenda-rest: optional standalone REST API.
- Inversify-agenda - Some utilities for the development of agenda workers with Inversify.
- Agendash: optional standalone web-interface.
Since there are several job queue solutions available, here is a table comparing them to help you choose the one that best suits your needs.
| Feature | BullMQ | Bull | Bee | pg-boss | Agenda |
|---|---|---|---|---|---|
| Backend | redis | redis | redis | postgres | mongo, postgres, redis |
| Status | Active | Maintenance | Stale | Active | Active |
| TypeScript | ✓ | ✓ | | ✓ | ✓ |
| Priorities | ✓ | ✓ | | ✓ | ✓ |
| Concurrency | ✓ | ✓ | ✓ | ✓ | ✓ |
| Delayed jobs | ✓ | ✓ | | ✓ | ✓ |
| Global events | ✓ | ✓ | | | ✓ |
| Rate Limiter | ✓ | ✓ | | ✓ | |
| Debouncing | ✓ | | | ✓ | ✓ |
| Pause/Resume | ✓ | ✓ | ✓ | | |
| Sandboxed worker | ✓ | ✓ | | | ✓ |
| Repeatable jobs | ✓ | ✓ | | ✓ | ✓ |
| Auto-retry with backoff | ✓ | ✓ | | ✓ | ✓ |
| Dead letter queues | ✓ | ✓ | | ✓ | |
| Job dependencies | ✓ | | | | |
| Atomic ops | ✓ | ✓ | ✓ | ✓ | ~ |
| Persistence | ✓ | ✓ | ✓ | ✓ | ✓ |
| UI | ✓ | ✓ | | | ✓ |
| REST API | | | | | ✓ |
| Central (Scalable) Queue | ✓ | ✓ | ✓ | | |
| Supports long running jobs | | | | | ✓ |
| Human-readable intervals | | | | | ✓ |
| Real-time notifications | ✓ | | | ✓ | ✓ |
| Optimized for | Jobs / Messages | Jobs / Messages | Messages | Jobs | Jobs |
Kudos for the comparison chart go to the Bull maintainers.
Install via NPM
npm install agenda
For MongoDB: Install the official MongoDB backend:
npm install @agendajs/mongo-backend
You will need a working MongoDB database (v4+).
For PostgreSQL: Install the official PostgreSQL backend:
npm install @agendajs/postgres-backend
For Redis: Install the official Redis backend:
npm install @agendajs/redis-backend
import { Agenda } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
const mongoConnectionString = 'mongodb://127.0.0.1/agenda';
const agenda = new Agenda({
backend: new MongoBackend({ address: mongoConnectionString })
});
// Or override the default collection name:
// const agenda = new Agenda({
// backend: new MongoBackend({ address: mongoConnectionString, collection: 'jobCollectionName' })
// });
// or pass in an existing mongodb-native Db instance
// const agenda = new Agenda({
// backend: new MongoBackend({ mongo: myMongoDb })
// });
agenda.define('delete old users', async job => {
await User.remove({ lastLogIn: { $lt: twoDaysAgo } });
});
(async function () {
// IIFE to give access to async/await
await agenda.start();
await agenda.every('3 minutes', 'delete old users');
// Alternatively, you could also do:
await agenda.every('*/3 * * * *', 'delete old users');
})();

agenda.define(
'send email report',
async job => {
const { to } = job.attrs.data;
await emailClient.send({
to,
from: '[email protected]',
subject: 'Email Report',
body: '...'
});
},
{ priority: 'high', concurrency: 10 }
);
(async function () {
await agenda.start();
await agenda.schedule('in 20 minutes', 'send email report', { to: '[email protected]' });
})();

(async function () {
const weeklyReport = agenda.create('send email report', { to: '[email protected]' });
await agenda.start();
await weeklyReport.repeatEvery('1 week').save();
})();

See also https://agenda.github.io/agenda/
Agenda's basic control structure is an instance of an agenda. Each agenda instance is mapped to a database collection and loads its jobs from there.
- Migration Guide (v5 to v6)
- Configuring an agenda
- Agenda Events
- Defining job processors
- Automatic Retry with Backoff
- Job Debouncing
- Auto-Cleanup of Completed Jobs
- Persistent Job Logging
- Creating jobs
- Managing jobs
- Starting the job processor
- Multiple job processors
- Manually working with jobs
- Job Queue Events
- Frequently asked questions
- Example Project structure
- Known Issues
- Debugging Issues
- Acknowledgements
Possible agenda config options:
{
// Required: Backend for storage (and optionally notifications)
backend: AgendaBackend;
// Optional: Override notification channel from backend
notificationChannel?: NotificationChannel;
// Agenda instance name (used in lastModifiedBy field)
name?: string;
// Job processing options
defaultConcurrency?: number;
processEvery?: string | number;
maxConcurrency?: number;
defaultLockLimit?: number;
lockLimit?: number;
defaultLockLifetime?: number;
// Auto-remove one-time jobs after successful completion
removeOnComplete?: boolean;
// Persistent job logging
logging?: boolean | JobLogger | { logger?: JobLogger; default?: boolean };
// Fork mode options
forkHelper?: { path: string; options?: ForkOptions };
forkedWorker?: boolean;
}

MongoBackend config options:
{
// MongoDB connection string
address?: string;
// Or existing MongoDB database instance
mongo?: Db;
// Collection name (default: 'agendaJobs')
collection?: string;
// MongoDB client options
options?: MongoClientOptions;
// Create indexes on connect (default: true)
ensureIndex?: boolean;
// Sort order for job queries
sort?: { [key: string]: SortDirection };
// Name for lastModifiedBy field
name?: string;
}

Agenda uses Human Interval for specifying the intervals. It supports the following units:

seconds, minutes, hours, days, weeks, months (assumes 30 days), years (assumes 365 days)
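As a rough illustration, the unit assumptions above translate to milliseconds as follows. This is plain-JS arithmetic only; the actual parsing is done by the human-interval package:

```javascript
// Unit arithmetic implied by the assumptions above (months = 30 days,
// years = 365 days). Illustrative only -- not the human-interval parser.
const MS = { second: 1000 };
MS.minute = 60 * MS.second;
MS.hour = 60 * MS.minute;
MS.day = 24 * MS.hour;
MS.week = 7 * MS.day;
MS.month = 30 * MS.day; // assumes 30 days
MS.year = 365 * MS.day; // assumes 365 days

// '3 days and 4 hours' in milliseconds:
console.log(3 * MS.day + 4 * MS.hour); // 273600000
```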
More sophisticated examples
agenda.processEvery('one minute');
agenda.processEvery('1.5 minutes');
agenda.processEvery('3 days and 4 hours');
agenda.processEvery('3 days, 4 hours and 36 seconds');

Agenda uses a pluggable backend system. The backend provides storage and optionally real-time notifications.
Using MongoBackend:
import { Agenda } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
// Via connection string
const agenda = new Agenda({
backend: new MongoBackend({ address: 'mongodb://localhost:27017/agenda-test' })
});
// Via existing MongoDB connection
const agenda = new Agenda({
backend: new MongoBackend({ mongo: mongoClientInstance.db('agenda-test') })
});
// With custom collection name
const agenda = new Agenda({
backend: new MongoBackend({
address: 'mongodb://localhost:27017/agenda-test',
collection: 'myJobs'
})
});

Using PostgresBackend:
npm install @agendajs/postgres-backend

import { Agenda } from 'agenda';
import { PostgresBackend } from '@agendajs/postgres-backend';
const agenda = new Agenda({
backend: new PostgresBackend({
connectionString: 'postgresql://user:pass@localhost:5432/mydb'
})
});
// PostgresBackend provides both storage AND real-time notifications via LISTEN/NOTIFY

Using RedisBackend:
npm install @agendajs/redis-backend

import { Agenda } from 'agenda';
import { RedisBackend } from '@agendajs/redis-backend';
const agenda = new Agenda({
backend: new RedisBackend({
connectionString: 'redis://localhost:6379'
})
});
// RedisBackend provides both storage AND real-time notifications via Pub/Sub

Custom backend:
You can implement a custom backend by implementing the AgendaBackend interface. See Custom Database Driver for details.
const agenda = new Agenda({ backend: myCustomBackend });

Agenda will emit a ready event (see Agenda Events) when properly connected to the backend.
It is safe to call agenda.start() without waiting for this event, as this is handled internally.
By default, Agenda uses periodic polling (controlled by processEvery) to check for new jobs. For faster job processing in distributed environments, you can configure a notification channel that triggers immediate job processing when jobs are created or updated.
Using the built-in InMemoryNotificationChannel (single process):
import { Agenda, InMemoryNotificationChannel } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
processEvery: '30 seconds', // Fallback polling interval
notificationChannel: new InMemoryNotificationChannel()
});

Using the fluent API:
const channel = new InMemoryNotificationChannel();
const agenda = new Agenda({ backend: new MongoBackend({ mongo: db }) })
.notifyVia(channel);

The InMemoryNotificationChannel is useful for testing and single-process deployments. For multi-process or distributed deployments, you can implement custom notification channels using Redis pub/sub, PostgreSQL LISTEN/NOTIFY, or other messaging systems.
Unified backend with notifications:
A backend can provide both storage AND notifications. For example, a PostgreSQL backend could use LISTEN/NOTIFY:
// PostgresBackend implements both repository and notificationChannel
const agenda = new Agenda({
backend: new PostgresBackend({ connectionString: 'postgres://...' })
// No need for separate notificationChannel - PostgresBackend provides it!
});

Mixing backends (storage from one system, notifications from another):
// MongoDB for storage, Redis for notifications
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
notificationChannel: new RedisNotificationChannel({ url: 'redis://...' })
});

Implementing a custom notification channel:
Extend BaseNotificationChannel or implement NotificationChannel:
import { BaseNotificationChannel, JobNotification } from 'agenda';
class RedisNotificationChannel extends BaseNotificationChannel {
async connect(): Promise<void> {
// Connect to Redis
this.setState('connected');
}
async disconnect(): Promise<void> {
// Disconnect from Redis
this.setState('disconnected');
}
async publish(notification: JobNotification): Promise<void> {
// Publish to Redis channel
}
}

The notification channel is automatically connected when agenda.start() is called and disconnected when agenda.stop() is called.
Sets the lastModifiedBy field to name in the jobs collection.
Useful if you have multiple job processors (agendas) and want to see which
job queue last ran the job.
agenda.name(os.hostname() + '-' + process.pid);

You can also specify it during instantiation
const agenda = new Agenda({ name: 'test queue' });

Takes a string interval which can be either a traditional JavaScript number,
or a string such as 3 minutes.
Specifies the frequency at which agenda will query the database looking for jobs
that need to be processed. Agenda internally uses setTimeout to guarantee that
jobs run at the right time (to within ~3ms).
Decreasing the frequency will result in fewer database queries, but more jobs being stored in memory.
Also worth noting is that if the job queue is shutdown, any jobs stored in memory
that haven't run will still be locked, meaning that you may have to wait for the
lock to expire. By default it is '5 seconds'.
agenda.processEvery('1 minute');

You can also specify it during instantiation
const agenda = new Agenda({ processEvery: '30 seconds' });

Takes a number which specifies the max number of jobs that can be running at
any given moment. By default it is 20.
agenda.maxConcurrency(20);

You can also specify it during instantiation
const agenda = new Agenda({ maxConcurrency: 20 });

Takes a number which specifies the default number of a specific job that can be running at
any given moment. By default it is 5.
agenda.defaultConcurrency(5);

You can also specify it during instantiation
const agenda = new Agenda({ defaultConcurrency: 5 });

Takes a number which specifies the max number of jobs that can be locked at any given moment. By default it is 0 for no max.
agenda.lockLimit(0);

You can also specify it during instantiation
const agenda = new Agenda({ lockLimit: 0 });

Takes a number which specifies the default number of a specific job that can be locked at any given moment. By default it is 0 for no max.
agenda.defaultLockLimit(0);

You can also specify it during instantiation
const agenda = new Agenda({ defaultLockLimit: 0 });

Takes a number which specifies the default lock lifetime in milliseconds. By
default it is 10 minutes. This can be overridden by specifying the
lockLifetime option to a defined job.
A job will unlock if it is finished (ie. the returned Promise resolves/rejects
or done is specified in the params and done() is called) before the
lockLifetime. The lock is useful if the job crashes or times out.
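The lock-expiry rule can be sketched as a simple predicate. This is an illustration of the behavior described above, not Agenda's internal code, and `isLockExpired` is a hypothetical helper:

```javascript
// Sketch of the lock-expiry rule: a job locked at `lockedAt` is considered
// expired once more than `lockLifetime` ms have elapsed without it finishing.
// Illustrative only -- not Agenda's internal implementation.
function isLockExpired(lockedAt, lockLifetime, now = Date.now()) {
  return now - lockedAt > lockLifetime;
}

const tenMinutes = 10 * 60 * 1000; // the default lockLifetime
console.log(isLockExpired(0, tenMinutes, 11 * 60 * 1000)); // true (lock can be re-acquired)
console.log(isLockExpired(0, tenMinutes, 5 * 60 * 1000)); // false (job still locked)
```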
agenda.defaultLockLifetime(10000);

You can also specify it during instantiation
const agenda = new Agenda({ defaultLockLifetime: 10000 });

An instance of an agenda will emit the following events:
- `ready` - called when the Agenda mongo connection is successfully opened and indices created. If you're passing agenda an existing connection, you shouldn't need to listen for this, as `agenda.start()` will not resolve until indices have been created. If you're using the `db` options, or call `database`, then you may still need to listen for the `ready` event before saving jobs. `agenda.start()` will still wait for the connection to be opened.
- `error` - called when the Agenda mongo connection process has thrown an error
await agenda.start();

Before you can use a job, you must define its processing behavior.
Defines a job with the name of jobName. When a job of jobName gets run, it
will be passed to fn(job, done). To maintain asynchronous behavior, you may
either provide a Promise-returning function in fn or provide done as a
second parameter to fn. If done is specified in the function signature, you
must call done() when you are processing the job. If your function is
synchronous or returns a Promise, you may omit done from the signature.
options is an optional argument which can overwrite the defaults. It can take
the following:
- `concurrency`: `number` - maximum number of that job that can be running at once (per instance of agenda)
- `lockLimit`: `number` - maximum number of that job that can be locked at once (per instance of agenda)
- `lockLifetime`: `number` - interval in ms of how long the job stays locked for (see multiple job processors for more info). A job will automatically unlock once a returned promise resolves/rejects (or if `done` is specified in the signature and `done()` is called).
- `priority`: `(lowest|low|normal|high|highest|number)` - specifies the priority of the job. Higher priority jobs will run first. See the priority mapping below.
- `backoff`: `BackoffStrategy` - a function that determines retry delay on failure. See Automatic Retry with Backoff for details.
- `removeOnComplete`: `boolean` - automatically remove the job from the database after successful completion (one-time jobs only). Overrides the global `removeOnComplete` setting. See Auto-Cleanup of Completed Jobs for details.
- `logging`: `boolean` - override whether this job type's lifecycle events are persisted. See Persistent Job Logging.
Priority mapping:
{
highest: 20,
high: 10,
normal: 0,
low: -10,
lowest: -20
}
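As an illustration, resolving a priority value (name or number) to its numeric form might look like this. `parsePriority` is a hypothetical helper for the sketch, not part of Agenda's public API:

```javascript
// Sketch: resolving a job priority (name or number) to its numeric value,
// mirroring the mapping above. Illustrative only.
const PRIORITY_MAP = { highest: 20, high: 10, normal: 0, low: -10, lowest: -20 };

function parsePriority(priority) {
  if (typeof priority === 'number') return priority; // numbers pass through
  if (priority in PRIORITY_MAP) return PRIORITY_MAP[priority];
  throw new Error(`Unknown priority: ${priority}`);
}

console.log(parsePriority('high')); // 10
console.log(parsePriority(-3)); // -3
```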
Async Job:
agenda.define('some long running job', async job => {
  const data = await doSomeLengthyTask();
await formatThatData(data);
await sendThatData(data);
});

Async Job (using done):
agenda.define('some long running job', (job, done) => {
  doSomeLengthyTask(data => {
formatThatData(data);
sendThatData(data);
done();
});
});

Sync Job:
agenda.define('say hello', job => {
console.log('Hello!');
});

define() acts like an assignment: if define(jobName, ...) is called multiple times (e.g. every time your script starts), the definition in the last call will overwrite the previous one. Thus, if you define the jobName only once in your code, it's safe for that call to execute multiple times.
Agenda supports automatic retry with configurable backoff strategies. When a job fails, it can be automatically rescheduled based on the backoff strategy you define.
import { Agenda, backoffStrategies } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
const agenda = new Agenda({
backend: new MongoBackend({ address: 'mongodb://localhost/agenda' })
});
// Define a job with exponential backoff
agenda.define(
'send email',
async job => {
await sendEmail(job.attrs.data);
},
{
backoff: backoffStrategies.exponential({
delay: 1000, // Start with 1 second
maxRetries: 5, // Retry up to 5 times
factor: 2, // Double the delay each time
jitter: 0.1 // Add 10% randomness to prevent thundering herd
})
}
);
// Retries at: ~1s, ~2s, ~4s, ~8s, ~16s (then gives up)

Agenda provides three built-in backoff strategies:
Same delay between each retry attempt.
import { constant } from 'agenda';
agenda.define('my-job', handler, {
backoff: constant({
delay: 5000, // 5 seconds between each retry
maxRetries: 3 // Retry up to 3 times
})
});
// Retries at: 5s, 5s, 5s

Delay increases by a fixed amount each retry.
import { linear } from 'agenda';
agenda.define('my-job', handler, {
backoff: linear({
delay: 1000, // Start with 1 second
increment: 2000, // Add 2 seconds each retry (default: same as delay)
maxRetries: 4,
maxDelay: 10000 // Cap at 10 seconds
})
});
// Retries at: 1s, 3s, 5s, 7s

Delay multiplies by a factor each retry. Best for rate-limited APIs.
import { exponential } from 'agenda';
agenda.define('my-job', handler, {
backoff: exponential({
delay: 100, // Start with 100ms
factor: 2, // Double each time (default: 2)
maxRetries: 5,
maxDelay: 30000, // Cap at 30 seconds
jitter: 0.2 // Add 20% randomness
})
});
// Retries at: ~100ms, ~200ms, ~400ms, ~800ms, ~1600ms

For common use cases, Agenda provides preset strategies:
import { backoffStrategies } from 'agenda';
// Aggressive: Fast retries for transient failures
// 100ms, 200ms, 400ms (3 retries in ~700ms)
agenda.define('quick-job', handler, {
backoff: backoffStrategies.aggressive()
});
// Standard: Balanced approach (default recommendation)
// ~1s, ~2s, ~4s, ~8s, ~16s with 10% jitter (5 retries)
agenda.define('normal-job', handler, {
backoff: backoffStrategies.standard()
});
// Relaxed: Gentle backoff for rate-limited APIs
// ~5s, ~15s, ~45s, ~135s with 10% jitter (4 retries)
agenda.define('api-job', handler, {
backoff: backoffStrategies.relaxed()
});

You can define your own backoff logic by providing a function:
agenda.define('custom-job', handler, {
backoff: (context) => {
// context contains: { attempt, error, jobName, jobData }
// Return delay in milliseconds, or null to stop retrying
if (context.attempt > 3) return null;
// Fibonacci-like sequence
const fibDelays = [1000, 1000, 2000, 3000, 5000];
return fibDelays[context.attempt - 1];
}
});

Use combine() to chain multiple strategies:
import { combine, constant, exponential } from 'agenda';
agenda.define('complex-job', handler, {
backoff: combine(
// First 2 retries: quick constant delay
(ctx) => ctx.attempt <= 2 ? 100 : null,
// Then switch to exponential
(ctx) => {
if (ctx.attempt > 5) return null;
return 1000 * Math.pow(2, ctx.attempt - 3);
}
)
});

Use when() to retry only for specific errors:
import { when, exponential } from 'agenda';
agenda.define('api-job', handler, {
backoff: when(
// Only retry on timeout or rate limit errors
(ctx) =>
ctx.error.message.includes('timeout') ||
ctx.error.message.includes('rate limit'),
exponential({ delay: 1000, maxRetries: 3 })
)
);

Listen for retry events to monitor job behavior:
// When a job is scheduled for retry
agenda.on('retry', (job, details) => {
console.log(`Job ${job.attrs.name} retry #${details.attempt}`);
console.log(` Next run: ${details.nextRunAt}`);
console.log(` Delay: ${details.delay}ms`);
console.log(` Error: ${details.error.message}`);
});
// Job-specific retry event
agenda.on('retry:send email', (job, details) => {
metrics.increment('email.retries');
});
// When all retries are exhausted
agenda.on('retry exhausted', (error, job) => {
console.log(`Job ${job.attrs.name} failed after ${job.attrs.failCount} attempts`);
alertOps(job, error);
});
// Job-specific exhaustion
agenda.on('retry exhausted:critical-job', (error, job) => {
// Move to dead letter queue, send alert, etc.
});

| Option | Type | Default | Description |
|---|---|---|---|
| `delay` | number | 1000 | Initial delay in milliseconds |
| `maxRetries` | number | 3 | Maximum retry attempts |
| `maxDelay` | number | Infinity | Maximum delay cap |
| `jitter` | number | 0 | Randomness factor (0-1) |
| `factor` | number | 2 | Multiplier for exponential backoff |
| `increment` | number | delay | Amount to add for linear backoff |
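To see how these options interact, here is a plain-JS sketch of the exponential delay formula described above. `exponentialDelay` is an illustrative helper, not Agenda's internal implementation:

```javascript
// Sketch of exponential backoff: delay * factor^(attempt - 1), capped at
// maxDelay, with optional +/- jitter, stopping after maxRetries.
// Illustrative only -- not Agenda's internal code.
function exponentialDelay(attempt, opts = {}) {
  const { delay = 1000, factor = 2, maxRetries = 3, maxDelay = Infinity, jitter = 0 } = opts;
  if (attempt > maxRetries) return null; // null means "stop retrying"
  const base = Math.min(delay * Math.pow(factor, attempt - 1), maxDelay);
  const spread = base * jitter * (Math.random() * 2 - 1); // +/- jitter fraction
  return Math.round(base + spread);
}

console.log(exponentialDelay(1, { delay: 100, maxRetries: 5 })); // 100
console.log(exponentialDelay(3, { delay: 100, maxRetries: 5 })); // 400
console.log(exponentialDelay(4, { delay: 100 })); // null (past maxRetries)
```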
- failCount tracks attempts: Use `job.attrs.failCount` to see how many times a job has failed
- Backoff is per-definition: Set the backoff strategy when defining the job, not when scheduling it
- Repeating jobs can use backoff: If a repeating job (created with `every()`) has a backoff configured and fails, it will retry immediately rather than waiting for the next scheduled run
- Manual retry still works: You can still listen to `fail` events and manually reschedule if needed
Debouncing allows you to combine multiple rapid job submissions into a single execution. This is useful for scenarios like:
- Updating a search index after rapid document changes
- Syncing user data after multiple rapid updates
- Rate-limiting notifications
Debouncing uses the unique() constraint combined with a .debounce() modifier. When multiple saves occur for the same unique key within the debounce window, only one job execution happens.
Timeline: job.save() calls for same unique key
↓ ↓ ↓
T=0 T=2s T=4s T=9s
TRAILING (default):
nextRunAt: 5s → 7s → 9s executes→ ✓
Effect: Waits for "quiet period", runs once at end
LEADING:
nextRunAt: 0 → 0 → 0 executes→ ✓ (at T=0)
Effect: Runs immediately on first call, ignores rest during window
import { Agenda } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
const agenda = new Agenda({
backend: new MongoBackend({ address: 'mongodb://localhost/agenda' })
});
// Debounce job - execute 2s after last save
await agenda.create('updateSearchIndex', { entityType: 'products' })
.unique({ 'data.entityType': 'products' })
.debounce(2000)
.save();
// Multiple rapid calls → single execution after 2s quiet period
for (const change of rapidChanges) {
await agenda.create('updateSearchIndex', { entityType: 'products', change })
.unique({ 'data.entityType': 'products' })
.debounce(2000)
.save();
}
// → Executes once with the last change's data

The job executes after a quiet period. Each save resets the timer.
await agenda.create('syncUserActivity', { userId: 123 })
.unique({ 'data.userId': 123 })
.debounce(5000) // Wait 5s after last save
.save();

The job executes immediately on first call. Subsequent calls within the window are ignored.
await agenda.create('sendNotification', { channel: '#alerts' })
.unique({ 'data.channel': '#alerts' })
.debounce(60000, { strategy: 'leading' })
.save();
// → First call executes immediately, subsequent calls within 60s are ignored

With trailing strategy, maxWait guarantees execution within a maximum time even if saves keep coming.
await agenda.create('syncUserActivity', { userId: 123 })
.unique({ 'data.userId': 123 })
.debounce(5000, { maxWait: 30000 })
.save();
// → Even with continuous saves, job runs within 30s

| Option | Type | Default | Description |
|---|---|---|---|
| `delay` | number | - | Debounce window in milliseconds (required, first argument) |
| `strategy` | `'trailing'` \| `'leading'` | `'trailing'` | When to execute the job |
| `maxWait` | number | - | Max time before forced execution (trailing only) |
- Requires `unique()` constraint: Debounce identifies which jobs to combine using the unique key
- Without `unique()`: Each save creates a new job (no debouncing occurs)
- Persistence: Debounce state is stored in the database, surviving process restarts
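The trailing/leading semantics above can be sketched as a small nextRunAt update. This is illustrative only; `debounceNextRunAt` is a hypothetical helper, not Agenda's code:

```javascript
// Sketch of how trailing vs. leading debounce decide nextRunAt on each save.
// `existing` is the already-saved job (or null on the first save in a window).
// Illustrative only -- not Agenda's internal implementation.
function debounceNextRunAt(existing, now, delay, opts = {}) {
  const { strategy = 'trailing', maxWait = Infinity } = opts;
  if (!existing) {
    return strategy === 'leading' ? now : now + delay; // first save in window
  }
  if (strategy === 'leading') return existing.nextRunAt; // ignore later saves
  // trailing: push the run out, but never past firstSaveAt + maxWait
  return Math.min(now + delay, existing.firstSaveAt + maxWait);
}

// Trailing, 5000ms window: saves at t=0 and t=2000
let job = { firstSaveAt: 0, nextRunAt: debounceNextRunAt(null, 0, 5000) };
console.log(job.nextRunAt); // 5000
job.nextRunAt = debounceNextRunAt(job, 2000, 5000);
console.log(job.nextRunAt); // 7000

// Leading: first save runs immediately
console.log(debounceNextRunAt(null, 0, 5000, { strategy: 'leading' })); // 0
```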
For convenience, use nowDebounced() to create a debounced job in one call:
// Equivalent to create().unique().debounce().save()
await agenda.nowDebounced(
'updateSearchIndex',
{ entityType: 'products' },
{ 'data.entityType': 'products' }, // unique query
{ delay: 2000 } // debounce options
);

By default, one-time jobs remain in the database after completion. If you want to automatically remove them after they succeed, use the removeOnComplete option.
Set removeOnComplete in the Agenda constructor to apply to all jobs:
const agenda = new Agenda({
backend: new MongoBackend({ address: 'mongodb://localhost/agenda' }),
removeOnComplete: true
});

With this setting, any one-time job (i.e. a job with no nextRunAt after completion) will be removed from the database after it succeeds. Recurring jobs are never removed, and failed jobs are always kept.
You can override the global setting for specific job types via define():
// Global removeOnComplete is false (default), but this job type opts in
agenda.define('send-welcome-email', async job => {
await sendEmail(job.attrs.data.to);
}, { removeOnComplete: true });
// Global removeOnComplete is true, but this job type opts out
agenda.define('audit-log', async job => {
await writeAuditLog(job.attrs.data);
}, { removeOnComplete: false });

- Only one-time jobs: Recurring jobs (created with `every()`) are never removed, since they always have a `nextRunAt`
- Only on success: Failed jobs are always kept in the database regardless of the setting
- Events fire first: The `complete` and `success` events are emitted before the job is removed, so listeners can still access job data
- Safe removal: If the removal fails (e.g. due to a database error), the error is logged but does not affect the processing loop
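The cleanup rules above amount to a simple predicate. This sketch is illustrative; `shouldRemove` is a hypothetical helper, not Agenda's internal code:

```javascript
// Sketch of the auto-cleanup decision: a job is removed only if removal is
// enabled (per-definition setting overriding the global flag), it is a
// one-time job (no nextRunAt), and it succeeded. Illustrative only.
function shouldRemove(job, perJobSetting, globalSetting) {
  const remove = perJobSetting !== undefined ? perJobSetting : globalSetting;
  const isOneTime = job.nextRunAt == null; // recurring jobs keep a nextRunAt
  return Boolean(remove && isOneTime && !job.failed);
}

console.log(shouldRemove({ nextRunAt: null, failed: false }, undefined, true)); // true
console.log(shouldRemove({ nextRunAt: new Date(), failed: false }, undefined, true)); // false (recurring)
console.log(shouldRemove({ nextRunAt: null, failed: true }, undefined, true)); // false (failed jobs are kept)
console.log(shouldRemove({ nextRunAt: null, failed: false }, false, true)); // false (per-job opt-out)
```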
Agenda can persist structured job lifecycle events (start, success, fail, complete, retry, etc.) to the backend's database. This is useful for auditing, debugging, and monitoring — events can be queried programmatically via agenda.getLogs() or viewed in Agendash.
Logging is disabled by default and must be explicitly enabled via the logging option.
import { Agenda } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
// Enable — log all jobs using the backend's built-in logger
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
logging: true
});
// Enable — opt-in per job (logger active, but jobs are NOT logged by default)
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
logging: { default: false }
});
agenda.define('important', handler, { logging: true }); // this job IS logged
agenda.define('noisy', handler); // this job is NOT logged
// Enable — custom logger (e.g., log to Postgres while using Mongo for storage)
import { PostgresJobLogger } from '@agendajs/postgres-backend';
const pgLogger = new PostgresJobLogger({ pool: myPool });
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
logging: pgLogger
});
// Enable — custom logger + opt-in per job
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
logging: { logger: pgLogger, default: false }
});

| Value | Logger used | Jobs logged by default |
|---|---|---|
| `false` / omitted | none | n/a |
| `true` | backend's built-in | all |
| `JobLogger` instance | the provided logger | all |
| `{ default: false }` | backend's built-in | none (opt-in per job) |
| `{ logger: JobLogger }` | the provided logger | all |
| `{ logger: JobLogger, default: false }` | the provided logger | none (opt-in per job) |
Each job definition can override the global default:
agenda.define('important-job', handler, { logging: true }); // always logged
agenda.define('noisy-job', handler, { logging: false }); // never logged
agenda.define('default-job', handler); // follows global default

// Get recent logs for a specific job
const { entries, total } = await agenda.getLogs({
jobName: 'myJob',
limit: 100,
sort: 'desc'
});
// Filter by level, event, time range
const { entries } = await agenda.getLogs({
level: 'error',
event: ['fail', 'retry:exhausted'],
from: new Date('2025-01-01'),
to: new Date()
});
// Clear old logs
await agenda.clearLogs({ to: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) });

Each backend provides a standalone JobLogger that can be used independently. This lets you store jobs in one backend while logging to another:
import { RedisJobLogger } from '@agendajs/redis-backend';
import Redis from 'ioredis';
const logger = new RedisJobLogger({ redis: new Redis('redis://localhost:6379') });
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
logging: logger
});

Available standalone loggers:

- `MongoJobLogger` from `@agendajs/mongo-backend` - stores in a MongoDB collection (`agenda_logs`)
- `PostgresJobLogger` from `@agendajs/postgres-backend` - stores in a PostgreSQL table (`agenda_logs`)
- `RedisJobLogger` from `@agendajs/redis-backend` - stores in Redis sorted sets + hashes
The following lifecycle events are recorded:
| Event | Level | Description |
|---|---|---|
| `start` | info | Job started executing |
| `success` | info | Job completed successfully |
| `fail` | error | Job threw an error |
| `complete` | info | Job finished (regardless of outcome) |
| `retry` | warn | Job scheduled for retry after failure |
| `retry:exhausted` | error | All retry attempts exhausted |
| `locked` | debug | Job was locked for processing |
| `expired` | warn | Job lock expired (timed out) |
Runs job name at the given interval. Optionally, data and options can be passed in.
Every creates a job of type single, which means that it will only create one
job in the database, even if that line is run multiple times. This lets you put
it in a file that may get run multiple times, such as webserver.js which may
reboot from time to time.
interval can be a human-readable format String, a cron format String, or a Number.
data is an optional argument that will be passed to the processing function
under job.attrs.data.
options is an optional argument containing:
- `timezone`: Timezone for cron expressions (e.g., `'America/New_York'`)
- `skipImmediate`: If `true`, skip the immediate first run
- `forkMode`: If `true`, run in a forked child process
- `startDate`: `Date` or string - job won't run before this date
- `endDate`: `Date` or string - job won't run after this date
- `skipDays`: Array of days to skip (0=Sunday, 1=Monday, ..., 6=Saturday)
In order to use this argument, data must also be specified.
Returns the job.
agenda.define('printAnalyticsReport', async job => {
const users = await User.doSomethingReallyIntensive();
processUserData(users);
console.log('I print a report!');
});
agenda.every('15 minutes', 'printAnalyticsReport');

With date constraints (weekdays only, within a date range):
await agenda.every('1 hour', 'business-metrics', { type: 'hourly' }, {
startDate: new Date('2024-06-01'),
endDate: new Date('2024-12-31'),
skipDays: [0, 6], // Skip weekends
timezone: 'America/New_York'
});

Optionally, name could be an array of job names, which is convenient for scheduling different jobs for the same interval.
agenda.every('15 minutes', ['printAnalyticsReport', 'sendNotifications', 'updateUserRecords']);

In this case, every returns an array of jobs.
Schedules a job to run name once at a given time. when can be a Date or a
String such as tomorrow at 5pm.
data is an optional argument that will be passed to the processing function
under job.attrs.data.
options is an optional argument containing:
- `startDate`: `Date` or string - job won't run before this date
- `endDate`: `Date` or string - job won't run after this date (sets `nextRunAt` to `null`)
- `skipDays`: Array of days to skip (0=Sunday, 1=Monday, ..., 6=Saturday)
Returns the job.
agenda.schedule('tomorrow at noon', 'printAnalyticsReport', { userCount: 100 });

With date constraints:
// Schedule for Saturday, but skip weekends - will run on Monday instead
await agenda.schedule('next saturday', 'weekday-task', { id: 123 }, {
skipDays: [0, 6] // Skip weekends
});

Optionally, name can be an array of job names, similar to the every method.
agenda.schedule('tomorrow at noon', [
'printAnalyticsReport',
'sendNotifications',
'updateUserRecords'
]);

In this case, schedule returns an array of jobs.
Schedules a job to run name once immediately.
data is an optional argument that will be passed to the processing function
under job.attrs.data.
Returns the job.
agenda.now('do the hokey pokey');

Returns an instance of a jobName with data. This does NOT save the job in
the database. See below to learn how to manually work with jobs.
const job = agenda.create('printAnalyticsReport', { userCount: 100 });
await job.save();
console.log('Job successfully saved');

Lets you query (then sort, limit and skip the result) all of the jobs in agenda's job database. These are full mongodb-native find, sort, limit and skip commands. See mongodb-native's documentation for details.
const jobs = await agenda.jobs({ name: 'printAnalyticsReport' }, { data: -1 }, 3, 1);
// Work with jobs (see below)

Cancels any jobs matching the passed mongodb-native query, and removes them from the database. Returns a Promise resolving to the number of cancelled jobs, or rejecting on error.
const numRemoved = await agenda.cancel({ name: 'printAnalyticsReport' });

This functionality can also be achieved by first retrieving all the jobs from the database using agenda.jobs(), looping through the resulting array and calling job.remove() on each. It is however preferable to use agenda.cancel() for this use case, as this ensures the operation is atomic.
Disables any jobs matching the passed mongodb-native query, preventing any matching jobs from being run by the Job Processor.
const numDisabled = await agenda.disable({ name: 'pollExternalService' });

Similar to agenda.cancel(), this functionality can be achieved with a combination of agenda.jobs() and job.disable().
Enables any jobs matching the passed mongodb-native query, allowing any matching jobs to be run by the Job Processor.
const numEnabled = await agenda.enable({ name: 'pollExternalService' });

Similar to agenda.cancel(), this functionality can be achieved with a combination of agenda.jobs() and job.enable().
Removes all jobs in the database without defined behaviors. Useful if you change a definition name and want to remove old jobs. Returns a Promise resolving to the number of removed jobs, or rejecting on error.
IMPORTANT: Do not run this before you finish defining all of your jobs. If you do, you will nuke your database of jobs.
const numRemoved = await agenda.purge();

To get agenda to start processing jobs from the database you must start it. This
will schedule an interval (based on processEvery) to check for new jobs and
run them. You can also stop the queue.
Starts the job queue processing, checking every processEvery interval for new jobs.
Must be called after configuring processEvery, and before any job scheduling (e.g. every).
Stops the job queue processing. Unlocks currently running jobs.
This is very useful for graceful shutdowns: currently running/grabbed jobs are abandoned and unlocked so that other job queues can grab them, or so they can be picked up again should this job queue restart. Here is an example of how to do a graceful shutdown.
async function graceful() {
await agenda.stop();
process.exit(0);
}
process.on('SIGTERM', graceful);
process.on('SIGINT', graceful);

Waits for all currently running jobs to finish before stopping the job queue processing. Unlike stop(), this method does not unlock jobs - it lets them complete their work.
This is useful for graceful shutdowns where you want to ensure all in-progress work finishes before the process exits.
async function graceful() {
await agenda.drain();
process.exit(0);
}
process.on('SIGTERM', graceful);
process.on('SIGINT', graceful);

With timeout - useful when you need to shut down within a time limit (e.g., cloud platforms like Heroku give 30 seconds):
async function graceful() {
const result = await agenda.drain(30000); // 30 second timeout
if (result.timedOut) {
console.log(`Shutdown timeout: ${result.running} jobs still running`);
}
process.exit(0);
}

With AbortSignal - for external control over the drain operation:
const controller = new AbortController();
// Abort drain after 30 seconds
setTimeout(() => controller.abort(), 30000);
const result = await agenda.drain({ signal: controller.signal });
if (result.aborted) {
console.log(`Drain aborted: ${result.running} jobs still running`);
}

DrainResult - drain() returns information about what happened:
interface DrainResult {
completed: number; // jobs that finished during drain
running: number; // jobs still running (if timed out or aborted)
timedOut: boolean; // true if timeout was reached
aborted: boolean; // true if signal was aborted
}

Comparison of stop() vs drain():
| Method | Running Jobs | New Jobs | Use Case |
|---|---|---|---|
| stop() | Unlocks immediately | Stops accepting | Quick shutdown, jobs picked up by other workers |
| drain() | Waits for completion | Stops accepting | Graceful shutdown, ensure work finishes |
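The timeout behaviour of drain() can be sketched in plain JavaScript. This is an illustrative pattern, not Agenda's actual implementation; the function name and shape are assumptions for the sketch:

```javascript
// Illustrative sketch of drain-with-timeout (not Agenda's internals):
// wait for the currently running jobs to settle, but give up after `timeout` ms.
async function drainWithTimeout(runningJobs, timeout) {
  let timedOut = false;
  const timer = new Promise(resolve =>
    setTimeout(() => { timedOut = true; resolve(); }, timeout)
  );
  // Whichever finishes first wins: all jobs settling, or the timer firing.
  await Promise.race([Promise.allSettled(runningJobs), timer]);
  return { timedOut };
}
```

Agenda's real drain() additionally reports completed and running counts, as described by the DrainResult interface above.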
Sometimes you may want to have multiple node instances / machines process from the same queue. Agenda supports a locking mechanism to ensure that multiple queues don't process the same job.
You can configure the locking mechanism by specifying lockLifetime as an
interval when defining the job.
agenda.define(
'someJob',
(job, cb) => {
// Do something in 10 seconds or less...
},
{ lockLifetime: 10000 }
);

This will ensure that no other job processor (this one included) attempts to run the job again for the next 10 seconds. If you have a particularly long running job, you will want to specify a longer lockLifetime.
By default it is 10 minutes. Typically you shouldn't have a job that runs for 10 minutes, so this is really insurance should the job queue crash before the job is unlocked.
When a job is finished (i.e. the returned promise resolves/rejects or done is
specified in the signature and done() is called), it will automatically unlock.
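The lock bookkeeping described above boils down to a simple predicate. The following is an illustrative sketch (not Agenda's actual code); the function name is an assumption:

```javascript
// A job's lock is considered expired once lockedAt + lockLifetime has passed,
// at which point another processor may pick the job up again.
function isLockExpired(lockedAt, lockLifetime, now = Date.now()) {
  if (!lockedAt) return true; // never locked, free to grab
  return now - lockedAt.getTime() >= lockLifetime;
}
```

With the default lockLifetime of 10 minutes, a crashed worker's jobs become grabbable again at most 10 minutes after they were locked.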
A job instance has many instance methods. All mutating methods must be followed
with a call to await job.save() in order to persist the changes to the database.
Specifies an interval on which the job should repeat. The job runs both at the time it is defined and at each configured interval, i.e. "run now and then at every interval".
interval can be a human-readable format String, a cron format String, or a Number.
options is an optional argument containing:
options.timezone: should be a string as accepted by moment-timezone and is considered when using an interval in the cron string format.
options.skipImmediate: true | false (default). Setting this to true will skip the immediate run; the first run will occur only at the configured interval.
job.repeatEvery('10 minutes');
await job.save();

job.repeatEvery('3 minutes', {
skipImmediate: true
});
await job.save();

job.repeatEvery('0 6 * * *', {
timezone: 'America/New_York'
});
await job.save();

Specifies a time when the job should repeat. Possible values include human-readable time strings such as '3:30pm'.
job.repeatAt('3:30pm');
await job.save();

Specifies the next time at which the job should run.
job.schedule('tomorrow at 6pm');
await job.save();

Sets the start date for the job. The job will not run before this date. If nextRunAt is computed to be before startDate, it will be adjusted to startDate.
job.startDate(new Date('2024-06-01'));
// Or with a string
job.startDate('2024-06-01T00:00:00Z');
await job.save();

Sets the end date for the job. The job will not run after this date. If nextRunAt would be after endDate, it will be set to null and the job stops running.
job.endDate(new Date('2024-12-31'));
// Or with a string
job.endDate('2024-12-31T23:59:59Z');
await job.save();

Sets the days of the week to skip. The job will not run on these days. Days are specified as an array of numbers where 0 = Sunday, 1 = Monday, ..., 6 = Saturday.
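The day-skipping rule can be sketched as a small helper. This is illustrative only (Agenda applies the rule internally when computing nextRunAt), and the helper name is an assumption:

```javascript
// Advance a date forward until it lands on a day that is not skipped.
// skipDays uses the same numbering as job.skipDays: 0 = Sunday ... 6 = Saturday.
function nextAllowedDay(date, skipDays) {
  const next = new Date(date);
  while (skipDays.includes(next.getDay())) {
    next.setDate(next.getDate() + 1); // move to the next calendar day
  }
  return next;
}
```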
// Skip weekends
job.skipDays([0, 6]);
await job.save();

// Skip Monday and Wednesday
job.skipDays([1, 3]);
await job.save();

Combining date constraints:
const job = agenda.create('business-report', { type: 'daily' });
job.startDate('2024-06-01')
.endDate('2024-12-31')
.skipDays([0, 6]) // Skip weekends
.repeatEvery('1 day', { timezone: 'America/New_York' });
await job.save();

Specifies the priority weighting of the job. Can be a number or a string from
the above priority table.
job.priority('low');
await job.save();

Ensures that only one instance of this job exists with the specified properties.
options is an optional argument which can overwrite the defaults. It can take
the following:
- insertOnly: boolean - will prevent any properties from persisting if the job already exists. Defaults to false.
job.unique({ 'data.type': 'active', 'data.userId': '123', nextRunAt: date });
await job.save();

IMPORTANT: To avoid high CPU usage by MongoDB, make sure to create an index on the used fields, like data.type and data.userId for the example above.
Configures debouncing for the job. Requires a unique() constraint to be set. See Job Debouncing for detailed documentation.
delay is the debounce window in milliseconds.
options is an optional argument:
- strategy: 'trailing' (default) or 'leading' - when to execute the job
- maxWait: number - maximum time before forced execution (trailing only)
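The difference between the two strategies can be sketched with ordinary timers. This is illustrative only: Agenda implements debouncing at the storage level so it works across processes, not with in-process timers like this sketch:

```javascript
// trailing: run once after `delay` ms of silence (the last call's args win).
// leading: run immediately on the first call, then suppress until quiet.
function makeDebouncer(fn, delay, { strategy = 'trailing' } = {}) {
  let timer = null;
  return (...args) => {
    if (strategy === 'leading') {
      if (timer === null) fn(...args); // fire on the leading edge
      clearTimeout(timer);
      timer = setTimeout(() => { timer = null; }, delay);
    } else {
      clearTimeout(timer); // restart the quiet window
      timer = setTimeout(() => { timer = null; fn(...args); }, delay);
    }
  };
}
```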
job.unique({ 'data.userId': 123 });
job.debounce(5000); // 5 second debounce
await job.save();
// With options
job.unique({ 'data.channel': '#alerts' });
job.debounce(60000, { strategy: 'leading' });
await job.save();

Sets job.attrs.failedAt to now, and sets job.attrs.failReason to reason.
Optionally, reason can be an error, in which case job.attrs.failReason will
be set to error.message.
job.fail('insufficient disk space');
// or
job.fail(new Error('insufficient disk space'));
await job.save();

Runs the given job and calls callback(err, job) upon completion. Normally
you never need to call this manually.
job.run((err, job) => {
console.log("I don't know why you would need to do this...");
});

Saves the job.attrs into the database. Returns a Promise resolving to a Job instance, or rejecting on error.
try {
await job.save();
console.log('Successfully saved job to collection');
} catch (e) {
console.error('Error saving job to collection');
}

Removes the job from the database. Returns a Promise resolving to the number of jobs removed, or rejecting on error.
try {
await job.remove();
console.log('Successfully removed job from collection');
} catch (e) {
console.error('Error removing job from collection');
}

Disables the job. Upcoming runs won't execute.
Enables the job if it got disabled before. Upcoming runs will execute.
Resets the lock on the job. Useful to indicate that the job hasn't timed out when you have very long running jobs. The call returns a promise that resolves when the job's lock has been renewed.
agenda.define('super long job', async job => {
await doSomeLongTask();
await job.touch();
await doAnotherLongTask();
await job.touch();
await finishOurLongTasks();
});

An instance of an agenda will emit the following events:
- start - called just before a job starts
- start:job name - called just before the specified job starts
agenda.on('start', job => {
console.log('Job %s starting', job.attrs.name);
});

- complete - called when a job finishes, regardless of whether it succeeds or fails
- complete:job name - called when a job finishes, regardless of whether it succeeds or fails
agenda.on('complete', job => {
console.log(`Job ${job.attrs.name} finished`);
});

- success - called when a job finishes successfully
- success:job name - called when a job finishes successfully
agenda.on('success:send email', job => {
console.log(`Sent Email Successfully to ${job.attrs.data.to}`);
});

- fail - called when a job throws an error
- fail:job name - called when a job throws an error
agenda.on('fail:send email', (err, job) => {
console.log(`Job failed with error: ${err.message}`);
});

- retry - called when a job is scheduled for automatic retry (requires a backoff strategy)
- retry:job name - called when a specific job is scheduled for retry
agenda.on('retry', (job, details) => {
// details: { attempt, delay, nextRunAt, error }
console.log(`Retrying ${job.attrs.name} in ${details.delay}ms (attempt ${details.attempt})`);
});

- retry exhausted - called when a job has exhausted all retry attempts
- retry exhausted:job name - called when a specific job exhausts retries
agenda.on('retry exhausted:send email', (err, job) => {
console.log(`Email job failed permanently after ${job.attrs.failCount} attempts`);
});

Jobs are run in first-in-first-out order with respect to priority (so they run in the order they were scheduled, with higher-priority jobs taking precedence).
For example, if we have two jobs named "send-email" queued (both with the same priority), and the first job is queued at 3:00 PM and second job is queued at 3:05 PM with the same priority value, then the first job will run first if we start to send "send-email" jobs at 3:10 PM. However if the first job has a priority of 5 and the second job has a priority of 10, then the second will run first (priority takes precedence) at 3:10 PM.
The default sort order is { nextRunAt: 'asc', priority: 'desc' } and can be changed through the sort option when configuring the backend.
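That sort order corresponds to a comparator like the following. This is an illustrative sketch of the documented semantics, not Agenda's internals:

```javascript
// Orders jobs the way the default sort does: earlier nextRunAt first,
// and among jobs due at the same time, higher numeric priority first.
function compareJobs(a, b) {
  const byTime = a.nextRunAt - b.nextRunAt; // nextRunAt ascending
  if (byTime !== 0) return byTime;
  return b.priority - a.priority;           // priority descending
}
```

Sorting an array of job attributes with jobs.sort(compareJobs) reproduces the processing order described above.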
Agenda will lock jobs one by one, setting the lockedAt property in MongoDB and creating an instance of the Job class, which it caches in the _lockedJobs array. This defaults to having no limit, but can be managed using lockLimit. If all jobs will need to be run before agenda's next interval (set via agenda.processEvery), then agenda will attempt to lock all jobs.
Agenda will also pull jobs from _lockedJobs and into _runningJobs. These jobs are actively being worked on by user code, and this is limited by maxConcurrency (defaults to 20).
If you have multiple instances of agenda processing the same job definition with a fast repeat time you may find they get unevenly loaded. This is because they will compete to lock as many jobs as possible, even if they don't have enough concurrency to process them. This can be resolved by tweaking the maxConcurrency and lockLimit properties.
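The interaction between lockLimit and maxConcurrency can be illustrated with a small calculation. This is a sketch of the idea only, not Agenda's code, and the function name is an assumption:

```javascript
// With no lockLimit (0 = unlimited), a worker may lock far more jobs than
// maxConcurrency allows it to run, starving other workers of work.
function distribute(dueJobs, { lockLimit = 0, maxConcurrency = 20 } = {}) {
  const locked = lockLimit === 0 ? dueJobs : Math.min(dueJobs, lockLimit);
  const running = Math.min(locked, maxConcurrency);
  return { locked, running, idleLocked: locked - running };
}
```

With 100 due jobs and the default maxConcurrency of 20, an unbounded worker locks all 100 but runs only 20; setting lockLimit near maxConcurrency leaves the remainder unlocked for other workers.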
Agenda doesn't have a preferred project structure and leaves it to the user to choose how they would like to use it. That being said, you can check out the example project structure below.
Agenda itself does not have a web interface built in, but we do offer a stand-alone web interface, Agendash:
Agenda v6 supports multiple storage backends. Choose based on your infrastructure:
| Backend | Package | Best For |
|---|---|---|
| MongoDB | Built-in (agenda) | Default choice, excellent for most use cases. Strong consistency, flexible queries. |
| PostgreSQL | @agendajs/postgres-backend | Teams already using PostgreSQL. LISTEN/NOTIFY provides real-time notifications without additional infrastructure. |
| Redis | @agendajs/redis-backend | High-throughput scenarios. Fast Pub/Sub notifications. Configure persistence for durability. |
MongoDB remains the default and most battle-tested backend. PostgreSQL is great when you want to consolidate on a single database. Redis offers the lowest latency for job notifications but requires proper persistence configuration (RDB/AOF) for durability.
Each backend provides different capabilities for storage and real-time notifications:
| Backend | Storage | Notifications | Notes |
|---|---|---|---|
| MongoDB (MongoBackend) | ✅ | ❌ | Storage only. Use with an external notification channel for real-time. |
| PostgreSQL (PostgresBackend) | ✅ | ✅ | Full backend. Uses LISTEN/NOTIFY for notifications. |
| Redis (RedisBackend) | ✅ | ✅ | Full backend. Uses Pub/Sub for notifications. |
| InMemoryNotificationChannel | ❌ | ✅ | Notifications only. For single-process/testing. |
| RedisNotificationChannel | ❌ | ✅ | Notifications only. For multi-process with MongoDB storage. |
You can combine MongoDB storage with a separate notification channel for real-time job processing:
import { Agenda } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
import { RedisBackend } from '@agendajs/redis-backend';
// MongoDB for storage + Redis for real-time notifications
const redisBackend = new RedisBackend({ connectionString: 'redis://localhost:6379' });
const agenda = new Agenda({
backend: new MongoBackend({ mongo: db }),
notificationChannel: redisBackend.notificationChannel
});
// Or use PostgreSQL notifications with MongoDB storage
import { PostgresBackend } from '@agendajs/postgres-backend';
const pgBackend = new PostgresBackend({ connectionString: 'postgres://...' });
const agendaWithPg = new Agenda({
backend: new MongoBackend({ mongo: db }),
notificationChannel: pgBackend.notificationChannel
});

This is useful when you want to keep MongoDB for job storage (proven durability, flexible queries) but need faster real-time notifications across multiple processes.
See Backend Configuration for setup details.
Ultimately Agenda can work from a single job queue across multiple machines, node processes, or forks. If you are interested in having more than one worker, Bars3s has written up a fantastic example of how one might do it:
const cluster = require('cluster');
const os = require('os');
const httpServer = require('./app/http-server');
const jobWorker = require('./app/job-worker');
const jobWorkers = [];
const webWorkers = [];
if (cluster.isMaster) {
const cpuCount = os.cpus().length;
// Create a worker for each CPU
for (let i = 0; i < cpuCount; i += 1) {
addJobWorker();
addWebWorker();
}
cluster.on('exit', (worker, code, signal) => {
if (jobWorkers.indexOf(worker.id) !== -1) {
console.log(
`job worker ${worker.process.pid} exited (signal: ${signal}). Trying to respawn...`
);
removeJobWorker(worker.id);
addJobWorker();
}
if (webWorkers.indexOf(worker.id) !== -1) {
console.log(
`http worker ${worker.process.pid} exited (signal: ${signal}). Trying to respawn...`
);
removeWebWorker(worker.id);
addWebWorker();
}
});
} else {
if (process.env.web) {
console.log(`start http server: ${cluster.worker.id}`);
// Initialize the http server here
httpServer.start();
}
if (process.env.job) {
console.log(`start job server: ${cluster.worker.id}`);
// Initialize the Agenda here
jobWorker.start();
}
}
function addWebWorker() {
webWorkers.push(cluster.fork({ web: 1 }).id);
}
function addJobWorker() {
jobWorkers.push(cluster.fork({ job: 1 }).id);
}
function removeWebWorker(id) {
webWorkers.splice(webWorkers.indexOf(id), 1);
}
function removeJobWorker(id) {
jobWorkers.splice(jobWorkers.indexOf(id), 1);
}

Agenda is configured by default to automatically reconnect indefinitely, emitting an error event when no connection is available on each process tick, allowing you to restore the Mongo instance without having to restart the application.
However, if you are using an existing Mongo client
you'll need to configure the reconnectTries and reconnectInterval connection settings
manually, otherwise you'll find that Agenda will throw an error with the message "MongoDB connection is not recoverable,
application restart required" if the connection cannot be recovered within 30 seconds.
Agenda will only process jobs that it has definitions for. This allows you to selectively choose which jobs a given agenda will process.
Consider the following project structure, which allows us to share models with the rest of our code base, and specify which jobs a worker processes, if any at all.
- server.js
- worker.js
lib/
- agenda.js
controllers/
- user-controller.js
jobs/
- email.js
- video-processing.js
- image-processing.js
models/
- user-model.js
- blog-post.model.js
Sample job processor (e.g. jobs/email.js)
let email = require('some-email-lib'),
User = require('../models/user-model.js');
module.exports = function (agenda) {
agenda.define('registration email', async job => {
const user = await User.get(job.attrs.data.userId);
await email(user.email(), 'Thanks for registering', 'Thanks for registering ' + user.name());
});
agenda.define('reset password', async job => {
// Etc
});
// More email related jobs
};

lib/agenda.js
import { Agenda } from 'agenda';
import { MongoBackend } from '@agendajs/mongo-backend';
const agenda = new Agenda({
backend: new MongoBackend({
address: 'mongodb://localhost:27017/agenda-test',
collection: 'agendaJobs'
})
});
const jobTypes = process.env.JOB_TYPES ? process.env.JOB_TYPES.split(',') : [];

for (const type of jobTypes) {
  // dynamic import works for both ESM and CommonJS job modules
  const definition = await import(`./jobs/${type}.js`);
  (definition.default || definition)(agenda);
}

if (jobTypes.length) {
  await agenda.start(); // start() returns a promise, which should be handled appropriately
}

export default agenda;

lib/controllers/user-controller.js
import express from 'express';
import User from '../models/user-model.js';
import agenda from '../lib/agenda.js';

const app = express();
app.post('/users', (req, res, next) => {
const user = new User(req.body);
user.save(err => {
if (err) {
return next(err);
}
agenda.now('registration email', { userId: user.primary() });
res.send(201, user.toJson());
});
});

worker.js
import './lib/agenda.js';

Now you can do the following in your project:
node server.js

Fire up an instance with no JOB_TYPES, giving you the ability to schedule jobs,
but not wasting resources processing them.
JOB_TYPES=email node server.js

Allow your http server to process email jobs.
JOB_TYPES=email node worker.js

Fire up an instance that processes email jobs.
JOB_TYPES=video-processing,image-processing node worker.js

Fire up an instance that processes video-processing/image-processing jobs. Good for a heavy-hitting server.
If you think you have encountered a bug, please feel free to report it here:
Please provide us with as many details as possible, such as:
- Agenda version
- Environment (OSX, Linux, Windows, etc)
- Small description of what happened
- Any relevant stack trace
- Agenda logs (see below)
- OSX:
DEBUG="agenda:*" ts-node src/index.ts - Linux:
DEBUG="agenda:*" ts-node src/index.ts - Windows CMD:
set DEBUG=agenda:* - Windows PowerShell:
$env:DEBUG = "agenda:*"
While not necessary, attaching a text file with this debug information would be extremely useful in debugging certain issues and is encouraged.
Performance tuning is backend-specific. See the documentation for your backend:
- MongoDB: See @agendajs/mongo-backend for index recommendations
- PostgreSQL: See @agendajs/postgres-backend - indexes are created automatically by default
- Redis: See @agendajs/redis-backend
It's possible to start jobs in a child process. This helps, for example, to separate long-running processes from the main thread, so that if one process consumes too much memory and gets killed, it will not affect any others. To use this feature, several steps are required:

1. Create a childWorker helper. The subprocess has a completely separate context, so there are no database connections or anything else that can be shared. Therefore you have to ensure that all required connections and initializations are done here too. Furthermore, you also have to load the correct job definition so that agenda knows what code it must execute. Therefore three parameters are passed to the childWorker: name, jobId and the path to the job definition.
Example file can look like this:
childWorker.ts
import 'reflect-metadata';
import { Agenda } from 'agenda'; // required for `new Agenda` below
process.on('message', message => {
if (message === 'cancel') {
process.exit(2);
} else {
console.log('got message', message);
}
});
(async () => {
const mongooseConnection = await connectToDatabase(); // connect to database
// do other required initializations
// get process arguments (name, jobId and path to agenda definition file)
const [, , name, jobId, agendaDefinition] = process.argv;
// set fancy process title
process.title = `${process.title} (sub worker: ${name}/${jobId})`;
// initialize Agenda in "forkedWorker" mode
const agenda = new Agenda({
name: `subworker-${name}`,
forkedWorker: true,
mongo: mongooseConnection.db as any
});
// wait for db connection
await agenda.ready;
if (!name || !jobId) {
throw new Error(`invalid parameters: ${JSON.stringify(process.argv)}`);
}
// load job definition
/** in this case the file is for example ../some/path/definitions.js
with a content like:
export default (agenda: Agenda, definitionOnly = false) => {
agenda.define(
'some job',
async (notification: {
attrs: { data: { dealId: string; orderId: TypeObjectId<IOrder> } };
}) => {
// do something
}
);
if (!definitionOnly) {
// here you can create scheduled jobs or other things
}
});
*/
if (agendaDefinition) {
const loadDefinition = await import(agendaDefinition);
(loadDefinition.default || loadDefinition)(agenda, true);
}
// run this job now
await agenda.runForkedJob(jobId);
// disconnect database and exit
process.exit(0);
})().catch(err => {
console.error('err', err);
if (process.send) {
process.send(JSON.stringify(err));
}
process.exit(1);
});
Ensure that you only define job definitions during this step, otherwise you create overhead (e.g. if you create new jobs inside the definition files). That's why the definition file is called with agenda and a second parameter set to true: when this parameter is true, no jobs are initialized (created, scheduled, etc.).
2. To use this, you have to enable it on a job by setting forkMode to true:
const job = agenda.create('some job', { meep: 1 });
job.forkMode(true);
await job.save();

- Agenda was originally created by @rschmukler.
- Agendash was originally created by @joeframbach.
- These days Agenda has a great community of contributors around it. Join us!