## New jobs:handle-schedules bin script
Similar to `payload jobs:run`, this PR adds a new
`jobs:handle-schedules` bin script that only handles scheduling.
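For example (flags assumed to mirror `payload jobs:run`; see the `--cron` section below):
```sh
pnpm payload jobs:handle-schedules --cron "*/5 * * * *" --queue myQueue
```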
## Allows jobs:run bin script to handle scheduling
Similarly to how [payload
autoRun](https://payloadcms.com/docs/jobs-queue/queues#cron-jobs)
handles both running and scheduling jobs by default, you can now set the
`payload jobs:run` bin script to also handle scheduling. This is opt-in:
```sh
# Both schedules jobs according to the configuration and runs them
pnpm payload jobs:run --cron "*/5 * * * *" --queue myQueue --handle-schedules
```
## Cron schedules for all bin scripts
Previously, only the `payload jobs:run` bin script accepted a cron flag.
The new `payload jobs:handle-schedules` script would have needed the same
logic to handle its own cron flag.
Instead of duplicating that logic, cron handling now happens before we
determine which script to run. This is simpler and avoids duplicate
code.
**This allows all other bin scripts (including custom ones) to use the
`--cron` flag**, enabling cool use-cases like scheduling your own custom
scripts - no additional config required!
Example:
```sh
pnpm payload run ./myScript.ts --cron "0 * * * *"
```
Video Example:
https://github.com/user-attachments/assets/4ded738d-2ef9-43ea-8136-f47f913a7ba8
## More reliable job system crons
When using autorun or `--cron`, if one cron run took longer than the
cron interval, the next run would start before the first one finished.
This is especially dangerous when running jobs through a bin script,
potentially causing race conditions, as the first cron run takes longer
due to payload initialization overhead (only the first run pays this
cost; consecutive runs use the cached payload). Now, consecutive cron
runs wait for the previous one to finish, using Croner's
`{ protect: true }` option.
This change will affect both autorun and bin scripts.
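For illustration, a minimal sketch of what Croner's `protect` option does (not Payload's actual code; the callback is hypothetical):
```ts
import { Cron } from 'croner'

// With `protect: true`, Croner prevents a new run from starting while a
// previous run of the same job is still executing, so a slow run (e.g. the
// first one, which pays the payload initialization cost) cannot overlap
// the next scheduled trigger.
new Cron('*/5 * * * * *', { protect: true }, async () => {
  // Simulate work that takes longer than the cron interval
  await new Promise((resolve) => setTimeout(resolve, 10_000))
})
```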
## Cleanup
- Centralized payload instance cleanup (`payload.destroy()`) for all bin
scripts
- The `getPayload` function arguments were not properly typed. Arguments
like `disableOnInit: true` were already supported, but the type did not
reflect that. This PR simplifies the type and makes it more accurate.
## Fixes
- `allQueues` argument for `payload jobs:run` was not respected
---
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.
Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs
## API Example
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    // ...
    // Use `cron` instead if you're not on serverless. With `manual`, you need
    // to call /api/payload-jobs/handleSchedules or payload.jobs.handleSchedules
    // at regular intervals yourself.
    scheduler: 'manual',
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule`
              // would be more error-prone if this were an array
              beforeSchedule: async (args) => {
                // `defaultBeforeSchedule` verifies that there are no jobs already
                // scheduled or processing. You can override this behavior by not
                // calling it, e.g. to allow a maximum of 3 scheduled jobs in the
                // queue instead of 1, or to add further conditions.
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Updates the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                    (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      },
    ],
  },
})
```
---
Previously, we always initialized cron jobs when calling
`getPayload` or `payload.init`.
This is undesirable in bin scripts - we don't want cron jobs to start
triggering db calls while, for example, an initial migration is running
via `payload migrate`. This previously led to a race condition that,
with job autorun enabled, occasionally triggered the following error:
```
DrizzleQueryError: Failed query: select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5
params: true,false,2025-07-10T21:25:03.002Z,autorunSecond,100
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:74:11)
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
... 6 lines matching cause stack trace ...
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
query: `select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5`,
params: [ true, false, '2025-07-10T21:25:03.002Z', 'autorunSecond', 100 ],
cause: error: relation "payload_jobs" does not exist
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/pg@8.16.3/node_modules/pg/lib/client.js:545:17
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:161:13
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:72:12)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
at find (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/find/findMany.ts:162:19)
at Object.updateMany (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/updateJobs.ts:26:16)
at updateJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/utilities/updateJob.ts:102:37)
at runJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/operations/runJobs/index.ts:181:25)
at Object.run (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/localAPI.ts:137:12)
at N.fn (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/index.ts:866:13)
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
length: 112,
severity: 'ERROR',
code: '42P01',
detail: undefined,
hint: undefined,
position: '406',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_relation.c',
line: '1449',
routine: 'parserOpenTable'
}
}
```
This PR makes running crons opt-in using a new `cron` flag. By default,
no cron jobs will be created.
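A minimal sketch of the opt-in (assuming the flag is passed to `getPayload`; check the PR diff for the exact shape):
```ts
import { getPayload } from 'payload'
import config from '@payload-config'

// Without `cron: true`, no cron jobs are registered - safe for bin scripts
// like `payload migrate` that must not trigger autorun queries.
const payload = await getPayload({ config, cron: true })
```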
By default, calling `payload.jobs.run()` incorrectly runs all jobs
from all queues. The `npx payload jobs:run` bin script behaves the same
way.
The `payload-jobs/run` endpoint runs jobs from the `default` queue,
which is the correct behavior.
This PR does the following:
- Change `payload.jobs.run()` to only run jobs from the `default` queue
by default
- Change the `npx payload jobs:run` bin script to only run jobs from the
`default` queue by default
- Add new `allQueues` / `--all-queues` arguments/query params/flags for
the Local API, REST API, and bin script to allow you to run all jobs
from all queues (see the example below)
- Clarify the docs
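For example (Local API; the bin script equivalent of the new flag is `npx payload jobs:run --all-queues`):
```ts
// New default: only jobs from the `default` queue are run
await payload.jobs.run()

// Opt in to running jobs from all queues
await payload.jobs.run({ allQueues: true })
```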
### What?
Adds documentation to demonstrate how to make the internal
`payload-jobs` collection visible in the Admin Panel using the
`jobsCollectionOverrides` option.
### Why?
By default, the jobs collection is hidden from the UI. However, for
debugging or monitoring purposes—especially during development—some
teams may want direct access to view and inspect job entries.
### How?
A code snippet is added under the **"Jobs Queue > Overview"** page
showing how to override the collection config to set `admin.hidden =
false`.
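The added snippet looks roughly like this (see the linked docs page for the canonical version):
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    // ...
    jobsCollectionOverrides: ({ defaultJobsCollection }) => ({
      ...defaultJobsCollection,
      admin: {
        ...defaultJobsCollection.admin,
        hidden: false, // expose the payload-jobs collection in the Admin Panel
      },
    }),
  },
})
```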
---
**Before:**
No mention of how to expose the `payload-jobs` collection in the
documentation.
**After:**
Clear section and code example on how to enable visibility for the jobs
collection.
---
Supersedes: [#12321](https://github.com/payloadcms/payload/pull/12321)
Related Discord thread: [Payload Jobs
UI](https://discord.com/channels/967097582721572934/1367918548428652635)
Co-authored-by: Willian Souza <willian.souza@compasso.com.br>
This clarifies that `jobs.autoRun` only *runs* already-queued jobs. It does not queue the jobs for you.
Also adds an e2e test, as this functionality had no e2e coverage.
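For context, a typical `autoRun` config (a sketch; see the docs for all options) - note that it only picks up jobs that were already queued:
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    // ...
    autoRun: [
      {
        cron: '*/5 * * * *', // every five minutes
        queue: 'default',
        limit: 10, // max number of jobs to run per invocation
      },
    ],
  },
})
```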
Previously, jobs were executed in FIFO order on MongoDB, and LIFO on
Postgres, with no way to configure this behavior.
This PR makes FIFO the default on both MongoDB and Postgres and
introduces the following new options to configure the processing order
globally or on a queue-by-queue basis:
- a `processingOrder` property to the jobs config
- a `processingOrder` argument to `payload.jobs.run()` to override
what's set in the jobs config
It also adds a new `sequential` option to `payload.jobs.run()`, which
can be useful for debugging.
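A hedged sketch of the per-invocation override (the `processingOrder` value shape is assumed for illustration):
```ts
// Override the configured processing order for a single invocation and
// process jobs one at a time, which can make debugging easier:
await payload.jobs.run({
  processingOrder: '-createdAt', // e.g. newest first, for this run only
  sequential: true,
})
```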
This adds new `payload.jobs.cancel` and `payload.jobs.cancelByID` methods that allow you to cancel already-running jobs, or prevent queued jobs from running.
While it's not possible to cancel a function mid-execution, this will stop job execution the next time the job makes a request to the db, which happens after every task.
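Usage looks roughly like this (the `where` query follows Payload's standard query syntax):
```ts
// Cancel a single job by its ID (e.g. one returned from payload.jobs.queue())
await payload.jobs.cancelByID({ id: queuedJob.id })

// Cancel all jobs matching a query, e.g. everything in a given queue
await payload.jobs.cancel({
  where: {
    queue: { equals: 'myQueue' },
  },
})
```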
Adds a `shouldAutoRun` property to the `jobs` config, giving you
fine-grained control over whether jobs should be run. This is helpful in
cases where you have many horizontally scaled compute instances and
only one instance should be responsible for running jobs.
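A minimal sketch (the env var name is illustrative):
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    // ...
    // Only run jobs on the instance designated via an env var. The function
    // can also be async and receives the payload instance as an argument.
    shouldAutoRun: () => process.env.RUN_JOBS === 'true',
  },
})
```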
By default, if a task has passed previously and a workflow is re-run,
the task will not be re-run. Instead, the output from the previous task
run will be returned. This is to prevent unnecessary re-runs of tasks
that have already passed.
This PR allows you to configure this behavior through the
`retries.shouldRestore` property. This property accepts a boolean or a
function for more complex restore behaviors.
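For example (a sketch; the surrounding task config is abbreviated):
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    // ...
    tasks: [
      {
        slug: 'myTask',
        retries: {
          attempts: 3,
          // `false` forces this task to re-run on workflow re-runs, even if a
          // previous run passed. A function can be supplied instead for more
          // complex, per-job restore decisions.
          shouldRestore: false,
        },
        // ...
      },
    ],
  },
})
```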
## Fix default retries
By default, if no `retries` property has been set, jobs/tasks should
not be retried. Previously this was not the case: the `maxRetries`
variable was `undefined`, causing jobs to retry endlessly. This PR
defaults it to `0`.
Additionally, this fixes some undesirable behavior of the workflow
retries property. Workflow retries now act as **maximum**,
workflow-level retries. Only tasks that do not have a retry property set
will inherit the workflow-level retries.
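A sketch of the inheritance behavior described above (task handlers omitted):
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    // ...
    workflows: [
      {
        slug: 'myWorkflow',
        // Acts as a maximum, workflow-level retry count. Only tasks without
        // their own `retries` property inherit this value.
        retries: 2,
        // ...
      },
    ],
  },
})
```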
## Fix error messages
Previously, you could encounter error messages containing undefined
values, such as an undefined slug and an undefined retry count.
The reason is that `job.workflowSlug` was always used for the error
messages. However, if you queue a task directly, without a workflow,
`job.workflowSlug` is undefined and `job.taskSlug` should be used
instead.
This PR also gets rid of the second undefined value by ensuring that
`maxRetries` is never undefined.
### What?
Fixes a formatting issue that prevents
payloadcms.com/docs/beta/jobs-queue/overview from displaying properly.
There were a couple of `<strong>` tags in the `jobs-queue/overview` docs
that were missing their closing `</strong>` tags.
Adds a jobs queue to Payload.
- [x] Docs, w/ examples for Vercel Cron, additional services
- [x] Type the `job` using GeneratedTypes in `JobRunnerArgs`
(@AlessioGr)
- [x] Write the `runJobs` function
- [x] Allow for some type of `payload.runTask`
- [x] Open up a new bin script for running jobs
- [x] Determine strategy for runner endpoint to either await jobs
successfully or return early and stay open until job work completes
(serverless ramifications here)
- [x] Allow for job runner to accept how many jobs to run in one
invocation
- [x] Make a Payload local API method for creating a new job easily
(payload.createJob) or similar which is strongly typed (@AlessioGr)
- [x] Make `payload.runJobs` or similar (@AlessioGr)
- [x] Write tests for retrying up to max retries for a given step
- [x] Write tests for dynamic import of a runner
The shape of the config should permit the definition of steps separate
from the job workflows themselves.
```js
const config = {
  // Not sure if we need this property anymore
  queues: {},
  // A job is an instance of a workflow, stored in the DB
  // and triggered by something at some point
  jobs: {
    // Be able to override the jobs collection
    collectionOverrides: () => {},
    // Workflows are groups of tasks that handle
    // the flow from task to task.
    // When defined on the config, they are considered predefined workflows -
    // BUT in the future, we'll allow for UI-based workflow definition as well.
    workflows: [
      {
        slug: 'job-name',
        // Temporary name for this -
        // should be able to pass a function
        // or a path to it for Node to dynamically import
        controlFlowInJS: '/my-runner.js',
        // Temporary name as well -
        // should eventually be able to define workflows
        // in the UI (meaning they need to be serializable to JSON).
        // Should not be able to define both control flows.
        controlFlowInJSON: [
          {
            task: 'myTask',
            next: {
              // etc
            },
          },
        ],
        // Workflows take input,
        // which is a group of fields
        input: [
          {
            name: 'post',
            type: 'relationship',
            relationTo: 'posts',
            maxDepth: 0,
            required: true,
          },
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
      },
    ],
    // Tasks are defined separately as isolated functions
    // that can be retried on fail
    tasks: [
      {
        slug: 'myTask',
        retries: 2,
        // Each task takes input,
        // used to auto-type the task function args
        input: [
          {
            name: 'post',
            type: 'relationship',
            relationTo: 'posts',
            maxDepth: 0,
            required: true,
          },
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        // Each task produces output,
        // used to auto-type the function signature
        output: [
          {
            name: 'success',
            type: 'checkbox',
          },
        ],
        onSuccess: () => {},
        onFail: () => {},
        run: myRunner,
      },
    ],
  },
}
```
### `payload.createJob`
This function should allow for the creation of jobs based on either a
workflow (group of tasks) or an individual task.
To create a job using a workflow:
```js
const job = await payload.createJob({
  // Accept the `name` of a workflow so we can match to either a
  // code-based workflow OR a workflow defined in the DB.
  // Should auto-type the input.
  workflowName: 'myWorkflow',
  input: {
    // typed to the args of the workflow by name
  },
})
```
To create a job using a task:
```js
const job = await payload.createJob({
  // Accept the `name` of a task
  task: 'myTask',
  input: {
    // typed to the args of the task by name
  },
})
```
---------
Co-authored-by: Alessio Gravili <alessio@gravili.de>
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>