### What?
This PR adds atomic array operations ($append and $remove) for
relationship fields with `hasMany: true` across all database adapters.
These operations allow developers to add or remove specific items from
relationship arrays without replacing the entire array.
New API:
```ts
// Append relationships (prevents duplicates)
await payload.db.updateOne({
  collection: 'posts',
  id: 'post123',
  data: {
    categories: { $append: ['featured', 'trending'] },
  },
})

// Remove specific relationships
await payload.db.updateOne({
  collection: 'posts',
  id: 'post123',
  data: {
    tags: { $remove: ['draft', 'private'] },
  },
})

// Works with polymorphic relationships
await payload.db.updateOne({
  collection: 'posts',
  id: 'post123',
  data: {
    relatedItems: {
      $append: [
        { relationTo: 'categories', value: 'category-id' },
        { relationTo: 'tags', value: 'tag-id' },
      ],
    },
  },
})
```
### Why?
Currently, updating relationship arrays requires replacing the entire array, which means fetching the existing data before every update. This adds implementation effort and potential for errors when using the API, particularly for bulk updates.
### How?
#### Cross-Adapter Features:
- Polymorphic relationships: full support for `relationTo: ['collection1', 'collection2']`
- Localized relationships: Proper locale handling when fields are
localized
- Duplicate prevention: Ensures `$append` doesn't create duplicates
- Order preservation: Appends to end of array maintaining order
- Bulk operations: Works with `updateMany` for bulk updates
#### MongoDB Implementation:
- Converts `$append` to native `$addToSet` (which, unlike `$push`, prevents duplicates)
- Converts `$remove` to native `$pull` (targeted removal)
#### Drizzle Implementation (Postgres/SQLite):
- Uses optimized batch `INSERT` with duplicate checking for `$append`
- Uses targeted `DELETE` queries for `$remove`
- Implements timestamp-based ordering for performance
- Handles locale columns conditionally based on schema
### Limitations
The current implementation exists only at the database-adapter level, not (yet) in the Local API. Local API support will be implemented separately.
### What
Fixes a bug where collection docs paginate incorrectly when `timestamps:
false` is set — the same docs were appearing across multiple pages.
### Why
The `find` query sanitizes the `sort` parameter.
- With `timestamps: true`, it defaults to `createdAt`
- With `timestamps: false`, it falls back to `_id`
That logic is correct, but in `find.ts` we always passed `timestamps:
true`, ignoring the collection config. With the right sort applied,
pagination works as expected.
### How
`find.ts` now passes `collectionConfig.timestamps` to
`buildSortParam()`, ensuring the correct sort field is chosen.
---
Fixes #13888
### What?
Adds a new `experimental.localizeStatus` config option, set to `false`
by default. When `true`, the admin panel will display the document
status based on the *current locale* instead of the *latest* overall
status. Also updates the edit view to only show a `changed` status when
`autosave` is enabled.
### Why?
Showing the status for the current locale is more accurate and useful in multi-locale setups. This will eventually become the default behavior; for now, it can be opted into by setting `experimental.localizeStatus: true` in the Payload config. The option will be deprecated in v4.
### How?
When `localizeStatus` is `true`, we store the localized status in a new
`localeStatus` field group within version data. The admin panel then
reads from this field to display the correct status for the current
locale.
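A minimal sketch of opting in, assuming the option sits at the top level of the Payload config as described:
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...the rest of your config
  experimental: {
    // Show document status per locale instead of the latest overall status
    localizeStatus: true,
  },
})
```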
---------
Co-authored-by: Jarrod Flesch <jarrodmflesch@gmail.com>
Currently, attempting to run tasks in parallel will result in DB errors.
## Solution
The problem was caused by inefficient db update calls. After each
task completes, we need to update the log array in the payload-jobs
collection. On postgres, that's a different table.
Currently, the update works the following way:
1. Nuke the table
2. Re-insert every single row, including the new one
This will throw db errors if multiple processes start doing that.
Additionally, due to conflicts, new log rows may be lost.
This PR makes use of the [new db `$push` operation](https://github.com/payloadcms/payload/pull/13453) we recently added to atomically push a new log row to the database in a single round-trip.
This not only reduces the number of db round trips (=> faster job queue system) but also allows multiple tasks to perform this db operation in parallel, without conflicts.
## Problem
**Example:**
```ts
export const fastParallelTaskWorkflow: WorkflowConfig<'fastParallelTask'> = {
  slug: 'fastParallelTask',
  handler: async ({ inlineTask }) => {
    const taskFunctions = []
    for (let i = 0; i < 20; i++) {
      const idx = i + 1
      taskFunctions.push(async () => {
        return await inlineTask(`parallel task ${idx}`, {
          input: {
            test: idx,
          },
          task: () => {
            return {
              output: {
                taskID: idx.toString(),
              },
            }
          },
        })
      })
    }
    await Promise.all(taskFunctions.map((f) => f()))
  },
}
```
On SQLite, this would throw the following error:
```bash
Caught error Error: UNIQUE constraint failed: payload_jobs_log.id
at Object.next (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/libsql@0.4.7/node_modules/libsql/index.js:335:20)
at Statement.all (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/libsql@0.4.7/node_modules/libsql/index.js:360:16)
at executeStmt (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5/node_modules/@libsql/client/lib-cjs/sqlite3.js:285:34)
at Sqlite3Client.execute (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5/node_modules/@libsql/client/lib-cjs/sqlite3.js:101:16)
at /Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/libsql/session.ts:288:58
at LibSQLPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/sqlite-core/session.ts:79:18)
at LibSQLPreparedQuery.values (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/libsql/session.ts:286:21)
at LibSQLPreparedQuery.all (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/libsql/session.ts:214:27)
at QueryPromise.all (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/sqlite-core/query-builders/insert.ts:402:26)
at QueryPromise.execute (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/sqlite-core/query-builders/insert.ts:414:40)
at QueryPromise.then (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/query-promise.ts:31:15) {
rawCode: 1555,
code: 'SQLITE_CONSTRAINT_PRIMARYKEY',
libsqlError: true
}
```
---
This PR adds **atomic** `$push` **support for array fields**. It makes
it possible to safely append new items to arrays, which is especially
useful when running tasks in parallel (like job queues) where multiple
processes might update the same record at the same time. By handling
pushes atomically, we avoid race conditions and keep data consistent -
especially on postgres, where the current implementation would nuke the
entire array table before re-inserting every single array item.
The feature works for both localized and unlocalized arrays, and
supports pushing either single or multiple items at once.
This PR is a requirement for reliably running parallel tasks in the job
queue - see https://github.com/payloadcms/payload/pull/13452.
Alongside documenting `$push`, this PR also adds documentation for
`$inc`.
## Changes to `updatedAt` behavior
https://github.com/payloadcms/payload/pull/13335 allows us to override the `updatedAt` property instead of the db always setting it to the current date.
However, we are not able to skip updating the `updatedAt` property completely. This means usage of `$push` results in two postgres db calls:
1. set `updatedAt` in the main row
2. append the array row in the arrays table
This PR changes the behavior to only automatically set `updatedAt` if it's undefined. If you explicitly set it to `null`, the db adapter now skips setting `updatedAt` automatically.
=> This allows us to use `$push` in a single db call.
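A minimal sketch of the two combined (the `log` entry shape here is illustrative, not the exact internal call):
```ts
// Append one log row atomically; updatedAt: null opts out of the automatic
// timestamp write, so the whole update is a single db call.
await payload.db.updateOne({
  collection: 'payload-jobs',
  id: job.id,
  data: {
    updatedAt: null,
    log: {
      $push: {
        taskSlug: 'parallel-task', // illustrative entry
        state: 'succeeded',
        completedAt: new Date().toISOString(),
      },
    },
  },
})
```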
## Usage Examples
### Pushing a single item to an array
```ts
import mongoose from 'mongoose'

const updatedPost = await payload.db.updateOne({
  data: {
    array: {
      $push: {
        text: 'some text 2',
        id: new mongoose.Types.ObjectId().toHexString(),
      },
    },
  },
  collection: 'posts',
  id: post.id,
})
```
### Pushing a single item to a localized array
```ts
const updatedPost = await payload.db.updateOne({
  data: {
    arrayLocalized: {
      $push: {
        en: {
          text: 'some text 2',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
        es: {
          text: 'some text 2 es',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
      },
    },
  },
  collection: 'posts',
  id: post.id,
})
```
### Pushing multiple items to an array
```ts
const updatedPost = await payload.db.updateOne({
  data: {
    array: {
      $push: [
        {
          text: 'some text 2',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
        {
          text: 'some text 3',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
      ],
    },
  },
  collection: 'posts',
  id: post.id,
})
```
### Pushing multiple items to a localized array
```ts
const updatedPost = await payload.db.updateOne({
  data: {
    arrayLocalized: {
      $push: {
        en: {
          text: 'some text 2',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
        es: [
          {
            text: 'some text 2 es',
            id: new mongoose.Types.ObjectId().toHexString(),
          },
          {
            text: 'some text 3 es',
            id: new mongoose.Types.ObjectId().toHexString(),
          },
        ],
      },
    },
  },
  collection: 'posts',
  id: post.id,
})
```
---
Fixes https://github.com/payloadcms/payload/issues/13433. Testing
release: `3.54.0-internal.90cf7d5`
Previously, when calling `getPayload`, you would always use the same,
cached payload instance within a single process, regardless of the
arguments passed to the `getPayload` function. This resulted in the
following issues - both are fixed by this PR:
- If your frontend calls `getPayload` without `cron: true` and you host the Payload Admin Panel in the same process, crons will not be enabled even when you visit the admin panel (which calls `getPayload` with `cron: true`). This breaks jobs autorun depending on which page you visit first - admin panel or frontend.
- Within the same process, you are unable to use `getPayload` twice for
different instances of payload with different Payload Configs.
On postgres, you can get around this by manually calling `new BasePayload()`, which skips the cache. This did not work on mongoose
though, as mongoose was caching the models on a global singleton (this
PR addresses this).
In order to bust the cache for different Payload Configs, this PR introduces a new, optional `key` property on `getPayload`.
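A minimal usage sketch, assuming two separate config files (names illustrative):
```ts
import { getPayload } from 'payload'

import configA from './payload.config.a'
import configB from './payload.config.b'

// Different keys bust the per-process cache, so both instances can coexist:
const payloadA = await getPayload({ config: configA, key: 'instance-a' })
const payloadB = await getPayload({ config: configB, key: 'instance-b' })
```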
## Mongoose - disable using global singleton
This PR refactors the Payload Mongoose adapter to stop relying on the
global mongoose singleton. Instead, each adapter instance now creates
and manages its own scoped Connection object.
### Motivation
Previously, calling `getPayload()` more than once in the same process
would throw `Cannot overwrite model` errors because models were compiled
into the global singleton. This prevented running multiple Payload
instances side-by-side, even when pointing at different databases.
### Changes
- Replace usage of `mongoose.connect()` / `mongoose.model()` with
instance-scoped `createConnection()` and `connection.model()`.
- Ensure models, globals, and versions are compiled per connection, not
globally.
- Add proper `close()` handling on `this.connection` instead of
`mongoose.disconnect()`.
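A sketch of the instance-scoped pattern (shapes illustrative, not the adapter's exact code):
```ts
import { createConnection, Schema } from 'mongoose'

// Each adapter instance owns its Connection instead of the global singleton:
const connection = createConnection('mongodb://localhost/instance-a')

// Models compile per connection, so a second Payload instance in the same
// process no longer throws "Cannot overwrite model":
const Posts = connection.model('posts', new Schema({ title: String }))

// Teardown is scoped as well:
await connection.close() // instead of mongoose.disconnect()
```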
---
Previously, when manually setting `createdAt` or `updatedAt` in a
`payload.db.*` or `payload.*` operation, the value may have been
ignored. In some cases it was _impossible_ to change the `updatedAt`
value, even when using direct db adapter calls. On top of that, this
behavior sometimes differed between db adapters. For example, mongodb
did accept `updatedAt` when calling `payload.db.updateVersion` -
postgres ignored it.
This PR changes this behavior to consistently respect `createdAt` and
`updatedAt` values for `payload.db.*` operations.
For `payload.*` operations, this also works with the following
exception:
- update operations do not respect `updatedAt`, as updates are commonly
performed by spreading the old data, e.g. `payload.update({ data:
{...oldData} })` - in these cases, we usually still want the `updatedAt`
to be updated. If you need to get around this, you can use the
`payload.db.updateOne` operation instead.
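A minimal sketch of setting `updatedAt` explicitly via the db adapter (values illustrative):
```ts
// Direct db adapter calls now respect explicitly provided timestamps:
await payload.db.updateOne({
  collection: 'posts',
  id: post.id,
  data: {
    updatedAt: new Date('2024-01-01T00:00:00.000Z').toISOString(),
  },
})
```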
---
### What?
The PR #12763 added significant improvements for third-party databases
that are compatible with the MongoDB API. While the original PR was
focused on Firestore, other databases like DocumentDB also benefit from
these compatibility features.
In particular, the aggregate JOIN strategy does not work on AWS
DocumentDB and thus needs to be disabled. The current PR aims to provide
this as a sensible default in the `compatibilityOptions` that are
provided by Payload out-of-the-box.
As a bonus, it also fixes a small typo from `compat(a)bility` to
`compat(i)bility`.
### Why?
Because our Payload instance, which is backed by AWS DocumentDB, crashes
upon trying to access any `join` field.
### How?
By adding the existing `useJoinAggregations` option with value `false` to the compatibility layer. Individual developers can still choose to override it in their own local config as needed.
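A sketch of what this enables, assuming a `documentdb` key on the exported options object:
```ts
import { compatibilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  db: mongooseAdapter({
    url: process.env.DATABASE_URI!,
    // Now includes useJoinAggregations: false out-of-the-box:
    ...compatibilityOptions.documentdb,
  }),
})
```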
Currently, with DocumentDB, instead of a friendly error like "Value must be unique", we see a generic "Something went wrong" message.
This PR fixes that by adding a fallback to parse the message instead of
using `error.keyValue` which doesn't exist for responses from
DocumentDB.
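A rough sketch of the fallback idea (helper and message format illustrative, not the exact code):
```ts
// DocumentDB duplicate-key errors lack `keyValue`, so fall back to parsing
// the offending index name out of the message text, e.g.
// "E11000 duplicate key error ... index: email_1 dup key: ..."
const extractDuplicateField = (error: {
  keyValue?: Record<string, unknown>
  message: string
}): string | undefined => {
  if (error.keyValue) {
    return Object.keys(error.keyValue)[0]
  }
  const match = error.message.match(/index:\s+(\w+)_1/)
  return match?.[1]
}
```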
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.
Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs
## API Example
```ts
export default buildConfig({
  // ...
  jobs: {
    // ...
    // Or `cron` if you're not using serverless. If `manual` is used, the user
    // needs to run /api/payload-jobs/handleSchedules or
    // payload.jobs.handleSchedules at regular intervals.
    scheduler: 'manual',
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule`
              // would be more error-prone if this was an array
              beforeSchedule: async (args) => {
                // Handles verifying that there are no jobs already scheduled or
                // processing. You can override this behavior by not calling
                // defaultBeforeSchedule, e.g. if you wanted to allow a maximum
                // of 3 scheduled jobs in the queue instead of 1, or add any
                // additional conditions
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Handles updating the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                    (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      },
    ],
  },
})
```
---
Adds a new operation `findDistinct` that returns the distinct values of a field for a given collection.
Example:
Assume you have a `posts` collection with multiple documents, some of which share the same title:
```js
// Example dataset (some titles appear multiple times)
[
  { title: 'title-1' },
  { title: 'title-2' },
  { title: 'title-1' },
  { title: 'title-3' },
  { title: 'title-2' },
  { title: 'title-4' },
  { title: 'title-5' },
  { title: 'title-6' },
  { title: 'title-7' },
  { title: 'title-8' },
  { title: 'title-9' },
]
```
You can now retrieve all unique title values using `findDistinct`:
```js
const result = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
})
console.log(result.values)
// Output:
// [
// 'title-1',
// 'title-2',
// 'title-3',
// 'title-4',
// 'title-5',
// 'title-6',
// 'title-7',
// 'title-8',
// 'title-9'
// ]
```
You can also limit the number of distinct results:
```js
const limitedResult = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  sortOrder: 'desc',
  limit: 3,
})
console.log(limitedResult.values)
// Output:
// [
//   'title-9',
//   'title-8',
//   'title-7'
// ]
```
You can also pass a `where` query to filter the documents.
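For example (the `category` field is illustrative):
```ts
// Distinct titles among only the documents matching the where query:
const filtered = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  where: { category: { equals: 'news' } },
})
```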
### What?
Adds four more arguments to the `mongooseAdapter`:
```typescript
useJoinAggregations?: boolean /* The big one */
useAlternativeDropDatabase?: boolean
useBigIntForNumberIDs?: boolean
usePipelineInSortLookup?: boolean
```
Also exports a new `compatabilityOptions` object from `@payloadcms/db-mongodb`, where each key is a mongo-compatible database and the value is the recommended `mongooseAdapter` settings for compatibility.
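A minimal sketch of applying these, assuming a `firestore` key on that object (the export keeps the original `compatability` spelling at this point):
```ts
import { compatabilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'

export const db = mongooseAdapter({
  url: process.env.DATABASE_URI!,
  // Recommended Firestore settings, e.g. useJoinAggregations: false:
  ...compatabilityOptions.firestore,
})
```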
### Why?
When using firestore and visiting
`/admin/collections/media/payload-folders`, we get:
```
MongoServerError: invalid field(s) in lookup: [let, pipeline], only lookup(from, localField, foreignField, as) is supported
```
Firestore doesn't support the full MongoDB aggregation API that Payload uses when building aggregations to populate join fields.
There are several other compatibility issues with Firestore:
- The invalid `pipeline` property is used in the `$lookup` aggregation
in `buildSortParams`
- Firestore only supports number IDs of type `Long`, but Mongoose
converts custom ID fields of type number to `Double`
- Firestore does not support the `dropDatabase` command
- Firestore does not support the `createIndex` command (not addressed in
this PR)
### How?
```typescript
useJoinAggregations?: boolean /* The big one */
```
When this is `false` we skip the `buildJoinAggregation()` pipeline and resolve the join fields through multiple queries. This can potentially be used with AWS DocumentDB and Azure Cosmos DB to support join fields, but I have not tested with either of these databases.
```typescript
useAlternativeDropDatabase?: boolean
```
When `true`, monkey-patch (replace) the `dropDatabase` function so that it calls `collection.deleteMany({})` on every collection instead of sending a single `dropDatabase` command to the database.
```typescript
useBigIntForNumberIDs?: boolean
```
When `true`, use `mongoose.Schema.Types.BigInt` for custom ID fields of type `number` which converts to a firestore `Long` behind the scenes
```typescript
usePipelineInSortLookup?: boolean
```
When `false`, modify the sort aggregation pipeline in `buildSortParams()` so that we don't use the `pipeline` property in the `$lookup` aggregation. This results in slightly worse performance when sorting by relationship properties.
### Limitations
This PR does not add support for transactions or creating indexes in firestore.
### Fixes
Fixed a bug (and added a test) where you weren't able to sort by multiple properties on a relationship field.
### Future work
1. Firestore supports simple `$lookup` aggregations but other databases might not. Could add a `useSortAggregations` property which can be used to disable aggregations in sorting.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
Based on https://github.com/payloadcms/payload/pull/13060, which should be merged first.
This PR adds the ability to update number fields atomically, which can be important with parallel writes. For now, we support this only via `payload.db.updateOne`.
For example:
```js
// increment by 10
const res = await payload.db.updateOne({
  data: {
    number: {
      $inc: 10,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})

// decrement by 3
const res2 = await payload.db.updateOne({
  data: {
    number: {
      $inc: -3,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})
```
Fixes https://github.com/payloadcms/payload/issues/13045
When `returning` is `false`, `updateOne` mutates the data object for write operations on the DB. This causes an issue when that data object is used later on, since all of the IDs are mutated to ObjectIDs and never transformed back into read IDs.
This fix ensures that the transform happens even when the result is not returned.
If you (using the MongoDB adapter) delete a block from the Payload config but still have some data with that block in the DB, you'd see an error like this in the admin panel:
```
Block with type "cta" was found in block data, but no block with that type is defined in the config for field with schema path pages.blocks
```
Now, we remove those "unknown" blocks at the DB adapter level.
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>
Previously, there were multiple ways to type a running job:
- `GeneratedTypes['payload-jobs']` - only works in an installed project; is `any` in the monorepo
- `BaseJob` - works everywhere, but does not incorporate generated types, which may include types for custom fields added to the jobs collection
- `RunningJob<>` - a more accurate version of `BaseJob`, but with the same problem
This PR deprecates all those types in favor of a new `Job` type.
Benefits:
- Works in both monorepo and installed projects. If no generated types
exist, it will automatically fall back to `BaseJob`
- Comes with an optional generic that can be used to narrow down
`job.input` based on the task / workflow slug. No need to use a separate
type helper like `RunningJob<>`
With this new type, I was able to replace every usage of
`GeneratedTypes['payload-jobs']`, `BaseJob` and `RunningJob<>` with the
simple `Job` type.
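A minimal sketch of the generic, assuming the type is exported from the `payload` package (the task slug is illustrative):
```ts
import type { Job } from 'payload'

// Without a generic, this behaves like BaseJob:
declare const anyJob: Job

// Narrowed by task slug - `input` is typed from that task's inputSchema:
declare const emailJob: Job<'sendEmail'>
emailJob.input // narrowed input type
```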
Additionally, this PR simplifies some of the logic used to run jobs.
Currently, we globally enable both DOM and Node.js types. While this
mostly works, it can cause conflicts - particularly with `fetch`. For
example, TypeScript may incorrectly allow browser-only properties (like
`cache`) and reject valid Node.js ones like `dispatcher`.
This PR disables DOM types for server-only packages like payload,
ensuring Node-specific typings are applied. This caught a few instances
of incorrect fetch usage that were previously masked by overlapping DOM
types.
This is not a perfect solution - packages that contain both server and
client code (like richtext-lexical or next) will still suffer from this
issue. However, it's an improvement in cases where we can cleanly
separate server and client types, like for the `payload` package which
is server-only.
## Use-case
This change enables https://github.com/payloadcms/payload/pull/12622 to
explore using node-native fetch + `dispatcher`, instead of `node-fetch`
+ `agent`.
Currently, TypeScript will incorrectly report that `dispatcher` is not a valid property for node-native fetch.