Field-level permissions were not handled correctly. If a field had
access control configured, its nested fields would incorrectly be
omitted from the version view.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210932060696919
Fields such as groups and arrays would not always reset errorPaths when
there were no more errors. The server and client state were not being
merged safely, and the client state always persisted when the server
sent back no errorPaths, i.e. iterable fields with fully valid
children. This change ensures errorPaths is defaulted to an empty array
if it is not present on the incoming field.
Likely a regression from
https://github.com/payloadcms/payload/pull/9388.
Adds an e2e test.
### What?
- Updated the `countOperation` to respect the `trash` argument.
### Why?
- Previously, `count` would incorrectly include trashed documents even
when `trash` was not specified.
- This change aligns `count` behavior with `find` and other operations,
providing accurate counts for normal and trashed documents.
### How?
- Applied `appendNonTrashedFilter` in `countOperation` to automatically
exclude soft-deleted docs when `trash: false` (default).
- Added `trash` argument support in Local API, REST API (`/count`
endpoints), and GraphQL (`count<Collection>` queries).
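For illustration, a minimal Local API sketch (the `posts` collection slug is hypothetical):
```ts
// By default, `count` now excludes soft-deleted documents.
const { totalDocs: activeCount } = await payload.count({ collection: 'posts' })

// Pass `trash: true` to include trashed documents in the count.
const { totalDocs: totalCount } = await payload.count({
  collection: 'posts',
  trash: true,
})
```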
### What?
- Added new end-to-end tests covering trash functionality for
auth-enabled collections (e.g., `users`).
- Implemented test cases for:
  - Display of the trash tab in the list view.
  - Trashing a user and verifying its appearance in the trash view.
  - Accessing the trashed user edit view.
  - Ensuring all auth fields are properly disabled in trashed state.
  - Restoring a trashed user and verifying its status.
### Why?
- To ensure that the trash (soft-delete) feature works consistently for
collections with `auth: true`.
- To prevent regressions in user management flows, especially around
disabling and restoring trashed users.
### How?
- Added a new `Auth enabled collection` test suite in the E2E `Trash`
tests.
Previously, a single run of the simplest job queue workflow (a single
task, no db calls by user code in the task - we're just testing db
system overhead) would result in **22 db roundtrips** on drizzle. This
PR reduces it to **17 db roundtrips** by doing the following:
- Modifies `db.updateJobs` to use the new optimized `upsertRow` function
if the update is simple
- Does not unnecessarily pass the job log to the final job update when
the workflow completes => allows using the optimized `upsertRow`
function, as only the main table is involved
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210888186878606
## What?
The Slate to Lexical migration script assumes that the depth of Slate
nodes matches the depth of the Lexical schema, which isn't necessarily
true. This pull request fixes this assumption by first checking for
children and unwrapping the text nodes.
## Why?
During my migration, I ran into a lot of copy-and-pasted rich text
containing list items with untyped nodes that have `children`. The
existing migration script assumed that since list items can't contain
paragraphs, all untyped nodes inside must be text nodes.
The result of the migration script was a lot of invalid text nodes with
`text: undefined` and all of the content in the `children` being
silently lost. Beyond the silent loss, the invalid text nodes caused the
Lexical editor to unmount with an error about accessing `0 of
undefined`, so those documents couldn't be edited.
This additionally makes the migration script more closely align with the
[recursive serialization logic recommendation from the Payload Slate
Rich Text
documentation](https://payloadcms.com/docs/rich-text/slate#generating-html).
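A minimal sketch of the unwrapping idea (node shape heavily simplified; the real script handles many more cases):
```ts
type SlateNode = {
  children?: SlateNode[]
  text?: string
  type?: string
}

// Untyped nodes that have children are containers, not text nodes:
// recurse into them and collect the actual text leaves.
const collectTextLeaves = (node: SlateNode): SlateNode[] => {
  if (!node.type && node.children) {
    return node.children.flatMap(collectTextLeaves)
  }
  return [node]
}
```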
## Visualization
### Slate
```txt
Slate rich text content
┣━┳━ Unordered list
┋ ┣━┳━ List item
┋ ┋ ┗━┳━ Generic (paragraph-like, untyped with children)
┋ ┋ ┣━━━ Text (untyped) `Hello `
┋ ┋ ┗━━━ Text (untyped) `World!`
[...]
```
### Lexical Before PR
```txt
Lexical rich text content (invalid)
┣━┳━ Unordered list
┋ ┣━┳━ List item
┋ ┋ ┗━━━ Invalid text (assumed the generic node was text, stopped processing children; the lost text cannot be recovered without restoring a backup of the Slate content and rerunning the script after this PR)
[...]
```
### Lexical After PR
```txt
Lexical rich text content
┣━┳━ Unordered list
┋ ┣━┳━ List item
┋ ┋ ┣━━━ Text `Hello `
┋ ┋ ┗━━━ Text `World!`
[...]
```
---------
Co-authored-by: German Jablonski <43938777+GermanJablo@users.noreply.github.com>
### What?
- Updated `TrashView` to pass `trash: true` as a dedicated prop instead
of embedding it in the `query` object.
- Modified `renderListView` to correctly merge `trash` and `where`
queries by using both `queryFromArgs` and `queryFromReq`.
- Ensured filtering (via `where`) works correctly in the trash view.
### Why?
Previously, the `trash: true` flag was injected into the `query` object,
and `renderListView` only used `queryFromArgs`.
This caused the `where` clause from filters (added by the
`WhereBuilder`) to be overridden, breaking filtering in the trash view.
### How?
- Introduced an explicit `trash` prop in `renderListView` arguments.
- Updated `TrashView` to pass `trash: true` separately.
- Updated `renderListView` to apply the `trash` filter in addition to
any `where` conditions.
### What?
Prevents the decrypted apiKey from being saved back to the database on
the auth refresh operation.
### Why?
References issue #13063: refreshing a token for a logged-in user
decrypted `apiKey` and wrote it back in plaintext, corrupting the user
record.
### How?
The user is now fetched with `db.findOne` instead of `findByID`,
preserving the encryption of the key when saved back to the database
using `db.updateOne`. The user record is then re-fetched using
`findByID`, allowing for the decrypted key to be provided in the
response.
### Tests
* ✅ keeps apiKey encrypted in DB after refresh
* ✅ returns user with decrypted apiKey after refresh
Fixes #13063
### What
- Filters cookies with the `payload-` prefix in `getExternalFile` by
default (if `externalFileHeaderFilter` is not used).
- Documents in `externalFileHeaderFilter` that the user should handle
removing the Payload cookie themselves.
### Why
In the Payload application, the `getExternalFile` function sends the
user's cookies to an external server when fetching media, inadvertently
exposing the user's session to that third-party service.
```ts
const headers = uploadConfig.externalFileHeaderFilter
  ? uploadConfig.externalFileHeaderFilter(Object.fromEntries(new Headers(req.headers)))
  : { cookie: req.headers?.get('cookie') };

const res = await fetch(fileURL, {
  credentials: 'include',
  headers,
  method: 'GET',
});
```
Although the
[externalFileHeaderFilter](https://payloadcms.com/docs/upload/overview#collection-upload-options)
function can strip sensitive cookies from the request, the default
config includes the session cookie, violating the secure-by-default
principle.
### How
- If `externalFileHeaderFilter` is not defined, any cookie beginning
with `payload-` is filtered out.
- Added 2 tests: one for the case where `externalFileHeaderFilter` is
defined and one for the case where it is not.
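A minimal sketch of the default behavior (helper name hypothetical):
```ts
// Strip any cookie whose name starts with `payload-` (e.g. the session
// token) before forwarding the user's headers to the external server.
const stripPayloadCookies = (headers: Record<string, string>): Record<string, string> => {
  const cookie = headers.cookie
  if (!cookie) return headers

  const filtered = cookie
    .split(';')
    .map((part) => part.trim())
    .filter((part) => !part.startsWith('payload-'))
    .join('; ')

  return { ...headers, cookie: filtered }
}
```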
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210561338171125
When grouping by a relationship field whose value is `null`, the list
view crashes.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210916642997992
Fixes #12975.
When editing autosave-enabled documents through the join field, the
document drawer closes unexpectedly on every autosave interval, making
it nearly impossible to use.
This is because as of #12842, the underlying relationship table
re-renders on every autosave event, remounting the drawer each time. The
fix is to lift the drawer out of the table's rendering tree and into the
join field itself. This way all rows share the same drawer, whose
rendering lifecycle is completely decoupled from the table's state.
Note: this mirrors how relationship fields achieve the same
functionality.
This PR also adds jsdocs to the `useDocumentDrawer` hook and strengthens
its types.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210906078627353
Catches list filter errors and prevents the list view from crashing when
attempting to search on fields the user does not have access to.
Instead, it shows the default "no results found" message.
### What?
- Fixed an issue where group-by enabled collections with `trash: true`
were not showing trashed documents in the collection’s trash view.
- Ensured that the `trash` query argument is properly passed to the
`findDistinct` call within `handleGroupBy`, allowing trashed documents
to be included in grouped list views.
### Why?
Previously, when viewing the trash view of a collection with both
**group-by** and **trash** enabled, trashed documents would not appear.
This was caused by the `trash` argument not being forwarded to
`findDistinct` in `handleGroupBy`, which resulted in empty or incorrect
group-by results.
### How?
- Passed the `trash` flag through all relevant `findDistinct` and `find`
calls in `handleGroupBy`.
- bumps next.js from 15.3.2 to 15.4.4 in the monorepo and templates.
It's important to run our tests against the latest Next.js version to
guarantee full compatibility.
- bumps playwright because of peer dependency conflict with next 15.4.4
- bumps react types because why not
https://nextjs.org/blog/next-15-4
As part of this upgrade, the functionality added by
https://github.com/payloadcms/payload/pull/11658 broke. This PR fixes it
by creating a wrapper around `React.isValidElement` that works for
Next.js 15.4.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210803039809808
Fixes #13191
- Renders a single html element for single error messages
- Preserves the ul structure for multiple errors
- Updates tests to check for both cases
Previously, `db.deleteMany` on postgres resulted in 2 roundtrips to the
database (find + delete with ids). This PR passes the where query
directly to the `deleteWhere` function, resulting in only one roundtrip
to the database (delete with where).
If the where query queries other tables (=> joins required), this falls
back to find + delete with ids. However, this is also more optimized
than before, as we now pass `select: { id: true }` to the findMany
query.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210871676349299
Previously, the Lexical editor was using px, and the JSX converter was
using rem. #12848 fixed the inconsistency by changing the editor to rem,
but it should have been the other way around, changing the JSX converter
to px.
You can see the latest explanation about why it should be 40px
[here](https://github.com/payloadcms/payload/issues/13130#issuecomment-3058348085).
In short, that's the default indentation all browsers use for lists.
This time I'm making sure to leave clear comments everywhere and a test
to avoid another regression.
Here is an image of what the e2e test looks like:
<img width="321" height="678" alt="image"
src="https://github.com/user-attachments/assets/8880c7cb-a954-4487-8377-aee17c06754c"
/>
The first part is the Lexical editor, the second is the JSX converter.
As you can see, the checkbox in JSX looks a little odd because it uses
an input checkbox (as opposed to a pseudo-element in the Lexical
editor). I thought about adding an inline style to move it slightly to
the left, but I found that browsers don't have a standard size for the
checkbox; it varies by browser and device.
That requires a little more thought; I'll address that in a future PR.
Fixes #13130
Previously, filtering by a polymorphic relationship inside an array,
group (unless the group's `name` is `version`), or tab caused
`QueryError: The following path cannot be queried:`.
### What?
This PR introduces complete trash (soft-delete) support. When a
collection is configured with `trash: true`, documents can now be
soft-deleted and restored via both the API and the admin panel.
```ts
import type { CollectionConfig } from 'payload'

const Posts: CollectionConfig = {
  slug: 'posts',
  trash: true, // <-- New collection config prop @default false
  fields: [
    {
      name: 'title',
      type: 'text',
    },
    // other fields...
  ],
}
```
### Why?
Soft deletes allow developers and admins to safely remove documents
without losing data immediately. This enables workflows like reversible
deletions, trash views, and auditing—while preserving compatibility with
drafts, autosave, and version history.
### How?
#### Backend
- Adds new `trash: true` config option to collections.
- When enabled:
  - A `deletedAt` timestamp is conditionally injected into the schema.
  - Soft deletion is performed by setting `deletedAt` instead of
removing the document from the database.
- Extends all relevant API operations (`find`, `findByID`, `update`,
`delete`, `versions`, etc.) to support a new `trash` param:
  - `trash: false` → excludes trashed documents (default)
  - `trash: true` → includes both trashed and non-trashed documents
  - To query **only trashed** documents, use `trash: true` with a
`where` clause like `{ deletedAt: { exists: true } }` (see the sketch
after this list)
- Enforces delete access control before allowing a soft delete via
`update` or `updateByID`.
- Disables version restoring on trashed documents (they must be
restored first).
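For example, a minimal Local API sketch (the `posts` collection slug is hypothetical):
```ts
// Fetch only trashed documents by combining `trash: true`
// with a `where` clause on `deletedAt`.
const trashedPosts = await payload.find({
  collection: 'posts',
  trash: true,
  where: {
    deletedAt: { exists: true },
  },
})
```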
#### Admin Panel
- Adds a dedicated **Trash view**: `/collections/:collectionSlug/trash`
- Default delete action now soft-deletes documents when `trash: true` is
set.
- **Delete confirmation modal** includes a checkbox to permanently
delete instead.
- Trashed documents:
  - Display a UI banner that clearly distinguishes the trashed edit
view from the non-trashed edit view
  - Render in a read-only edit view
  - Still allow access to the **Preview**, **API**, and **Versions** tabs
- Updated Status component:
  - Displays “Previously published” or “Previously a draft” for trashed
documents.
  - Disables status-changing actions when documents are in trash.
- Adds new **Restore** bulk action to clear the `deletedAt` timestamp.
- New `Restore` and `Permanently Delete` buttons for
single-trashed-document restore and permanent deletion.
- **Restore confirmation modal** includes a checkbox to restore as
`published`, defaults to `draft`.
- Adds **Empty Trash** and **Delete permanently** bulk actions.
#### Notes
- This feature is completely opt-in. Collections without `trash: true`
behave exactly as before.
https://github.com/user-attachments/assets/00b83f8a-0442-441e-a89e-d5dc1f49dd37
## Problem:
In PR #11887, a bug fix for `copyToLocale` was introduced to address
issues with copying content between locales in Postgres. However, an
incorrect algorithm was used, which removed all "id" properties from
documents being copied. This led to bug #12536, where `copyToLocale`
would mistakenly delete the document in the source language, affecting
not only Postgres but any database.
## Cause and Solution:
When copying documents with localized arrays or blocks, Postgres throws
errors if there are two blocks with the same ID. This is why PR #11887
removed all IDs from the document to avoid conflicts. However, this
removal was too broad and caused issues in cases where it was
unnecessary.
The correct solution should remove the IDs only in nested fields whose
ancestors are localized. The reasoning is as follows:
- When an array/block is **not localized** (`localized: false`), if it
contains localized fields, these fields share the same ID across
different locales.
- When an array/block **is localized** (`localized: true`), its
descendant fields cannot share the same ID across different locales if
Postgres is being used. This wouldn't be an issue if the table
containing localized blocks had a composite primary key of `locale +
id`. However, since the primary key is just `id`, we need to assign a
new ID for these fields.
This PR properly removes IDs **only for nested fields** whose ancestors
are localized.
Fixes #12536
## Example:
### Before Fix:
```js
// Original document (en)
array: [{
  id: "123",
  text: { en: "English text" }
}]

// After copying to 'es' locale, a new ID was created instead of updating the existing item
array: [{
  id: "456", // 🐛 New ID created!
  text: { es: "Spanish text" } // 🐛 'en' locale is missing
}]
```
### After Fix:
```js
// After fix
array: [{
  id: "123", // ✅ Same ID maintained
  text: {
    en: "English text",
    es: "Spanish text" // ✅ Properly merged with existing item
  }
}]
```
## Additional fixes:
### TraverseFields
In the process of designing an appropriate solution, I detected a couple
of bugs in traverseFields that are also addressed in this PR.
### Fixed MongoDB Empty Array Handling
During testing, I discovered that MongoDB and PostgreSQL behave
differently when querying documents that don't exist in a specific
locale:
- PostgreSQL: Returns the document with data from the fallback locale
- MongoDB: Returns the document with empty arrays for localized fields
This difference caused `copyToLocale` to fail in MongoDB because the
merge algorithm only checked for `null` or `undefined` values, but not
empty arrays. When MongoDB returned `content: []` for a non-existent
locale, the algorithm would attempt to iterate over the empty array
instead of using the source locale's data.
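A minimal sketch of the adjusted check (helper name hypothetical):
```ts
// MongoDB can return `[]` for localized array fields that don't exist in a
// locale; treat that the same as missing data so the source locale is used.
const shouldUseSourceLocale = (value: unknown): boolean =>
  value === null || value === undefined || (Array.isArray(value) && value.length === 0)
```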
### Move e2e test to int
The test introduced in #11887 didn't catch the bug because our e2e suite
doesn't run on Postgres. I migrated the test to an integration test that
does run on Postgres and MongoDB.
Supports grouping documents by specific fields within the list view.
For example, imagine having a "posts" collection with a "categories"
field. To report on each specific category, you'd traditionally filter
for each category, one at a time. This can be quite inefficient,
especially with large datasets.
Now, you can interact with all categories simultaneously, grouped by
distinct values.
Here is a simple demonstration:
https://github.com/user-attachments/assets/0dcd19d2-e983-47e6-9ea2-cfdd2424d8b5
Enable on any collection by setting the `admin.groupBy` property:
```ts
import type { CollectionConfig } from 'payload'

const MyCollection: CollectionConfig = {
  // ...
  admin: {
    groupBy: true,
  },
}
```
This is currently marked as beta to gather feedback while we reach full
stability, and to leave room for API changes and other modifications.
Use at your own risk.
Note: when using `groupBy`, bulk editing is done group-by-group. In the
future we may support cross-group bulk editing.
Dependent on #13102 (merged).
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210774523852467
---------
Co-authored-by: Paul Popus <paul@payloadcms.com>
### What?
Refactors the `LeaveWithoutSaving` modal to be generic and delegates
document unlock logic back to the `DefaultEditView` component via a
callback.
### Why?
Previously, `unlockDocument` was triggered in a cleanup `useEffect` in
the edit view. When logging out from the edit view, the unlock request
would often fail due to the session ending — leaving the document in a
locked state.
### How?
- Introduced `onConfirm` and `onPrevent` props for `LeaveWithoutSaving`.
- Moved all document lock/unlock logic into `DefaultEditView`’s
`handleLeaveConfirm`.
- Captures the next navigation target via `onPrevent` and evaluates
whether to unlock based on:
  - Locking being enabled.
  - The current user owning the lock.
  - Navigation not targeting internal admin views (`/preview`, `/api`,
`/versions`).
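A minimal sketch of the resulting modal contract (prop names from this PR; types simplified and hypothetical):
```ts
// Hypothetical simplification: the modal stays generic, and the edit view
// decides what actually happens when the user confirms leaving.
type LeaveWithoutSavingProps = {
  /** Invoked when the user confirms leaving; DefaultEditView unlocks here. */
  onConfirm?: () => Promise<void> | void
  /** Invoked when navigation is intercepted; receives the intended destination. */
  onPrevent?: (nextHref: string) => void
}
```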
---------
Co-authored-by: Jarrod Flesch <jarrodmflesch@gmail.com>
### What?
Improves both the JSON preview and export functionality in the
import-export plugin:
- Preserves proper nesting of object and array fields (e.g., groups,
tabs, arrays)
- Excludes any fields explicitly marked as `disabled` via
`custom.plugin-import-export`
- Ensures downloaded files use proper JSON formatting when `format` is
`json` (no CSV-style flattening)
### Why?
Previously:
- The JSON preview flattened all fields to a single level and included
disabled fields.
- Exported files with `format: json` were still CSV-style data encoded
as `.json`, rather than real JSON.
### How?
- Refactored `/preview-data` JSON handling to preserve original document
shape.
- Applied `removeDisabledFields` to clean nested fields using
dot-notation paths.
- Updated `createExport` to skip `flattenObject` for JSON formats, using
a nested JSON filter instead.
- Fixed streaming and buffered export paths to output valid JSON arrays
when `format` is `json`.
https://github.com/payloadcms/payload/pull/13186 actually made the
select API _more powerful_, as it can reduce the number of db calls,
even for complex collections with blocks, down to 1.
This PR adds a test that verifies this.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210871676349303
Currently, an optimized DB update (simple data => no
delete-and-create-row) does the following:
1. sql UPDATE
2. sql SELECT
This PR reduces this further to a single DB call for simple
collections:
1. sql UPDATE with a RETURNING clause
This only works for simple collections that do not have any fields that
need to be fetched from other tables. If a collection has fields like
relationship or blocks, we'll need that separate SELECT call to join in
the other tables.
In 4.0, we can remove all "complex" fields from the jobs collection and
replace them with a JSON field to make use of this optimization.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210803039809814
### What?
This PR ensures that when a document is created using the `Publish in
__` button, it is saved to the correct locale.
### Why?
During document creation, the buttons `Publish` or `Publish in [locale]`
have the same effect. As a result, we overlooked the case where a user
may specifically click `Publish in [locale]` for the first save. In this
scenario, the create operation did not respect the
`publishSpecificLocale` value, so the document was always saved in the
default locale regardless of the intended one.
### How?
Passes the `publishSpecificLocale` value to the create operation,
ensuring the document and its version are saved to the correct locale.
**Fixes:** #13117
Some search params within the list view do not properly sync to user
preferences, and vice versa.
For example, when selecting a query preset, the `?preset=123` param is
injected into the URL and saved to preferences, but when reloading the
page without the param, that preset is not reactivated as expected.
### Problem
The reason this wasn't working before is that omitting this param would
also reset prefs. It was designed this way in order to support
client-side resets, e.g. clicking the query presets "x" button.
This pattern could never work, however, because it means that every
time the user navigates to the list view directly, their preference is
cleared, as no param exists in the query.
Note: this is not an issue with _all_ params, as not all are handled in
the same way.
### Solution
The fix is to use empty values instead, e.g. `?preset=`. When the server
receives this, it knows to clear the pref. If it doesn't exist at all,
it knows to load from prefs. And if it has a value, it saves to prefs.
On the client, we sanitize those empty values back out so they don't
appear in the URL in the end.
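A minimal sketch of the server-side decision (names hypothetical):
```ts
// Empty string clears the preference; absence loads the saved preference;
// a concrete value is used and persisted.
const resolvePresetID = (
  param: string | undefined,
  savedPresetID: null | string,
): null | string => {
  if (param === undefined) return savedPresetID // no param: load from preferences
  if (param === '') return null // empty value: clear the preference
  return param // has a value: save it to preferences and use it
}
```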
This PR also refactors much of the list query context and its respective
provider to be significantly more predictable and easier to work with,
namely:
- The `ListQuery` type now fully aligns with what Payload APIs expect,
e.g. `page` is a number, not a string
- The provider now receives a single `query` prop which matches the
underlying context 1:1
- Propagating the query from the server to the URL is significantly more
predictable
- Any new props that may be supported in the future will automatically
work
- No more reconciling `columns` and `listPreferences.columns`, it's just
`query.columns`
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210827129744922
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.
Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs
## API Example
```ts
export default buildConfig({
  // ...
  jobs: {
    // ...
    scheduler: 'manual', // Or `cron` if you're not using serverless. If `manual` is used, then the user needs to set up running /api/payload-jobs/handleSchedules or payload.jobs.handleSchedules at regular intervals
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule` would be more error-prone if this was an array
              beforeSchedule: async (args) => {
                // Handles verifying that there are no jobs already scheduled or processing.
                // You can override this behavior by not calling defaultBeforeSchedule, e.g. if you wanted
                // to allow a maximum of 3 scheduled jobs in the queue instead of 1, or add any additional conditions
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Handles updating the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                    (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      },
    ],
  },
})
```
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210495300843759
### What?
Fixes the `custom.plugin-import-export.disabled` flag to correctly
disable fields in all nested structures including:
- Groups
- Arrays
- Tabs
- Blocks
Previously, only top-level fields or direct children were respected.
This update ensures nested paths (e.g. `group.array.field1`,
`blocks.hero.title`, etc.) are matched and filtered from exports.
### Why?
To allow users to disable entire field groups or deeply nested fields in
structured layouts.
### How?
- Updated regex logic in both `createExport` and Preview components to
recursively support:
  - Indexed array fields (e.g. `array_0_field1`)
  - Block fields with slugs (e.g. `blocks_0_hero_title`)
  - Nested field accessors with correct part-by-part expansion
As discussed in [this
RFC](https://github.com/payloadcms/payload/discussions/12729), this PR
supports collection-scoped folders. You can scope folders to multiple
collection types or just one.
This unlocks the possibility of having folders on a per-collection
basis instead of always being shared across every collection. You can
combine this feature with `browseByFolder: false` to completely isolate
a collection from other collections.
Things left to do:
- [x] ~~Create a custom react component for the selecting of
collectionSlugs to filter out available options based on the current
folders parameters~~
https://github.com/user-attachments/assets/14cb1f09-8d70-4cb9-b1e2-09da89302995
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210564397815557
Adds a new operation `findDistinct` that returns the distinct values of
a field for a given collection.
Example:
Assume you have a `posts` collection with multiple documents, and some
of them share the same title:
```js
// Example dataset (some titles appear multiple times)
[
  { title: 'title-1' },
  { title: 'title-2' },
  { title: 'title-1' },
  { title: 'title-3' },
  { title: 'title-2' },
  { title: 'title-4' },
  { title: 'title-5' },
  { title: 'title-6' },
  { title: 'title-7' },
  { title: 'title-8' },
  { title: 'title-9' },
]
```
You can now retrieve all unique title values using findDistinct:
```js
const result = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
})

console.log(result.values)
// Output:
// [
// 'title-1',
// 'title-2',
// 'title-3',
// 'title-4',
// 'title-5',
// 'title-6',
// 'title-7',
// 'title-8',
// 'title-9'
// ]
```
You can also limit the number of distinct results:
```js
const limitedResult = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  sortOrder: 'desc',
  limit: 3,
})

console.log(limitedResult.values)
// Output:
// [
//   'title-9',
//   'title-8',
//   'title-7'
// ]
```
You can also pass a `where` query to filter the documents.
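For example (the `_status` filter assumes a drafts-enabled collection):
```ts
// Only consider published posts when collecting distinct titles.
const filteredResult = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  where: {
    _status: { equals: 'published' },
  },
})
```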
### What?
Adds four more arguments to the `mongooseAdapter`:
```typescript
useJoinAggregations?: boolean /* The big one */
useAlternativeDropDatabase?: boolean
useBigIntForNumberIDs?: boolean
usePipelineInSortLookup?: boolean
```
Also exports a new `compatabilityOptions` object from
`@payloadcms/db-mongodb` where each key is a mongo-compatible database
and the value is the recommended `mongooseAdapter` settings for
compatibility.
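A minimal usage sketch (assuming a `firestore` key exists on the exported object, per the description above):
```ts
import { compatabilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'

// Spread the recommended settings for a given mongo-compatible database
// into the adapter config.
export const db = mongooseAdapter({
  url: process.env.DATABASE_URI || '',
  ...compatabilityOptions.firestore,
})
```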
### Why?
When using Firestore and visiting
`/admin/collections/media/payload-folders`, we get:
```
MongoServerError: invalid field(s) in lookup: [let, pipeline], only lookup(from, localField, foreignField, as) is supported
```
Firestore doesn't support the full MongoDB aggregation API, which
Payload uses when building aggregations for populating join
fields.
There are several other compatibility issues with Firestore:
- The invalid `pipeline` property is used in the `$lookup` aggregation
in `buildSortParams`
- Firestore only supports number IDs of type `Long`, but Mongoose
converts custom ID fields of type number to `Double`
- Firestore does not support the `dropDatabase` command
- Firestore does not support the `createIndex` command (not addressed in
this PR)
### How?
```typescript
useJoinAggregations?: boolean /* The big one */
```
When this is `false` we skip the `buildJoinAggregation()` pipeline and resolve the join fields through multiple queries. This can potentially be used with AWS DocumentDB and Azure Cosmos DB to support join fields, but I have not tested with either of these databases.
```typescript
useAlternativeDropDatabase?: boolean
```
When `true`, monkey-patches (replaces) the `dropDatabase` function so
that it calls `collection.deleteMany({})` on every collection instead
of sending a single `dropDatabase` command to the database.
```typescript
useBigIntForNumberIDs?: boolean
```
When `true`, uses `mongoose.Schema.Types.BigInt` for custom ID fields of type `number`, which converts to a Firestore `Long` behind the scenes.
```typescript
usePipelineInSortLookup?: boolean
```
When `false`, modify the sortAggregation pipeline in `buildSortParams()` so that we don't use the `pipeline` property in the `$lookup` aggregation. Results in slightly worse performance when sorting by relationship properties.
### Limitations
This PR does not add support for transactions or creating indexes in Firestore.
### Fixes
Fixed a bug (and added a test) where you weren't able to sort by multiple properties on a relationship field.
### Future work
1. Firestore supports simple `$lookup` aggregations but other databases might not. Could add a `useSortAggregations` property which can be used to disable aggregations in sorting.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
This is a follow-up to https://github.com/payloadcms/payload/pull/13060.
There are a bunch of other db adapter methods that use `upsertRow` for
updates: `updateGlobal`, `updateGlobalVersion`, `updateJobs`,
`updateMany`, `updateVersion`.
The previous PR implemented the optimized row-update logic inside the
`updateOne` adapter. This PR moves that logic into the original
`upsertRow` function. Benefits:
- all the other db methods will benefit from this massive optimization
as well. This will be especially relevant for optimizing postgres job
queue initial updates - we should be able to close
https://github.com/payloadcms/payload/pull/11865 after another follow-up
PR
- easier-to-read db adapter methods due to less code.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210803039809810
Based on https://github.com/payloadcms/payload/pull/13060, which should
be merged first.
This PR adds the ability to update number fields atomically, which can
be important with parallel writes. For now we support this only via
`payload.db.updateOne`.
For example:
```js
// increment by 10
const res = await payload.db.updateOne({
  data: {
    number: {
      $inc: 10,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})

// decrement by 3
const res2 = await payload.db.updateOne({
  data: {
    number: {
      $inc: -3,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})
```
### What?
Fixes the export field selection dropdown to correctly differentiate
between fields in named and unnamed tabs.
### Why?
Previously, when a `tabs` field contained both named and unnamed tabs,
subfields with the same `name` would appear as duplicates in the
dropdown (e.g. `Tab To CSV`, `Tab To CSV`). Additionally, selecting a
field from a named tab would incorrectly map it to the unnamed version
due to shared labels and missing path prefixes.
### How?
- Updated the `reduceFields` utility to manually construct the field
path and label using the tab’s `name` if present.
- Ensured unnamed tabs treat subfields as top-level and skip prefixing
altogether.
- Adjusted label prefix logic to show `Named Tab > Field Name` when
appropriate.
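A minimal sketch of the prefixing rule described above (helper name hypothetical):
```ts
// Named tabs prefix their subfields' paths and labels; unnamed tabs expose
// subfields as top-level and skip prefixing altogether.
const getTabFieldPath = (tab: { name?: string }, fieldName: string): string =>
  tab.name ? `${tab.name}.${fieldName}` : fieldName
```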
#### Before
<img width="169" height="79" alt="Screenshot 2025-07-15 at 2 55 14 PM"
src="https://github.com/user-attachments/assets/2ab2d19e-41a3-4be2-8496-1da2a79f88e1"
/>
#### After
<img width="211" height="79" alt="Screenshot 2025-07-15 at 2 50 38 PM"
src="https://github.com/user-attachments/assets/0620e96a-71cd-4eb1-9396-30d461ed47a5"
/>
Previously, we were always initializing cron jobs when calling
`getPayload` or `payload.init`.
This is undesired in bin scripts - we don't want cron jobs to start
triggering db calls while we're running an initial migration using
`payload migrate`, for example. This previously led to a race
condition, occasionally triggering the following error if job autoruns
were enabled:
```ts
DrizzleQueryError: Failed query: select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5
params: true,false,2025-07-10T21:25:03.002Z,autorunSecond,100
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:74:11)
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
... 6 lines matching cause stack trace ...
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
query: `select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5`,
params: [ true, false, '2025-07-10T21:25:03.002Z', 'autorunSecond', 100 ],
cause: error: relation "payload_jobs" does not exist
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/pg@8.16.3/node_modules/pg/lib/client.js:545:17
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:161:13
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:72:12)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
at find (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/find/findMany.ts:162:19)
at Object.updateMany (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/updateJobs.ts:26:16)
at updateJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/utilities/updateJob.ts:102:37)
at runJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/operations/runJobs/index.ts:181:25)
at Object.run (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/localAPI.ts:137:12)
at N.fn (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/index.ts:866:13)
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
length: 112,
severity: 'ERROR',
code: '42P01',
detail: undefined,
hint: undefined,
position: '406',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_relation.c',
line: '1449',
routine: 'parserOpenTable'
}
}
```
This PR makes running crons opt-in using a new `cron` flag. By default,
no cron jobs will be created.
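A minimal sketch of the opt-in (assuming the flag is passed wherever Payload is initialized):
```ts
import { getPayload } from 'payload'

import config from '@payload-config'

// Without `cron: true`, no cron jobs are created - safe for bin scripts
// and migrations. Pass the flag only where autoruns should actually run.
const payload = await getPayload({ config, cron: true })
```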
### What?
Adds a new `format` option to the `plugin-import-export` config that
allows users to force the export format (`csv` or `json`) and hide the
format dropdown from the export UI.
### Why?
In some use cases, allowing the user to select between CSV and JSON is
unnecessary or undesirable. This new option allows plugin consumers to
lock the format and simplify the export interface.
### How?
- Added a `format?: 'csv' | 'json'` field to `ImportExportPluginConfig`.
- When defined, the `format` field in the export UI is:
  - Hidden via `admin.condition`
  - Pre-filled via `defaultValue`
- Updated `getFields` to accept the plugin config and apply logic
accordingly.
### Example
```ts
importExportPlugin({
  format: 'json',
})
```