2665 Commits

Author SHA1 Message Date
Jarrod Flesch
43b4b22af9 fix: svg uploads allowed when glob (#13356)
Builds on top of https://github.com/payloadcms/payload/pull/13276 and
allows `image/*` globs to also parse and accept SVG files correctly.
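A minimal sketch of an upload collection restricted by a MIME glob, assuming the standard `upload.mimeTypes` option (collection slug and fields are illustrative):

```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    // The `image/*` glob now also accepts `image/svg+xml` uploads
    mimeTypes: ['image/*'],
  },
  fields: [],
}
```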
2025-08-04 23:47:37 -04:00
Alessio Gravili
1d70d4d36c fix(next): version view did not properly handle field-level permissions (#13336)
Field-level permissions were not handled correctly at all. If a field had
access control set, its nested fields would incorrectly be omitted from
the version view.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210932060696919
2025-08-02 12:01:48 -07:00
Sasha
f432cc1956 feat(graphql): allow to pass count: true to a join query (#13351)
Fixes https://github.com/payloadcms/payload/issues/13077
2025-08-01 18:05:54 +03:00
Jarrod Flesch
2903486974 fix(ui): group/array error paths persisting when valid (#13347)
Fields such as groups and arrays would not always reset errorPaths when
there were no more errors. The server and client state were not being
merged safely, and the client state always persisted when the server
sent back no errorPaths, i.e. for iterable fields with fully valid
children. This change ensures errorPaths defaults to an empty array
if it is not present on the incoming field.

Likely a regression from
https://github.com/payloadcms/payload/pull/9388.

Adds e2e test.
2025-08-01 16:04:51 +01:00
Patrik
11755089f8 feat: adds trash support to the count operation (#13304)
### What?

- Updated the `countOperation` to respect the `trash` argument.

### Why?

- Previously, `count` would incorrectly include trashed documents even
when `trash` was not specified.
- This change aligns `count` behavior with `find` and other operations,
providing accurate counts for normal and trashed documents.

### How?

- Applied `appendNonTrashedFilter` in `countOperation` to automatically
exclude soft-deleted docs when `trash: false` (default).
- Added `trash` argument support in Local API, REST API (`/count`
endpoints), and GraphQL (`count<Collection>` queries); see the sketch below.
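A minimal sketch of the new `trash` argument on the Local API `count` operation, assuming a `posts` collection with trash enabled:

```ts
// Default: trashed (soft-deleted) documents are excluded from the count
const { totalDocs } = await payload.count({ collection: 'posts' })

// Opt in to counting trashed documents as well
const { totalDocs: totalWithTrashed } = await payload.count({
  collection: 'posts',
  trash: true,
})
```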
2025-07-30 14:11:11 -07:00
Patrik
a8b6983ab5 test: adds e2e tests for auth enabled collections with trash enabled (#13317)
### What?
- Added new end-to-end tests covering trash functionality for
auth-enabled collections (e.g., `users`).
- Implemented test cases for:
  - Display of the trash tab in the list view.
  - Trashing a user and verifying its appearance in the trash view.
  - Accessing the trashed user edit view.
  - Ensuring all auth fields are properly disabled in trashed state.
  - Restoring a trashed user and verifying its status.

### Why?
- To ensure that the trash (soft-delete) feature works consistently for
collections with `auth: true`.
- To prevent regressions in user management flows, especially around
disabling and restoring trashed users.

### How?
- Added a new `Auth enabled collection` test suite in the E2E `Trash`
tests.
2025-07-30 14:11:02 -07:00
Sasha
b26a73be4a fix: querying hasMany: true select fields inside polymorphic joins (#13334)
This PR fixes queries like this:

```ts
const findFolder = await payload.find({
  collection: 'payload-folders',
  where: {
    id: {
      equals: folderDoc.id,
    },
  },
  joins: {
    documentsAndFolders: {
      limit: 100_000,
      sort: 'name',
      where: {
        and: [
          {
            relationTo: {
              equals: 'payload-folders',
            },
          },
          {
            folderType: {
              in: ['folderPoly1'], // previously this didn't work
            },
          },
        ],
      },
    },
  },
})
```

Additionally, this PR potentially fixes querying JSON fields by the
top-level path. For example, if your JSON field has a value like `[1, 2]`,
the query `where: { json: { equals: 1 } }` previously didn't work, although
with a value like `{ nested: [1, 2] }` the query
`where: { 'json.nested': { equals: 1 } }` did.

---------

Co-authored-by: Jarrod Flesch <jarrodmflesch@gmail.com>
2025-07-30 15:30:20 -04:00
Alessio Gravili
3114b89d4c perf: 23% faster job queue system on postgres/sqlite (#13187)
Previously, a single run of the simplest job queue workflow (1 single
task, no db calls by user code in the task - we're just testing db
system overhead) would result in **22 db roundtrips** on drizzle. This
PR reduces it to **17 db roundtrips** by doing the following:

- Modifies db.updateJobs to use the new optimized upsertRow function if
the update is simple
- Does not unnecessarily pass the job log to the final job update when the
workflow completes => allows using the optimized upsertRow function, as
only the main table is involved

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210888186878606
2025-07-30 16:23:43 +03:00
Evelyn Hathaway
227a20e94b fix(richtext-lexical): recursively unwrap generic Slate nodes in Lexical migration converter (#13202)
## What?

The Slate to Lexical migration script assumes that the depth of Slate
nodes matches the depth of the Lexical schema, which isn't necessarily
true. This pull request fixes this assumption by first checking for
children and unwrapping the text nodes.

## Why?

During my migration, I ran into a lot of copy-and-pasted rich text
containing list items with untyped nodes that have `children`. The existing
migration script assumed that since list items can't have paragraphs, all
untyped nodes inside must be text nodes.

The result of the migration script was a lot of invalid text nodes with
`text: undefined` and all of the content in the `children` being
silently lost. Beyond the silent loss, the invalid text nodes caused the
Lexical editor to unmount with an error about accessing `0 of
undefined`, so those documents couldn't be edited.

This additionally makes the migration script more closely align with the
[recursive serialization logic recommendation from the Payload Slate
Rich Text
documentation](https://payloadcms.com/docs/rich-text/slate#generating-html).
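A rough sketch of the recursive unwrapping idea (not the actual converter code; the node shape and helper name are illustrative):

```ts
type SlateNode = { children?: SlateNode[]; text?: string; type?: string }

// Recursively flatten untyped wrapper nodes down to their text descendants
const unwrapTextNodes = (node: SlateNode): SlateNode[] => {
  if (typeof node.text === 'string') {
    // A real text node - keep it as-is
    return [node]
  }
  if (Array.isArray(node.children)) {
    // Generic (paragraph-like) node - descend instead of assuming it is text
    return node.children.flatMap(unwrapTextNodes)
  }
  // Nothing recoverable
  return []
}
```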

## Visualization

### Slate

```txt
Slate rich text content
┣━┳━ Unordered list
┋ ┣━┳━ List item
┋ ┋ ┗━┳━ Generic (paragraph-like, untyped with children)
┋ ┋   ┣━━━ Text (untyped) `Hello `
┋ ┋   ┗━━━ Text (untyped) `World!`
[...]
```

### Lexical Before PR

```txt
Lexical rich text content (invalid)
┣━┳━ Unordered list
┋ ┣━┳━ List item
┋ ┋ ┗━━━ Invalid text (assumed the generic node was text and stopped processing children; the lost text cannot be restored without restoring a backup with Slate and rerunning the script after this PR)
[...]
```

### Lexical After PR

```txt
Lexical rich text content
┣━┳━ Unordered list
┋ ┣━┳━ List item
┋ ┋ ┣━━━ Text `Hello `
┋ ┋ ┗━━━ Text `World!`
[...]
```

---------

Co-authored-by: German Jablonski <43938777+GermanJablo@users.noreply.github.com>
2025-07-30 13:16:18 +00:00
Jarrod Flesch
a22f27de1c test: stabilize frequent fails (#13318)
Adjusts tests that "flake" frequently.
2025-07-30 05:52:01 -07:00
Patrik
e7124f6176 fix(next): cannot filter trash (#13320)
### What?

- Updated `TrashView` to pass `trash: true` as a dedicated prop instead
of embedding it in the `query` object.
- Modified `renderListView` to correctly merge `trash` and `where`
queries by using both `queryFromArgs` and `queryFromReq`.
- Ensured filtering (via `where`) works correctly in the trash view.

### Why?

Previously, the `trash: true` flag was injected into the `query` object,
and `renderListView` only used `queryFromArgs`.
This caused the `where` clause from filters (added by the
`WhereBuilder`) to be overridden, breaking filtering in the trash view.

### How?

- Introduced an explicit `trash` prop in `renderListView` arguments.
- Updated `TrashView` to pass `trash: true` separately.
- Updated `renderListView` to apply the `trash` filter in addition to
any `where` conditions.
2025-07-29 14:29:04 -07:00
contip
b1fa76e397 fix: keep apiKey encrypted in refresh operation (#13063) (#13177)
### What?
Prevents decrypted apiKey from being saved back to database on the auth
refresh operation.

### Why?
References issue #13063: refreshing a token for a logged-in user
decrypted `apiKey` and wrote it back in plaintext, corrupting the user
record.

### How?
The user is now fetched with `db.findOne` instead of `findByID`,
preserving the encryption of the key when saved back to the database
using `db.updateOne`. The user record is then re-fetched using
`findByID`, allowing for the decrypted key to be provided in the
response.

### Tests
*  keeps apiKey encrypted in DB after refresh
*  returns user with decrypted apiKey after refresh

Fixes #13063
2025-07-29 16:27:45 -04:00
German Jablonski
08942494e3 fix: filters cookies with the payload- prefix in getExternalFile by default (#13215)
### What

- filters cookies with the `payload-` prefix in `getExternalFile` by
default (if `externalFileHeaderFilter` is not used).
- Documents in `externalFileHeaderFilter` that the user should handle
removing the payload cookie themselves.

### Why

In the Payload application, the `getExternalFile` function sends the
user's cookies to an external server when fetching media, inadvertently
exposing the user's session to that third-party service.




```ts
const headers = uploadConfig.externalFileHeaderFilter
  ? uploadConfig.externalFileHeaderFilter(Object.fromEntries(new Headers(req.headers)))
  : { cookie: req.headers?.get('cookie') };

const res = await fetch(fileURL, {
  credentials: 'include',
  headers,
  method: 'GET',
});
```
Although the
[externalFileHeaderFilter](https://payloadcms.com/docs/upload/overview#collection-upload-options)
function can strip sensitive cookies from the request, the default
config includes the session cookie, violating the secure-by-default
principle.
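For reference, a hedged sketch of a collection-level `externalFileHeaderFilter` that strips `payload-` cookies itself, mirroring the new default (collection slug and fields are illustrative):

```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    externalFileHeaderFilter: (headers) => {
      const { cookie, ...rest } = headers
      // Drop any cookie that starts with the `payload-` prefix
      const filteredCookie = (cookie ?? '')
        .split(';')
        .map((part) => part.trim())
        .filter((part) => !part.startsWith('payload-'))
        .join('; ')
      return filteredCookie ? { ...rest, cookie: filteredCookie } : rest
    },
  },
  fields: [],
}
```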

### How

- If `externalFileHeaderFilter` is not defined, any cookie beginning
with `payload-` is filtered.
- Added 2 tests: both for the case where `externalFileHeaderFilter` is
defined and for the case where it is not.





---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210561338171125
2025-07-29 16:21:50 -04:00
Jacob Fletcher
e50220374e fix(next): group by null relationship crashes list view (#13315)
When grouping by a relationship field whose value is `null`, the list
view crashes.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210916642997992
2025-07-29 11:55:02 -04:00
Jacob Fletcher
61ee8fadca fix(ui): autosave-enabled document drawers close unexpectedly within the join field (#13298)
Fixes #12975.

When editing autosave-enabled documents through the join field, the
document drawer closes unexpectedly on every autosave interval, making
it nearly impossible to use.

This is because, as of #12842, the underlying relationship table
re-renders on every autosave event, remounting the drawer each time. The
fix is to lift the drawer out of the table's rendering tree and into the
join field itself. This way all rows share the same drawer, whose
rendering lifecycle is completely decoupled from the table's state.

Note: this mirrors how relationship fields achieve the same
functionality.

This PR also adds jsdocs to the `useDocumentDrawer` hook and strengthens
its types.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210906078627353
2025-07-29 11:49:15 -04:00
Jarrod Flesch
8d84352ee9 fix(next): catch list filter errors, prevent list view crash (#13297)
Catches list filter errors and prevents the list view from crashing when
attempting to search on fields the user does not have access to. Instead,
it just shows the default “no results found” message.
2025-07-29 11:30:07 -04:00
Jarrod Flesch
72349245ca test: fix flaky sorting test (#13303)
Ensures the browser uses fresh data after seeding by refreshing the
route and navigating when done.
2025-07-29 03:25:09 +00:00
Alessio Gravili
4fde0f23ce fix: use atomic operation for incrementing login attempts (#13204)
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210561338171141
2025-07-28 16:08:10 -07:00
Patrik
aff2ce1b9b fix(next): unable to view trashed documents when group-by is enabled (#13300)
### What?

- Fixed an issue where group-by enabled collections with `trash: true`
were not showing trashed documents in the collection’s trash view.
- Ensured that the `trash` query argument is properly passed to the
`findDistinct` call within `handleGroupBy`, allowing trashed documents
to be included in grouped list views.

### Why?

Previously, when viewing the trash view of a collection with both
**group-by** and **trash** enabled, trashed documents would not appear.
This was caused by the `trash` argument not being forwarded to
`findDistinct` in `handleGroupBy`, which resulted in empty or incorrect
group-by results.

### How?

- Passed the `trash` flag through all relevant `findDistinct` and `find`
calls in `handleGroupBy`.
2025-07-28 11:29:04 -07:00
Alessio Gravili
5c94d2dc71 feat: support next.js 15.4.4 (#13280)
- bumps next.js from 15.3.2 to 15.4.4 in monorepo and templates. It's
important to run our tests against the latest Next.js version to
guarantee full compatibility.
- bumps playwright because of peer dependency conflict with next 15.4.4
- bumps react types because why not

https://nextjs.org/blog/next-15-4

As part of this upgrade, the functionality added by
https://github.com/payloadcms/payload/pull/11658 broke. This PR fixes it
by creating a wrapper around `React.isValidElement` that works for
Next.js 15.4.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210803039809808
2025-07-28 16:23:43 +00:00
Sean Zubrickas
d093bb1f00 fix: refactors toast error rendering (#13252)
Fixes #13191

- Renders a single HTML element for single error messages
- Preserves the `ul` structure for multiple errors
- Updates tests to check for both cases
2025-07-28 05:59:25 -07:00
Alessio Gravili
6d6c9ebc56 perf(drizzle): 2x faster db.deleteMany (#13255)
Previously, `db.deleteMany` on postgres resulted in 2 roundtrips to the
database (find + delete with ids). This PR passes the where query
directly to the `deleteWhere` function, resulting in only one roundtrip
to the database (delete with where).

If the where query queries other tables (=> joins required), this falls
back to find + delete with ids. However, this is also more optimized
than before, as we now pass `select: { id: true }` to the findMany
query.
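For context, a hedged sketch of the two code paths described above (collection and field names are illustrative):

```ts
// Simple where on the collection's own table:
// now a single DELETE ... WHERE roundtrip
await payload.db.deleteMany({
  collection: 'posts',
  where: { status: { equals: 'draft' } },
})

// Where clause that requires joining another table (e.g. a relationship):
// falls back to find (selecting only ids) + delete by ids
await payload.db.deleteMany({
  collection: 'posts',
  where: { 'author.email': { equals: 'someone@example.com' } },
})
```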

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210871676349299
2025-07-25 15:46:09 -07:00
German Jablonski
7cd4a8a602 fix(richtext-lexical): unify indent between different converters and make paragraphs and lists match without CSS (#13274)
Previously, the Lexical editor was using px, and the JSX converter was
using rem. #12848 fixed the inconsistency by changing the editor to rem,
but it should have been the other way around, changing the JSX converter
to px.

You can see the latest explanation about why it should be 40px
[here](https://github.com/payloadcms/payload/issues/13130#issuecomment-3058348085).
In short, that's the default indentation all browsers use for lists.

This time I'm making sure to leave clear comments everywhere and a test
to avoid another regression.

Here is an image of what the e2e test looks like:

<img width="321" height="678" alt="image"
src="https://github.com/user-attachments/assets/8880c7cb-a954-4487-8377-aee17c06754c"
/>

The first part is the Lexical editor, the second is the JSX converter.

As you can see, the checkbox in JSX looks a little odd because it uses
an input checkbox (as opposed to a pseudo-element in the Lexical
editor). I thought about adding an inline style to move it slightly to
the left, but I found that browsers don't have a standard size for the
checkbox; it varies by browser and device.
That requires a little more thought; I'll address that in a future PR.

Fixes #13130
2025-07-25 22:58:49 +01:00
Jarrod Flesch
bc802846c5 fix: serve svg+xml as svg (#13277)
Based on https://github.com/payloadcms/payload/pull/13276

Fixes https://github.com/payloadcms/payload/issues/7624

If an uploaded image has the `.svg` extension and the mimeType is read as
`application/xml`, adjust the mimeType to `image/svg+xml`.

---------

Co-authored-by: Philipp Schneider <47689073+philipp-tailor@users.noreply.github.com>
2025-07-25 21:00:51 +00:00
Jarrod Flesch
e8f6cb5ed1 fix: svg+xml file detection (#13276)
Adds logic for svg+xml file type detection.

---------

Co-authored-by: Philipp Schneider <47689073+philipp-tailor@users.noreply.github.com>
2025-07-25 18:33:53 +00:00
Sasha
75385de01f fix: filtering by polymorphic relationships inside other fields (#13265)
Previously, filtering by a polymorphic relationship nested inside an
array, group (unless the `name` is `version`), or tab caused `QueryError:
The following path cannot be queried:`.
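A hedged sketch of the kind of query that previously threw this error (field names are illustrative; polymorphic relationships are queried via their `relationTo`/`value` sub-paths):

```ts
const result = await payload.find({
  collection: 'posts',
  where: {
    // `items` is an array field containing a polymorphic `owner` relationship
    'items.owner.relationTo': { equals: 'users' },
    'items.owner.value': { equals: 'some-user-id' },
  },
})
```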
2025-07-25 09:10:21 -04:00
Patrik
f63dc2a10c feat: adds trash support (soft deletes) (#12656)
### What?

This PR introduces complete trash (soft-delete) support. When a
collection is configured with `trash: true`, documents can now be
soft-deleted and restored via both the API and the admin panel.

```ts
import type { CollectionConfig } from 'payload'

const Posts: CollectionConfig = {
  slug: 'posts',
  trash: true, // <-- New collection config prop @default false
  fields: [
    {
      name: 'title',
      type: 'text',
    },
    // other fields...
  ],
}
```

### Why?

Soft deletes allow developers and admins to safely remove documents
without losing data immediately. This enables workflows like reversible
deletions, trash views, and auditing—while preserving compatibility with
drafts, autosave, and version history.

### How?

#### Backend

- Adds new `trash: true` config option to collections.
- When enabled:
  - A `deletedAt` timestamp is conditionally injected into the schema.
- Soft deletion is performed by setting `deletedAt` instead of removing
the document from the database.
- Extends all relevant API operations (`find`, `findByID`, `update`,
`delete`, `versions`, etc.) to support a new `trash` param:
  - `trash: false` → excludes trashed documents (default)
  - `trash: true` → includes both trashed and non-trashed documents
- To query **only trashed** documents, use `trash: true` with a `where`
clause like `{ deletedAt: { exists: true } }` (see the sketch after this list)
- Enforces delete access control before allowing a soft delete via
update or updateByID.
- Disables version restoring on trashed documents (must be restored
first).
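A minimal sketch of the `trash` param on the Local API, following the rules in the list above (collection slug is illustrative):

```ts
// Default (`trash: false`): trashed documents are excluded
const activeDocs = await payload.find({ collection: 'posts' })

// Include both trashed and non-trashed documents
const allDocs = await payload.find({ collection: 'posts', trash: true })

// Only trashed documents
const trashedDocs = await payload.find({
  collection: 'posts',
  trash: true,
  where: { deletedAt: { exists: true } },
})
```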

#### Admin Panel

- Adds a dedicated **Trash view**: `/collections/:collectionSlug/trash`
- Default delete action now soft-deletes documents when `trash: true` is
set.
- **Delete confirmation modal** includes a checkbox to permanently
delete instead.
- Trashed documents:
  - Display a UI banner to clearly distinguish the trashed document edit
view from the non-trashed edit view
  - Render in a read-only edit view
  - Still allow access to **Preview**, **API**, and **Versions** tabs
- Updated Status component:
  - Displays “Previously published” or “Previously a draft” for trashed
documents.
  - Disables status-changing actions when documents are in trash.
- Adds new **Restore** bulk action to clear the `deletedAt` timestamp.
- New `Restore` and `Permanently Delete` buttons for
single-trashed-document restore and permanent deletion.
- **Restore confirmation modal** includes a checkbox to restore as
`published`, defaults to `draft`.
- Adds **Empty Trash** and **Delete permanently** bulk actions.
  
#### Notes

- This feature is completely opt-in. Collections without `trash: true`
behave exactly as before.



https://github.com/user-attachments/assets/00b83f8a-0442-441e-a89e-d5dc1f49dd37
2025-07-25 09:08:22 -04:00
German Jablonski
4a712b3483 fix(ui): preserve localized blocks and arrays when using CopyToLocale (#13216)
## Problem:
In PR #11887, a bug fix for `copyToLocale` was introduced to address
issues with copying content between locales in Postgres. However, an
incorrect algorithm was used, which removed all "id" properties from
documents being copied. This led to bug #12536, where `copyToLocale`
would mistakenly delete the document in the source language, affecting
not only Postgres but any database.

## Cause and Solution:

When copying documents with localized arrays or blocks, Postgres throws
errors if there are two blocks with the same ID. This is why PR #11887
removed all IDs from the document to avoid conflicts. However, this
removal was too broad and caused issues in cases where it was
unnecessary.


The correct solution should remove the IDs only in nested fields whose
ancestors are localized. The reasoning is as follows:
- When an array/block is **not localized** (`localized: false`), if it
contains localized fields, these fields share the same ID across
different locales.
- When an array/block **is localized** (`localized: true`), its
descendant fields cannot share the same ID across different locales if
Postgres is being used. This wouldn't be an issue if the table
containing localized blocks had a composite primary key of `locale +
id`. However, since the primary key is just `id`, we need to assign a
new ID for these fields.

This PR properly removes IDs **only for nested fields** whose ancestors
are localized.

Fixes #12536

## Example:
### Before Fix:
```js
// Original document (en)
array: [{
  id: "123",
  text: { en: "English text" }
}]

// After copying to 'es' locale, a new ID was created instead of updating the existing item
array: [{
  id: "456",  // 🐛 New ID created!
  text: { es: "Spanish text" } // 🐛 'en' locale is missing
}]
```
### After fix:
```js
// After fix
array: [{
  id: "123",  //  Same ID maintained
  text: {
    en: "English text",
    es: "Spanish text"  //  Properly merged with existing item
  }
}]
```


## Additional fixes:

### TraverseFields

In the process of designing an appropriate solution, I detected a couple
of bugs in traverseFields that are also addressed in this PR.

### Fixed MongoDB Empty Array Handling

During testing, I discovered that MongoDB and PostgreSQL behave
differently when querying documents that don't exist in a specific
locale:
- PostgreSQL: Returns the document with data from the fallback locale
- MongoDB: Returns the document with empty arrays for localized fields

This difference caused `copyToLocale` to fail in MongoDB because the
merge algorithm only checked for `null` or `undefined` values, but not
empty arrays. When MongoDB returned `content: []` for a non-existent
locale, the algorithm would attempt to iterate over the empty array
instead of using the source locale's data.

### Move e2e test to int

The test introduced in #11887 didn't catch the bug because our e2e suite
doesn't run on Postgres. I migrated the test to an integration test that
does run on Postgres and MongoDB.
2025-07-24 20:37:13 +01:00
Jarrod Flesch
fa7d209cc9 fix(ui): incorrect blocks label sizing (#13264)
Blocks container labels should match the Array and Tab labels. Uses the
same styling approach as Array labels.

### Before
<img width="229" height="260" alt="CleanShot 2025-07-24 at 12 26 38"
src="https://github.com/user-attachments/assets/9c4eb7c5-3638-4b47-805b-1206f195f5eb"
/>

### After
<img width="245" height="259" alt="CleanShot 2025-07-24 at 12 27 00"
src="https://github.com/user-attachments/assets/c04933b4-226f-403b-9913-24ba00857aab"
/>
2025-07-24 19:34:29 +00:00
Jacob Fletcher
bccf6ab16f feat: group by (#13138)
Supports grouping documents by specific fields within the list view.

For example, imagine having a "posts" collection with a "categories"
field. To report on each specific category, you'd traditionally filter
for each category, one at a time. This can be quite inefficient,
especially with large datasets.

Now, you can interact with all categories simultaneously, grouped by
distinct values.

Here is a simple demonstration:


https://github.com/user-attachments/assets/0dcd19d2-e983-47e6-9ea2-cfdd2424d8b5

Enable on any collection by setting the `admin.groupBy` property:

```ts
import type { CollectionConfig } from 'payload'

const MyCollection: CollectionConfig = {
  // ...
  admin: {
    groupBy: true
  }
}
```

This is currently marked as beta to gather feedback while we reach full
stability, and to leave room for API changes and other modifications.
Use at your own risk.

Note: when using `groupBy`, bulk editing is done group-by-group. In the
future we may support cross-group bulk editing.

Dependent on #13102 (merged).

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210774523852467

---------

Co-authored-by: Paul Popus <paul@payloadcms.com>
2025-07-24 14:00:52 -04:00
Patrik
7e81d30808 fix(ui): ensure document unlocks when logging out from edit view of a locked document (#13142)
### What?

Refactors the `LeaveWithoutSaving` modal to be generic and delegates
document unlock logic back to the `DefaultEditView` component via a
callback.

### Why?

Previously, `unlockDocument` was triggered in a cleanup `useEffect` in
the edit view. When logging out from the edit view, the unlock request
would often fail due to the session ending — leaving the document in a
locked state.

### How?

- Introduced `onConfirm` and `onPrevent` props for `LeaveWithoutSaving`.
- Moved all document lock/unlock logic into `DefaultEditView`’s
`handleLeaveConfirm`.
- Captures the next navigation target via `onPrevent` and evaluates
whether to unlock based on:
  - Locking being enabled.
  - Current user owning the lock.
- Navigation not targeting internal admin views (`/preview`, `/api`,
`/versions`).

---------

Co-authored-by: Jarrod Flesch <jarrodmflesch@gmail.com>
2025-07-24 09:18:49 -07:00
Sasha
a83ed5ebb5 fix(db-postgres): search is broken when useAsTitle is not specified (#13232)
Fixes https://github.com/payloadcms/payload/issues/13171
2025-07-24 18:42:17 +03:00
Patrik
8f85da8931 fix(plugin-import-export): json preview and downloads preserve nesting and exclude disabled fields (#13210)
### What?

Improves both the JSON preview and export functionality in the
import-export plugin:
- Preserves proper nesting of object and array fields (e.g., groups,
tabs, arrays)
- Excludes any fields explicitly marked as `disabled` via
`custom.plugin-import-export`
- Ensures downloaded files use proper JSON formatting when `format` is
`json` (no CSV-style flattening)

### Why?

Previously:
- The JSON preview flattened all fields to a single level and included
disabled fields.
- Exported files with `format: json` were still CSV-style data encoded
as `.json`, rather than real JSON.

### How?

- Refactored `/preview-data` JSON handling to preserve original document
shape.
- Applied `removeDisabledFields` to clean nested fields using
dot-notation paths.
- Updated `createExport` to skip `flattenObject` for JSON formats, using
a nested JSON filter instead.
- Fixed streaming and buffered export paths to output valid JSON arrays
when `format` is `json`.
2025-07-24 11:36:46 -04:00
Jacob Fletcher
e48427e59a feat(ui): expose refresh method to list drawer context (#13173) 2025-07-24 10:12:45 -04:00
Alessio Gravili
aeee0704dd chore: add new int test verifying that select *improves* performance of new optimization (#13254)
https://github.com/payloadcms/payload/pull/13186 actually made the
select API _more powerful_, as it can reduce the number of db calls even
for complex collections with blocks down to 1.

This PR adds a test that verifies this.
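A minimal sketch of a select query that benefits from this (collection and field names are illustrative):

```ts
// Selecting only top-level columns lets the adapter skip joining
// block/relationship tables, reducing the query to a single db call
const result = await payload.find({
  collection: 'pages',
  select: {
    title: true,
    slug: true,
  },
})
```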

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210871676349303
2025-07-23 16:48:25 -07:00
Jarrod Flesch
29fb9ee5b4 fix(ui): monomorphic relationship fields should not show relationTo option labels (#13245) 2025-07-23 16:31:05 -04:00
Alessio Gravili
94f5e790f6 perf(drizzle): single-roundtrip db updates for simple collections (#13186)
Currently, an optimized DB update (simple data => no
delete-and-create-row) does the following:
1. sql UPDATE
2. sql SELECT

This PR reduces this further to one single DB call for simple
collections:
1. sql UPDATE with RETURNING()

This only works for simple collections that do not have any fields that
need to be fetched from other tables. If a collection has fields like
relationship or blocks, we'll need that separate SELECT call to join in
the other tables.

In 4.0, we can remove all “complex” fields from the jobs collection and
replace them with a JSON field to make use of this optimization.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210803039809814
2025-07-23 01:45:55 -07:00
Jarrod Flesch
412bf4ff73 fix(ui): select all should reset when params change, page, filter, etc (#12612)
Fixes #11938
Fixes https://github.com/payloadcms/payload/issues/13154

When select-all is checked and you filter or change the page, the
selected documents should reset.
2025-07-22 15:23:02 -04:00
Jessica Rynkar
dce898d7ca fix(ui): ensure publishSpecificLocale works during create operation (#13129)
### What?
This PR ensures that when a document is created using the `Publish in
__` button, it is saved to the correct locale.

### Why?
During document creation, the buttons `Publish` or `Publish in [locale]`
have the same effect. As a result, we overlooked the case where a user
may specifically click `Publish in [locale]` for the first save. In this
scenario, the create operation does not respect the
`publishSpecificLocale` value, so the document was always saved in the
default locale regardless of the intended one.

### How?
Passes the `publishSpecificLocale` value to the create operation,
ensuring the document and version are saved to the correct locale.

**Fixes:** #13117
2025-07-21 09:19:51 -04:00
Jacob Fletcher
d7a3faa4e9 fix(ui): properly sync search params to user preferences (#13200)
Some search params within the list view do not properly sync to user
preferences, and visa versa.

For example, when selecting a query preset, the `?preset=123` param is
injected into the URL and saved to preferences, but when reloading the
page without the param, that preset is not reactivated as expected.

### Problem 

The reason this wasn't working before is that omitting this param would
also reset prefs. It was designed this way in order to support
client-side resets, e.g. clicking the query presets "x" button.

This pattern would never work, however, because this means that every
time the user navigates to the list view directly, their preference is
cleared, as no param would exist in the query.

Note: this is not an issue with _all_ params, as not all are handled in
the same way.

### Solution

The fix is to use empty values instead, e.g. `?preset=`. When the server
receives this, it knows to clear the pref. If it doesn't exist at all,
it knows to load from prefs. And if it has a value, it saves to prefs.
On the client, we sanitize those empty values back out so they don't
appear in the URL in the end.

This PR also refactors much of the list query context and its respective
provider to be significantly more predictable and easier to work with,
namely:

- The `ListQuery` type now fully aligns with what Payload APIs expect,
e.g. `page` is a number, not a string
- The provider now receives a single `query` prop which matches the
underlying context 1:1
- Propagating the query from the server to the URL is significantly more
predictable
- Any new props that may be supported in the future will automatically
work
- No more reconciling `columns` and `listPreferences.columns`, it's just
`query.columns`

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210827129744922
2025-07-18 09:29:26 -04:00
Alessio Gravili
c08b2aea89 feat: scheduling jobs (#12863)
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.

Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs

## API Example

```ts
export default buildConfig({
  // ...
  jobs: {
    // ...
    scheduler: 'manual', // Or `cron` if you're not using serverless. If `manual` is used, then the user needs to set up running /api/payload-jobs/handleSchedules or payload.jobs.handleSchedules at regular intervals
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule` would be more error-prone if this was an array
              beforeSchedule: async (args) => {
                // Handles verifying that there are no jobs already scheduled or processing.
                // You can override this behavior by not calling defaultBeforeSchedule, e.g. if you wanted
                // to allow a maximum of 3 scheduled jobs in the queue instead of 1, or add any additional conditions
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Handles updating the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                  (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      }
    ]
  }
})
```

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210495300843759
2025-07-18 06:48:27 -04:00
Patrik
95e373e60b fix(plugin-import-export): disabled flag to cascade to nested fields from parent containers (#13199)
### What?

Fixes the `custom.plugin-import-export.disabled` flag to correctly
disable fields in all nested structures including:
- Groups
- Arrays
- Tabs
- Blocks

Previously, only top-level fields or direct children were respected.
This update ensures nested paths (e.g. `group.array.field1`,
`blocks.hero.title`, etc.) are matched and filtered from exports.
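A hedged sketch of disabling a nested field via the flag described above (collection and field names are illustrative):

```ts
import type { CollectionConfig } from 'payload'

export const Pages: CollectionConfig = {
  slug: 'pages',
  fields: [
    {
      name: 'group',
      type: 'group',
      fields: [
        {
          name: 'internalNotes',
          type: 'text',
          custom: {
            'plugin-import-export': {
              // Excluded from exports, even though it lives inside a group
              disabled: true,
            },
          },
        },
      ],
    },
  ],
}
```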

### Why?

To allow users to disable entire field groups or deeply nested fields in
structured layouts.

### How?

- Updated regex logic in both `createExport` and Preview components to
recursively support:
  - Indexed array fields (e.g. `array_0_field1`)
  - Block fields with slugs (e.g. `blocks_0_hero_title`)
  - Nested field accessors with correct part-by-part expansion
2025-07-17 18:12:58 +00:00
Jarrod Flesch
12539c61d4 feat(ui): supports collection scoped folders (#12797)
As discussed in [this
RFC](https://github.com/payloadcms/payload/discussions/12729), this PR
supports collection-scoped folders. You can scope folders to multiple
collection types or just one.

This unlocks the possibility of having folders per collection instead
of always sharing them across every collection. You can combine this
feature with `browseByFolder: false` to completely isolate a collection
from other collections.

Things left to do:
- [x] ~~Create a custom react component for the selecting of
collectionSlugs to filter out available options based on the current
folders parameters~~


https://github.com/user-attachments/assets/14cb1f09-8d70-4cb9-b1e2-09da89302995


---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210564397815557
2025-07-17 13:24:22 -04:00
Sasha
a20b43624b feat: add findDistinct operation (#13102)
Adds a new `findDistinct` operation that returns the distinct values of a
field for a given collection.

Example: assume you have a `posts` collection with multiple documents,
some of which share the same title:
```js
// Example dataset (some titles appear multiple times)
[
  { title: 'title-1' },
  { title: 'title-2' },
  { title: 'title-1' },
  { title: 'title-3' },
  { title: 'title-2' },
  { title: 'title-4' },
  { title: 'title-5' },
  { title: 'title-6' },
  { title: 'title-7' },
  { title: 'title-8' },
  { title: 'title-9' },
]
```
You can now retrieve all unique title values using findDistinct:
```js
const result = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
})

console.log(result.values)
// Output:
// [
//   'title-1',
//   'title-2',
//   'title-3',
//   'title-4',
//   'title-5',
//   'title-6',
//   'title-7',
//   'title-8',
//   'title-9'
// ]
```
You can also limit the number of distinct results:
```js
const limitedResult = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  sortOrder: 'desc',
  limit: 3,
})

console.log(limitedResult.values)
// Output:
// [
//   'title-9',
//   'title-8',
//   'title-7'
// ]
```

You can also pass a `where` query to filter the documents.
2025-07-16 17:18:14 -04:00
Elliott W
41cff6d436 fix(db-mongodb): improve compatibility with Firestore database (#12763)
### What?

Adds four more arguments to the `mongooseAdapter`:

```typescript
  useJoinAggregations?: boolean  /* The big one */
  useAlternativeDropDatabase?: boolean
  useBigIntForNumberIDs?: boolean
  usePipelineInSortLookup?: boolean
```

Also exports a new `compatabilityOptions` object from
`@payloadcms/db-mongodb` where each key is a mongo-compatible database
and the value is the recommended `mongooseAdapter` settings for
compatibility.

### Why?

When using firestore and visiting
`/admin/collections/media/payload-folders`, we get:

```
MongoServerError: invalid field(s) in lookup: [let, pipeline], only lookup(from, localField, foreignField, as) is supported
```

Firestore doesn't support the full MongoDB aggregation API used by
Payload which gets used when building aggregations for populating join
fields.

There are several other compatibility issues with Firestore:
- The invalid `pipeline` property is used in the `$lookup` aggregation
in `buildSortParams`
- Firestore only supports number IDs of type `Long`, but Mongoose
converts custom ID fields of type number to `Double`
- Firestore does not support the `dropDatabase` command
- Firestore does not support the `createIndex` command (not addressed in
this PR)

### How?

 ```typescript
useJoinAggregations?: boolean  /* The big one */
```
When this is `false` we skip the `buildJoinAggregation()` pipeline and resolve the join fields through multiple queries. This can potentially be used with AWS DocumentDB and Azure Cosmos DB to support join fields, but I have not tested with either of these databases.

 ```typescript
useAlternativeDropDatabase?: boolean
```
When `true`, monkey-patch (replace) the `dropDatabase` function so that
it calls `collection.deleteMany({})` on every collection instead of
sending a single `dropDatabase` command to the database

 ```typescript
useBigIntForNumberIDs?: boolean
```
When `true`, use `mongoose.Schema.Types.BigInt` for custom ID fields of type `number` which converts to a firestore `Long` behind the scenes

```typescript
  usePipelineInSortLookup?: boolean
```
When `false`, modify the sortAggregation pipeline in `buildSortParams()` so that we don't use the `pipeline` property in the `$lookup` aggregation. Results in slightly worse performance when sorting by relationship properties.
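Taken together, a hedged sketch of a Firestore-leaning adapter config using the options above (connection string is illustrative; the exported `compatabilityOptions` presets may be preferable):

```ts
import { mongooseAdapter } from '@payloadcms/db-mongodb'

export const db = mongooseAdapter({
  url: process.env.DATABASE_URI ?? '',
  // Resolve join fields with multiple queries instead of $lookup pipelines
  useJoinAggregations: false,
  // Emulate dropDatabase with deleteMany({}) per collection
  useAlternativeDropDatabase: true,
  // Store custom number IDs as BigInt (Firestore `Long`)
  useBigIntForNumberIDs: true,
  // Avoid the `pipeline` property in the sort $lookup
  usePipelineInSortLookup: false,
})
```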

### Limitations

This PR does not add support for transactions or creating indexes in firestore.

### Fixes

Fixed a bug (and added a test) where you weren't able to sort by multiple properties on a relationship field.

### Future work

1. Firestore supports simple `$lookup` aggregations but other databases might not. Could add a `useSortAggregations` property which can be used to disable aggregations in sorting.

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
2025-07-16 15:17:43 -04:00
Alessio Gravili
7cd682c66a perf(drizzle): further optimize postgres row updates (#13184)
This is a follow-up to https://github.com/payloadcms/payload/pull/13060.

There are a bunch of other db adapter methods that use `upsertRow` for
updates: `updateGlobal`, `updateGlobalVersion`, `updateJobs`,
`updateMany`, `updateVersion`.

The previous PR had the logic for using the optimized row updating logic
inside the `updateOne` adapter. This PR moves that logic to the original
`upsertRow` function. Benefits:
- all the other db methods will benefit from this massive optimization
as well. This will be especially relevant for optimizing postgres job
queue initial updates - we should be able to close
https://github.com/payloadcms/payload/pull/11865 after another follow-up
PR
- easier to read db adapter methods due to less code.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210803039809810
2025-07-16 09:45:02 -07:00
Sasha
841bf891d0 feat: atomic number field updates (#13118)
Based on https://github.com/payloadcms/payload/pull/13060, which should
be merged first.

This PR adds the ability to update number fields atomically, which can be
important with parallel writes. For now we support this only via
`payload.db.updateOne`.

For example:
```js
// increment by 10
const res = await payload.db.updateOne({
  data: {
    number: {
      $inc: 10,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})

// decrement by 3
const res2 = await payload.db.updateOne({
  data: {
    number: {
      $inc: -3,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})
```
2025-07-15 21:53:45 -07:00
Patrik
2a59c5bf8c fix(plugin-import-export): export field dropdown to properly label and path fields in named/unnamed tabs (#13180)
### What?

Fixes the export field selection dropdown to correctly differentiate
between fields in named and unnamed tabs.

### Why?

Previously, when a `tabs` field contained both named and unnamed tabs,
subfields with the same `name` would appear as duplicates in the
dropdown (e.g. `Tab To CSV`, `Tab To CSV`). Additionally, selecting a
field from a named tab would incorrectly map it to the unnamed version
due to shared labels and missing path prefixes.

### How?

- Updated the `reduceFields` utility to manually construct the field
path and label using the tab’s `name` if present.
- Ensured unnamed tabs treat subfields as top-level and skip prefixing
altogether.
- Adjusted label prefix logic to show `Named Tab > Field Name` when
appropriate.


#### Before
<img width="169" height="79" alt="Screenshot 2025-07-15 at 2 55 14 PM"
src="https://github.com/user-attachments/assets/2ab2d19e-41a3-4be2-8496-1da2a79f88e1"
/>

#### After
<img width="211" height="79" alt="Screenshot 2025-07-15 at 2 50 38 PM"
src="https://github.com/user-attachments/assets/0620e96a-71cd-4eb1-9396-30d461ed47a5"
/>
2025-07-15 16:41:07 -04:00
Alessio Gravili
64d76a3869 fix: cron jobs running when calling bin scripts, leading to db errors (#13135)
Previously, we were always initializing cronjobs when calling
`getPayload` or `payload.init`.

This is undesired in bin scripts - we don't want cron jobs to start
triggering db calls while we're running an initial migration using
`payload migrate` for example. This has previously led to a race
condition, triggering the following, occasional error, if job autoruns
were enabled:

```ts
DrizzleQueryError: Failed query: select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5
params: true,false,2025-07-10T21:25:03.002Z,autorunSecond,100
    at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:74:11)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
    ... 6 lines matching cause stack trace ...
    at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
  query: `select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5`,
  params: [ true, false, '2025-07-10T21:25:03.002Z', 'autorunSecond', 100 ],
  cause: error: relation "payload_jobs" does not exist
      at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/pg@8.16.3/node_modules/pg/lib/client.js:545:17
      at processTicksAndRejections (node:internal/process/task_queues:105:5)
      at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:161:13
      at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:72:12)
      at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
      at find (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/find/findMany.ts:162:19)
      at Object.updateMany (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/updateJobs.ts:26:16)
      at updateJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/utilities/updateJob.ts:102:37)
      at runJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/operations/runJobs/index.ts:181:25)
      at Object.run (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/localAPI.ts:137:12)
      at N.fn (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/index.ts:866:13)
      at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
    length: 112,
    severity: 'ERROR',
    code: '42P01',
    detail: undefined,
    hint: undefined,
    position: '406',
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: undefined,
    table: undefined,
    column: undefined,
    dataType: undefined,
    constraint: undefined,
    file: 'parse_relation.c',
    line: '1449',
    routine: 'parserOpenTable'
  }
}
```

This PR makes running crons opt-in using a new `cron` flag. By default,
no cron jobs will be created.
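A hedged sketch, assuming the new `cron` flag is accepted where Payload is initialized for a long-running process (the `@payload-config` import alias is the usual convention):

```ts
import config from '@payload-config'
import { getPayload } from 'payload'

// Long-running server: opt in to cron-based autorun/scheduling
// (assumes the `cron` flag is passed to getPayload as described above)
const payload = await getPayload({ config, cron: true })

// Bin scripts (e.g. `payload migrate`) simply omit `cron`,
// so no cron jobs are created and no early db calls are triggered
```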
2025-07-15 13:24:50 -04:00
Patrik
7294cf561d feat(plugin-import-export): adds support for forcing export format via plugin config (#13160)
### What?

Adds a new `format` option to the `plugin-import-export` config that
allows users to force the export format (`csv` or `json`) and hide the
format dropdown from the export UI.

### Why?

In some use cases, allowing the user to select between CSV and JSON is
unnecessary or undesirable. This new option allows plugin consumers to
lock the format and simplify the export interface.

### How?

- Added a `format?: 'csv' | 'json'` field to `ImportExportPluginConfig`.
- When defined, the `format` field in the export UI is:
  - Hidden via `admin.condition`
  - Pre-filled via `defaultValue`
- Updated `getFields` to accept the plugin config and apply logic
accordingly.

### Example

```ts
importExportPlugin({
  format: 'json',
})
```
2025-07-14 16:19:52 -04:00