### What?
Prevents the decrypted `apiKey` from being saved back to the database during the auth refresh operation.
### Why?
References issue #13063: refreshing a token for a logged-in user
decrypted `apiKey` and wrote it back in plaintext, corrupting the user
record.
### How?
The user is now fetched with `db.findOne` instead of `findByID`,
preserving the encryption of the key when saved back to the database
using `db.updateOne`. The user record is then re-fetched using
`findByID`, allowing for the decrypted key to be provided in the
response.
### Tests
* ✅ keeps apiKey encrypted in DB after refresh
* ✅ returns user with decrypted apiKey after refresh
Fixes #13063
### What
- Filters cookies with the `payload-` prefix in `getExternalFile` by default (when `externalFileHeaderFilter` is not used).
- Documents in `externalFileHeaderFilter` that the user should handle removing the Payload cookie.
### Why
In the Payload application, the `getExternalFile` function sends the
user's cookies to an external server when fetching media, inadvertently
exposing the user's session to that third-party service.
```ts
const headers = uploadConfig.externalFileHeaderFilter
  ? uploadConfig.externalFileHeaderFilter(Object.fromEntries(new Headers(req.headers)))
  : { cookie: req.headers?.get('cookie') };

const res = await fetch(fileURL, {
  credentials: 'include',
  headers,
  method: 'GET',
});
```
Although the
[externalFileHeaderFilter](https://payloadcms.com/docs/upload/overview#collection-upload-options)
function can strip sensitive cookies from the request, the default
config includes the session cookie, violating the secure-by-default
principle.
### How
- If `externalFileHeaderFilter` is not defined, any cookie beginning with `payload-` is filtered (see the sketch below).
- Added two tests: one for the case where `externalFileHeaderFilter` is defined and one for the case where it is not.
Catches list filter errors and prevents the list view from crashing when attempting to search on fields the user does not have access to. Instead, it simply shows the default "no results found" message.
Custom document tab components (server components) do not receive the `user` prop, despite what the types suggest. This makes it difficult to wire up conditional rendering based on the user, because tab conditions don't receive a user argument either, forcing you to render the default tab component yourself, even though a custom component should not be needed for this in the first place.
Now they both receive `req` alongside `user`, which is more closely
aligned with custom field components.
### What?
- Fixed an issue where group-by enabled collections with `trash: true`
were not showing trashed documents in the collection’s trash view.
- Ensured that the `trash` query argument is properly passed to the
`findDistinct` call within `handleGroupBy`, allowing trashed documents
to be included in grouped list views.
### Why?
Previously, when viewing the trash view of a collection with both
**group-by** and **trash** enabled, trashed documents would not appear.
This was caused by the `trash` argument not being forwarded to
`findDistinct` in `handleGroupBy`, which resulted in empty or incorrect
group-by results.
### How?
- Passed the `trash` flag through all relevant `findDistinct` and `find`
calls in `handleGroupBy`.
Previously, filtering by a polymorphic relationship inside an array, group (unless its `name` is `version`), or tab caused `QueryError: The following path cannot be queried:`.
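For illustration, a query of roughly this shape used to throw (the `posts` slug and field names here are hypothetical):
```ts
// `items` is an array field containing a polymorphic relationship named `rel`.
// Filtering on its `value` path previously raised the QueryError above.
const results = await payload.find({
  collection: 'posts',
  where: {
    'items.rel.value': { equals: docID },
  },
})
```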
### What?
This PR introduces complete trash (soft-delete) support. When a
collection is configured with `trash: true`, documents can now be
soft-deleted and restored via both the API and the admin panel.
```ts
import type { CollectionConfig } from 'payload'

const Posts: CollectionConfig = {
  slug: 'posts',
  trash: true, // <-- New collection config prop @default false
  fields: [
    {
      name: 'title',
      type: 'text',
    },
    // other fields...
  ],
}
```
### Why?
Soft deletes allow developers and admins to safely remove documents
without losing data immediately. This enables workflows like reversible
deletions, trash views, and auditing—while preserving compatibility with
drafts, autosave, and version history.
### How?
#### Backend
- Adds a new `trash: true` config option to collections.
- When enabled:
  - A `deletedAt` timestamp is conditionally injected into the schema.
  - Soft deletion is performed by setting `deletedAt` instead of removing the document from the database.
- Extends all relevant API operations (`find`, `findByID`, `update`, `delete`, `versions`, etc.) to support a new `trash` param:
  - `trash: false` → excludes trashed documents (default)
  - `trash: true` → includes both trashed and non-trashed documents
  - To query **only trashed** documents, use `trash: true` with a `where` clause like `{ deletedAt: { exists: true } }` (see the sketch after this list).
- Enforces delete access control before allowing a soft delete via update or updateByID.
- Disables version restoring on trashed documents (they must be restored first).
#### Admin Panel
- Adds a dedicated **Trash view**: `/collections/:collectionSlug/trash`
- Default delete action now soft-deletes documents when `trash: true` is
set.
- **Delete confirmation modal** includes a checkbox to permanently
delete instead.
- Trashed documents:
  - Display a UI banner to clearly distinguish the trashed document edit view from the non-trashed edit view
  - Render in a read-only edit view
  - Still allow access to **Preview**, **API**, and **Versions** tabs
- Updated Status component:
  - Displays “Previously published” or “Previously a draft” for trashed documents.
  - Disables status-changing actions when documents are in trash.
- Adds new **Restore** bulk action to clear the `deletedAt` timestamp.
- New `Restore` and `Permanently Delete` buttons for
single-trashed-document restore and permanent deletion.
- **Restore confirmation modal** includes a checkbox to restore as `published`, defaulting to `draft`.
- Adds **Empty Trash** and **Delete permanently** bulk actions.
#### Notes
- This feature is completely opt-in. Collections without `trash: true` behave exactly as before.
https://github.com/user-attachments/assets/00b83f8a-0442-441e-a89e-d5dc1f49dd37
## Problem:
In PR #11887, a bug fix for `copyToLocale` was introduced to address
issues with copying content between locales in Postgres. However, an
incorrect algorithm was used, which removed all "id" properties from
documents being copied. This led to bug #12536, where `copyToLocale`
would mistakenly delete the document in the source language, affecting
not only Postgres but any database.
## Cause and Solution:
When copying documents with localized arrays or blocks, Postgres throws
errors if there are two blocks with the same ID. This is why PR #11887
removed all IDs from the document to avoid conflicts. However, this
removal was too broad and caused issues in cases where it was
unnecessary.
The correct solution should remove the IDs only in nested fields whose
ancestors are localized. The reasoning is as follows:
- When an array/block is **not localized** (`localized: false`), if it
contains localized fields, these fields share the same ID across
different locales.
- When an array/block **is localized** (`localized: true`), its
descendant fields cannot share the same ID across different locales if
Postgres is being used. This wouldn't be an issue if the table
containing localized blocks had a composite primary key of `locale +
id`. However, since the primary key is just `id`, we need to assign a
new ID for these fields.
This PR properly removes IDs **only for nested fields** whose ancestors
are localized.
Fixes #12536
## Example:
### Before Fix:
```js
// Original document (en)
array: [{
  id: "123",
  text: { en: "English text" }
}]

// After copying to 'es' locale, a new ID was created instead of updating the existing item
array: [{
  id: "456", // 🐛 New ID created!
  text: { es: "Spanish text" } // 🐛 'en' locale is missing
}]
```
### After fix:
```js
// After fix
array: [{
  id: "123", // ✅ Same ID maintained
  text: {
    en: "English text",
    es: "Spanish text" // ✅ Properly merged with existing item
  }
}]
```
## Additional fixes:
### TraverseFields
In the process of designing an appropriate solution, I detected a couple
of bugs in traverseFields that are also addressed in this PR.
### Fixed MongoDB Empty Array Handling
During testing, I discovered that MongoDB and PostgreSQL behave
differently when querying documents that don't exist in a specific
locale:
- PostgreSQL: Returns the document with data from the fallback locale
- MongoDB: Returns the document with empty arrays for localized fields
This difference caused `copyToLocale` to fail in MongoDB because the
merge algorithm only checked for `null` or `undefined` values, but not
empty arrays. When MongoDB returned `content: []` for a non-existent
locale, the algorithm would attempt to iterate over the empty array
instead of using the source locale's data.
### Moved e2e test to int
The test introduced in #11887 didn't catch the bug because our e2e suite
doesn't run on Postgres. I migrated the test to an integration test that
does run on Postgres and MongoDB.
Supports grouping documents by specific fields within the list view.
For example, imagine having a "posts" collection with a "categories"
field. To report on each specific category, you'd traditionally filter
for each category, one at a time. This can be quite inefficient,
especially with large datasets.
Now, you can interact with all categories simultaneously, grouped by
distinct values.
Here is a simple demonstration:
https://github.com/user-attachments/assets/0dcd19d2-e983-47e6-9ea2-cfdd2424d8b5
Enable on any collection by setting the `admin.groupBy` property:
```ts
import type { CollectionConfig } from 'payload'

const MyCollection: CollectionConfig = {
  // ...
  admin: {
    groupBy: true,
  },
}
```
This is currently marked as beta to gather feedback while we reach full
stability, and to leave room for API changes and other modifications.
Use at your own risk.
Note: when using `groupBy`, bulk editing is done group-by-group. In the
future we may support cross-group bulk editing.
Dependent on #13102 (merged).
---------
Co-authored-by: Paul Popus <paul@payloadcms.com>
Fixes #13113
### How?
Does not rely on JS falsiness; instead, it explicitly checks for `null` and `undefined`.
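Roughly, the check changes like this (a sketch, not the literal diff):
```ts
// Before (sketch): falsy-but-valid values like 0, '' and false read as empty.
// const isEmpty = (value: unknown): boolean => !value

// After (sketch): only null and undefined count as empty.
const isEmpty = (value: unknown): boolean => value === null || value === undefined
```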
I'm not actually certain this is the approach we want to take. Some people might interpret "required" as not-null, not-undefined, and, in the case of strings, min length > 1. If they do, this change to the behavior in the not-required case will break their expectations.
### What?
This PR ensures that when a document is created using the `Publish in
__` button, it is saved to the correct locale.
### Why?
During document creation, the buttons `Publish` or `Publish in [locale]`
have the same effect. As a result, we overlooked the case where a user
may specifically click `Publish in [locale]` for the first save. In this scenario, the create operation did not respect the `publishSpecificLocale` value, so the document was always saved in the default locale regardless of the intended one.
### How?
Passes the `publishSpecificLocale` value to the create operation, ensuring the document and version are saved to the correct locale.
**Fixes:** #13117
Some search params within the list view do not properly sync to user preferences, and vice versa.
For example, when selecting a query preset, the `?preset=123` param is
injected into the URL and saved to preferences, but when reloading the
page without the param, that preset is not reactivated as expected.
### Problem
The reason this wasn't working before is that omitting this param would
also reset prefs. It was designed this way in order to support
client-side resets, e.g. clicking the query presets "x" button.
This pattern would never work, however, because it means that every time the user navigates directly to the list view, their preference is cleared, as no param exists in the query.
Note: this is not an issue with _all_ params, as not all are handled in
the same way.
### Solution
The fix is to use empty values instead, e.g. `?preset=`. When the server
receives this, it knows to clear the pref. If it doesn't exist at all,
it knows to load from prefs. And if it has a value, it saves to prefs.
On the client, we sanitize those empty values back out so they don't
appear in the URL in the end.
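A sketch of that three-way decision on the server (names are illustrative):
```ts
// Sketch: resolve the effective preset from the incoming search params.
function resolvePreset(searchParams: URLSearchParams, savedPreset?: string): string | undefined {
  const preset = searchParams.get('preset')
  if (preset === null) return savedPreset // param absent: load from prefs
  if (preset === '') return undefined // `?preset=` present but empty: clear the pref
  return preset // has a value: use it and save it to prefs
}
```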
This PR also refactors much of the list query context and its respective
provider to be significantly more predictable and easier to work with,
namely:
- The `ListQuery` type now fully aligns with what Payload APIs expect,
e.g. `page` is a number, not a string
- The provider now receives a single `query` prop which matches the
underlying context 1:1
- Propagating the query from the server to the URL is significantly more
predictable
- Any new props that may be supported in the future will automatically
work
- No more reconciling `columns` and `listPreferences.columns`; it's just `query.columns`
### What?
Fixes a bug where `afterChange` hooks would attempt to access values for
fields that are `read: false` but `create: true`, resulting in
`undefined` values and unexpected behavior.
### Why?
In scenarios where access control allows field creation (`create: true`)
but disallows reading it (`read: false`), hooks like `afterChange` would
still attempt to operate on `undefined` values from `siblingDoc` or
`previousDoc`, potentially causing errors or skipped logic.
### How?
Adds safe optional chaining and fallback object initialization in
`promise.ts` for:
- `previousDoc[field.name]`
- `siblingDoc[field.name]`
- Group, Array, and Block field traversals
This ensures that these values are treated as empty objects or arrays
where appropriate to prevent runtime errors during traversal or hook
execution.
Fixes https://github.com/payloadcms/payload/issues/12660
---------
Co-authored-by: Niall Bambury <niall.bambury@cuckoo.co>
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.
Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs
## API Example
```ts
export default buildConfig({
  // ...
  jobs: {
    // ...
    scheduler: 'manual', // Or `cron` if you're not using serverless. If `manual` is used, the user needs to set up running /api/payload-jobs/handleSchedules or payload.jobs.handleSchedules at regular intervals
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule` would be more error-prone if this was an array
              beforeSchedule: async (args) => {
                // Handles verifying that there are no jobs already scheduled or processing.
                // You can override this behavior by not calling defaultBeforeSchedule, e.g. if you wanted
                // to allow a maximum of 3 scheduled jobs in the queue instead of 1, or add any additional conditions
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Handles updating the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                    (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      },
    ],
  },
})
```
### What?
Exit the process after running jobs.
### Why?
When running the `payload jobs:run` bin script with a postgres database
the process hangs forever.
### How?
Execute `process.exit(0)` after running all jobs.
As discussed in [this
RFC](https://github.com/payloadcms/payload/discussions/12729), this PR
supports collection-scoped folders. You can scope folders to multiple
collection types or just one.
This unlocks the possibility of having folders per collection instead of always sharing them across every collection. You can combine this feature with `browseByFolder: false` to completely isolate a collection from other collections.
Things left to do:
- [x] ~~Create a custom react component for the selecting of
collectionSlugs to filter out available options based on the current
folders parameters~~
https://github.com/user-attachments/assets/14cb1f09-8d70-4cb9-b1e2-09da89302995
Adds a new `findDistinct` operation that returns the distinct values of a field for a given collection.
Example:
Assume you have a `posts` collection with multiple documents, some of which share the same title:
```js
// Example dataset (some titles appear multiple times)
[
  { title: 'title-1' },
  { title: 'title-2' },
  { title: 'title-1' },
  { title: 'title-3' },
  { title: 'title-2' },
  { title: 'title-4' },
  { title: 'title-5' },
  { title: 'title-6' },
  { title: 'title-7' },
  { title: 'title-8' },
  { title: 'title-9' },
]
```
You can now retrieve all unique title values using `findDistinct`:
```js
const result = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
})

console.log(result.values)
// Output:
// [
//   'title-1',
//   'title-2',
//   'title-3',
//   'title-4',
//   'title-5',
//   'title-6',
//   'title-7',
//   'title-8',
//   'title-9'
// ]
```
You can also limit the number of distinct results:
```js
const limitedResult = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  sortOrder: 'desc',
  limit: 3,
})

console.log(limitedResult.values)
// Output (descending order):
// [
//   'title-9',
//   'title-8',
//   'title-7'
// ]
```
You can also pass a `where` query to filter the documents.
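For instance (a sketch; the filter shown is arbitrary):
```js
// Sketch: restrict the distinct values to documents matching a where clause.
const filtered = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  where: {
    title: { contains: 'title-1' },
  },
})
```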
Based on https://github.com/payloadcms/payload/pull/13060, which should be merged first.
This PR adds the ability to update number fields atomically, which can be important with parallel writes. For now, we support this only via `payload.db.updateOne`.
For example:
```js
// increment by 10
const res = await payload.db.updateOne({
  data: {
    number: {
      $inc: 10,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})

// decrement by 3
const res2 = await payload.db.updateOne({
  data: {
    number: {
      $inc: -3,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})
```
Previously, we were always initializing cron jobs when calling `getPayload` or `payload.init`.
This is undesired in bin scripts: we don't want cron jobs to start triggering db calls while we're running an initial migration using `payload migrate`, for example. This previously led to a race condition, triggering the following occasional error if job autoruns were enabled:
```ts
DrizzleQueryError: Failed query: select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5
params: true,false,2025-07-10T21:25:03.002Z,autorunSecond,100
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:74:11)
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
... 6 lines matching cause stack trace ...
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
query: `select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5`,
params: [ true, false, '2025-07-10T21:25:03.002Z', 'autorunSecond', 100 ],
cause: error: relation "payload_jobs" does not exist
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/pg@8.16.3/node_modules/pg/lib/client.js:545:17
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:161:13
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:72:12)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
at find (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/find/findMany.ts:162:19)
at Object.updateMany (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/updateJobs.ts:26:16)
at updateJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/utilities/updateJob.ts:102:37)
at runJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/operations/runJobs/index.ts:181:25)
at Object.run (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/localAPI.ts:137:12)
at N.fn (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/index.ts:866:13)
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
length: 112,
severity: 'ERROR',
code: '42P01',
detail: undefined,
hint: undefined,
position: '406',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_relation.c',
line: '1449',
routine: 'parserOpenTable'
}
}
```
This PR makes running crons opt-in using a new `cron` flag. By default,
no cron jobs will be created.
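Opting in would then look roughly like this (a sketch; assuming the new `cron` flag is accepted by `getPayload`, and the config import path matches your project):
```ts
import { getPayload } from 'payload'

import config from './payload.config' // assumed path

// Sketch: crons only start when explicitly requested.
const payload = await getPayload({ config, cron: true })
```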
Fixes #7799
Fixes a type issue where all fields in `RenderFields['fields']` admin properties were being marked as required, since we were using `Pick`. Adds a helper type that extracts properties with correct optionality.
### What
Introduces an additional `mimeType` validation based on the actual file
data to ensure the uploaded file matches the allowed `mimeTypes` defined
in the upload config.
### Why?
The current validation relies on the file extension, which can be easily
manipulated. For example, if only PDFs are allowed, a JPEG renamed to
`image.pdf` would bypass the check and be accepted. This change prevents
such cases by verifying the true MIME type.
### How?
Performs a secondary validation using the file’s binary data (buffer),
providing a more reliable MIME type check.
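One common way to implement a buffer-based check is with the `file-type` package; here is a sketch of the technique (the package choice is an assumption, not necessarily what this PR uses):
```ts
import { fileTypeFromBuffer } from 'file-type'

// Sketch: derive the MIME type from the file's bytes rather than its extension.
async function matchesAllowedMimeTypes(buffer: Uint8Array, allowed: string[]): Promise<boolean> {
  const detected = await fileTypeFromBuffer(buffer)
  return detected !== undefined && allowed.includes(detected.mime)
}
```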
Fixes #12905
While we can use `joins`, `select`, `populate`, `depth` or `draft` on auth collections when finding or finding by ID, these arguments weren't supported for `/me`, which meant that in some situations, like in our ecommerce template, we couldn't optimise these calls.
A workaround would be to make a call to `/me` and then use the returned user ID for a `findByID` operation.
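With this change, a single request can do it (a sketch; the params follow the same conventions as the find operations):
```ts
// Sketch: pass find-style args straight to /me instead of a follow-up findByID.
const res = await fetch('/api/users/me?depth=0&draft=false', {
  credentials: 'include',
})
const { user } = await res.json()
```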
The login operation with sessions enabled calls `updateOne`. In MongoDB, data that does not match the schema is removed; `collection` and `_strategy` are not part of the schema, so they need to be reassigned after the user is updated.
Adds an int test.
This PR makes it so that `modifyResponseHeaders` is supported in our
adapters when set on the collection config. Previously it would be
ignored.
This means that users can now modify or append new headers to what's
returned by each service.
```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    modifyResponseHeaders: ({ headers }) => {
      const newHeaders = new Headers(headers) // Copy existing headers
      newHeaders.set('X-Frame-Options', 'DENY') // Set new header
      return newHeaders
    },
  },
}
```
Also adds support for a `void` return from the `modifyResponseHeaders` function, for the case where the user just wants to mutate the existing headers and doesn't need more control.
E.g.:
```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    modifyResponseHeaders: ({ headers }) => {
      headers.set('X-Frame-Options', 'DENY') // You can directly set headers without returning
    },
  },
}
```
Manual testing checklist (no CI e2e setup for these envs yet):
- [x] GCS
- [x] S3
- [x] Azure
- [x] UploadThing
- [x] Vercel Blob
---------
Co-authored-by: James <james@trbl.design>
Previously, we were performing this check before calling the fetch function. This changes it to perform the check within the dispatcher.
It also adjusts the int tests to trigger both the dispatcher lookup function (which only runs when a valid IP isn't already passed) and the check before calling fetch.
Fixes #12834
`loginAttempts` was being shown in the admin panel when it should be hidden. The field is set to `hidden: true`, so its value is removed from `siblingData` and passes the `allowDefaultValue` check, showing inconsistent data.
This PR ensures the default value is not returned when the field has a value that was removed due to the field being hidden.
Login attempts were not being reset correctly, which led to situations where a failed login attempt followed by a successful login would keep `loginAttempts` at 1.
### Before
Example with maxAttempts of 2:
- failed login -> `loginAttempts: 1`
- successful login -> `loginAttempts: 1`
- failed login -> `loginAttempts: 2`
- successful login -> `"This user is locked due to having too many
failed login attempts."`
### After
Example with maxAttempts of 2:
- failed login -> `loginAttempts: 1`
- successful login -> `loginAttempts: 0`
- failed login -> `loginAttempts: 1`
- successful login -> `loginAttempts: 0`
### What?
Adds a new `sanitizeUserDataForEmail` function, exported from
`payload/shared`.
This function sanitizes user data passed to email templates to prevent
injection of HTML, executable code, or other malicious content.
### Why?
In the existing `email` example, we directly insert `user.name` into the
generated email content. Similarly, the `newsletter` collection uses
`doc.name` directly in the email content. A security report identified
this as a potential vulnerability that could be exploited and used to
inject executable or malicious code.
Although this issue does not originate from Payload core, developers
using our examples may unknowingly introduce this vulnerability into
their own codebases.
### How?
Introduces the pre-built `sanitizeUserDataForEmail` function and updates
relevant email examples to use it.
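Usage would look something like this (a sketch; assuming the function takes a raw string and returns a sanitized one):
```ts
import { sanitizeUserDataForEmail } from 'payload/shared'

// Sketch: sanitize user-controlled values before interpolating them into email HTML.
const generateEmailHTML = (name: string): string =>
  `<p>Hi ${sanitizeUserDataForEmail(name)}, welcome!</p>`
```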
**Fixes `CMS2-1225-14`**
When adding a custom ID field to an array's config, both the default field provided by Payload and the custom ID field exist in the resulting config. This can lead to problems when looking up the field's config, where either one or the other may be returned.
Fixes #12978
Adds `restrictedFileTypes` (default: `false`) to upload collections
which prevents files on a restricted list from being uploaded.
To skip this check:
- set `[Collection].upload.restrictedFileTypes` to `true`
- set `[Collection].upload.mimeType` to any type(s)
This adds a new `analyze` step to our CI that analyzes the bundle size
for our `payload`, `@payloadcms/ui`, `@payloadcms/next` and
`@payloadcms/richtext-lexical` packages.
It does so using a new `build:bundle-for-analysis` script that packages
can add if the normal build step does not output an esbuild-bundled
version suitable for analyzing. For example, `ui` already runs esbuild,
but we run it again using `build:bundle-for-analysis` because we do not
want to split the bundle.
Adds:
```ts
import { lookup } from 'dns/promises'
// ...
const { address } = await lookup(hostname)
// ...
return isSafeIp(address)
```
This ensures that an IP address is what gets verified. Previously, hostnames were being passed to `isSafeIp`.
Fixes: https://github.com/payloadcms/payload/issues/12876
This fixes the payload bundle script. While not run by default, it's
useful for checking the payload bundle size by manually running `cd
packages/payload && node bundle.js`.
This PR consists of two separate changes. One change cannot pass CI
without the other, so both are included in this single PR.
## CI - ensure types are generated
Our website template is currently failing to build due to a type error.
This error was introduced by a change in our generated types.
Our CI did not catch this issue because it wasn't generating types /
import map before attempting to build the templates. This PR updates the
CI to generate types first.
It also updates some CI step names for improved clarity.
## Fix: type error
This fixes the type error by ensuring we consistently use the _same_
generated `TypedUser` object within payload, instead of `BaseUser`.
Previously, we sometimes used the generated-types user and sometimes the
base user, which was causing type conflicts depending on what the
generated user type was.
It also deprecates the `User` type (which was essentially just `BaseUser`), as consumers should use `TypedUser` instead. `TypedUser` automatically falls back to `BaseUser` if no generated types exist, but will accept a generated-types User if one is provided.
Without this change, additional properties added to the user via
generated-types may cause the user object to not be accepted by
functions that only accept a `User` instead of a `TypedUser`, which is
what failed here.
## Templates: re-generate templates to update generated types
### What?
This PR addresses an issue where the order of operations/conditions for throwing an unverified email error was incorrect.
To properly throw an unverified email error under the correct
conditions.
### How?
Pushing this error to be thrown later in the operation.
Mounts live preview at `../:id` instead of `../:id/preview`.
This is a huge win from both a UX and a maintainability standpoint.
Here are just a few of those wins:
1. If you edit a document, _then_ decide you want to preview those changes, you are currently presented with the `LeaveWithoutSaving` modal and are forced to either save your edits or clear them. This is because you are being navigated to an entirely new page with its own form context. Instead, you should be able to freely navigate back and forth between the two.
2. If you are an editor who most often uses Live Preview, or you are
editing a collection that typically requires it, you likely want it to
automatically enter live preview mode when you open a document.
Currently, the user has to navigate to the document _first_, then use
the live preview tab. Instead, you should be able to set a preference
and avoid this extra step.
3. Since the inception of Live Preview, we've been maintaining largely the same code across the default edit view and the live preview view, which often became out of sync and inconsistent, even though they're essentially doing the same thing. While we could abstract much of this out, that is no longer necessary if the two views are combined into one.
This change also includes some small modifications to the UI. The "Live Preview" tab no longer exists; it has been replaced with a button placed next to the document controls (subject to change).
Before:
https://github.com/user-attachments/assets/48518b02-87ba-4750-ba7b-b21b5c75240a
After:
https://github.com/user-attachments/assets/a8ec8657-a6d6-4ee1-b9a7-3c1173bcfa96