### What?
Brackets (`[ ]`) in option values end up in GraphQL enum names via
`formatName`, causing schema generation to fail. This PR adds a single
rule to `formatName`:
- replace `[` and `]` with `_`
### Why?
Using `_` (instead of removing the brackets) is safer and more
consistent:
- **Avoid collisions**: removal can merge distinct strings (`"A[B]"` → `"AB"`); `_` keeps them distinct (`"A_B"`).
- **Consistency**: `formatName` already maps punctuation to `_` (`. - / + , ( ) '`); brackets follow the same rule.
- **Readability**: `mb-[150px]` → `mb__150px_` is clearer than `mb150px`.
- **Digits/units safety**: removal can jam characters together (`w-[2/3]` → `w23`); `_` avoids that (`w_2_3_`).
### How?
Update formatName to include a bracket replacement step:
```ts
.replace(/\[|\]/g, '_')
```
No other call sites or value semantics change; only names containing
brackets are affected.
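For illustration, a minimal sketch of the bracket step in isolation (the surrounding `formatName` chain is omitted; its existing punctuation rules still apply on top):
```ts
// Hedged illustration of the new rule only; formatName's existing
// punctuation replacements (e.g. `-` -> `_`) still run alongside it.
const replaceBrackets = (name: string): string => name.replace(/\[|\]/g, '_')

replaceBrackets('mb-[150px]') // 'mb-_150px_' before the other rules apply
replaceBrackets('A[B]') // 'A_B_' -- remains distinct from 'AB'
```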
Fixes #13466
### What?
- Updated the `countOperation` to respect the `trash` argument.
### Why?
- Previously, `count` would incorrectly include trashed documents even
when `trash` was not specified.
- This change aligns `count` behavior with `find` and other operations,
providing accurate counts for normal and trashed documents.
### How?
- Applied `appendNonTrashedFilter` in `countOperation` to automatically
exclude soft-deleted docs when `trash: false` (default).
- Added `trash` argument support in Local API, REST API (`/count`
endpoints), and GraphQL (`count<Collection>` queries).
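A hedged usage sketch of the Local API behavior described above (the collection slug is illustrative):
```ts
import type { Payload } from 'payload'

export const countPosts = async (payload: Payload) => {
  // Default: trashed (soft-deleted) documents are excluded from the count.
  const active = await payload.count({ collection: 'posts' })

  // With `trash: true`, trashed and non-trashed documents are both counted.
  const all = await payload.count({ collection: 'posts', trash: true })

  return { active: active.totalDocs, all: all.totalDocs }
}
```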
### What?
This PR introduces complete trash (soft-delete) support. When a
collection is configured with `trash: true`, documents can now be
soft-deleted and restored via both the API and the admin panel.
```ts
import type { CollectionConfig } from 'payload'

const Posts: CollectionConfig = {
  slug: 'posts',
  trash: true, // <-- New collection config prop @default false
  fields: [
    {
      name: 'title',
      type: 'text',
    },
    // other fields...
  ],
}
```
### Why?
Soft deletes allow developers and admins to safely remove documents
without losing data immediately. This enables workflows like reversible
deletions, trash views, and auditing—while preserving compatibility with
drafts, autosave, and version history.
### How?
#### Backend
- Adds a new `trash: true` config option to collections.
- When enabled:
  - A `deletedAt` timestamp is conditionally injected into the schema.
  - Soft deletion is performed by setting `deletedAt` instead of removing the document from the database.
- Extends all relevant API operations (`find`, `findByID`, `update`, `delete`, `versions`, etc.) to support a new `trash` param:
  - `trash: false` → excludes trashed documents (default)
  - `trash: true` → includes both trashed and non-trashed documents
  - To query **only trashed** documents, use `trash: true` with a `where` clause like `{ deletedAt: { exists: true } }` (see the sketch after this list)
- Enforces delete access control before allowing a soft delete via `update` or `updateByID`.
- Disables version restoring on trashed documents (they must be restored first).
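A hedged sketch of the query patterns listed above, using the Local API (the collection slug is illustrative):
```ts
import type { Payload } from 'payload'

export const listPosts = async (payload: Payload) => {
  // Default (`trash: false`): trashed documents are excluded.
  const active = await payload.find({ collection: 'posts' })

  // `trash: true`: trashed and non-trashed documents are both returned.
  const all = await payload.find({ collection: 'posts', trash: true })

  // Only trashed documents: combine `trash: true` with a `deletedAt` filter.
  const trashedOnly = await payload.find({
    collection: 'posts',
    trash: true,
    where: { deletedAt: { exists: true } },
  })

  return { active, all, trashedOnly }
}
```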
#### Admin Panel
- Adds a dedicated **Trash view**: `/collections/:collectionSlug/trash`
- Default delete action now soft-deletes documents when `trash: true` is
set.
- **Delete confirmation modal** includes a checkbox to permanently
delete instead.
- Trashed documents:
  - Display a UI banner in the edit view to clearly distinguish trashed documents from non-trashed ones
  - Render in a read-only edit view
  - Still allow access to the **Preview**, **API**, and **Versions** tabs
- Updated Status component:
  - Displays “Previously published” or “Previously a draft” for trashed documents.
  - Disables status-changing actions when documents are in trash.
- Adds new **Restore** bulk action to clear the `deletedAt` timestamp.
- New `Restore` and `Permanently Delete` buttons for
single-trashed-document restore and permanent deletion.
- **Restore confirmation modal** includes a checkbox to restore as
`published`, defaults to `draft`.
- Adds **Empty Trash** and **Delete permanently** bulk actions.
#### Notes
- This feature is completely opt-in. Collections without `trash: true` behave exactly as before.
https://github.com/user-attachments/assets/00b83f8a-0442-441e-a89e-d5dc1f49dd37
Adds full session functionality to Payload's existing local authentication strategy.
It's enabled by default because this is a more secure pattern that we should enforce. However, we have provided an opt-out for those who want to stick with stateless JWT authentication by passing `collectionConfig.auth.useSessions: false`.
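A minimal sketch of the opt-out, assuming a typical auth-enabled collection (only the `auth` property matters here; the rest is illustrative):
```ts
import type { CollectionConfig } from 'payload'

export const Users: CollectionConfig = {
  slug: 'users',
  auth: {
    // Opt out of sessions and keep stateless JWT authentication.
    useSessions: false,
  },
  fields: [],
}
```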
Todo:
- [x] @jessrynkar to update the Next.js server functions for refresh and
logout to support these new features
- [x] @jessrynkar resolve build errors
---------
Co-authored-by: Elliot DeNolf <denolfe@gmail.com>
Co-authored-by: Jessica Chowdhury <jessica@trbl.design>
Co-authored-by: Jarrod Flesch <30633324+JarrodMFlesch@users.noreply.github.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
Currently, we globally enable both DOM and Node.js types. While this
mostly works, it can cause conflicts - particularly with `fetch`. For
example, TypeScript may incorrectly allow browser-only properties (like
`cache`) and reject valid Node.js ones like `dispatcher`.
This PR disables DOM types for server-only packages like payload,
ensuring Node-specific typings are applied. This caught a few instances
of incorrect fetch usage that were previously masked by overlapping DOM
types.
This is not a perfect solution - packages that contain both server and
client code (like richtext-lexical or next) will still suffer from this
issue. However, it's an improvement in cases where we can cleanly
separate server and client types, like for the `payload` package which
is server-only.
## Use-case
This change enables https://github.com/payloadcms/payload/pull/12622 to
explore using node-native fetch + `dispatcher`, instead of `node-fetch`
+ `agent`.
Currently, TypeScript will incorrectly report that `dispatcher` is not a valid property for node-native fetch.
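An illustrative sketch of the mismatch (not code from this PR):
```ts
// Node's native fetch is implemented by undici, whose RequestInit accepts a
// `dispatcher`. lib.dom's RequestInit does not, but it accepts browser-only
// options such as `cache` that Node ignores at runtime.
import { Agent } from 'undici'

export const fetchWithAgent = async (url: string) => {
  return fetch(url, {
    // Valid at runtime in Node; flagged by TypeScript when DOM's RequestInit
    // shadows the Node/undici typings.
    dispatcher: new Agent(),
  })
}
```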
This PR introduces a few changes to improve turbopack compatibility and ensure e2e tests pass with turbopack enabled.
## Changes to improve turbopack compatibility
- Use correct sideEffects configuration to fix scss issues
- Import scss directly instead of duplicating our scss rules
- Fix some scss rules that are not supported by turbopack
- Bump Next.js and all other dependencies used to build payload
## Changes to get tests to pass
For an unknown reason, flaky tests flake a lot more often in turbopack.
This PR does the following to get them to pass:
- add more `wait`s
- fix actual flakes by ensuring previous operations are properly awaited
## Blocking turbopack bugs
- [X] https://github.com/vercel/next.js/issues/76464
- Fix PR: https://github.com/vercel/next.js/pull/76545
- Once fixed: change `"sideEffectsDisabled":` back to `"sideEffects":`
## Non-blocking turbopack bugs
- [ ] https://github.com/vercel/next.js/issues/76956
## Related PRs
https://github.com/payloadcms/payload/pull/12653
https://github.com/payloadcms/payload/pull/12652
I think it's easier to review this PR commit by commit, so I'll explain
it this way:
## Commits
1. [parallelize eslint script (still showing logs results in
serial)](c9ac49c12d):
Previously, `--concurrency 1` was added to the script to make the logs
more readable. However, turborepo has an option specifically for these
use cases: `--log-order=grouped` runs the tasks in parallel but outputs
them serially. As a result, the lint script is now significantly faster.
2. [run pnpm
lint:fix](9c128c276a)
The auto-fix was run, which resolved some eslint errors that had slipped in due to the use of `--no-verify`. Most of these were
`perfectionist` fixes (property ordering) and the removal of unnecessary
assertions. Starting with this PR, this won't happen again in the
future, as we'll be verifying the linter in every PR across the entire
codebase (see commit 7).
3. [fix eslint non-autofixable
errors](700f412a33)
All manual errors have been resolved except for the configuration errors
addressed in commit 5. Most were React compiler violations, which have
been disabled with a "TODO" comment for now. There's also an unused
`use no memo` directive and a couple of `require` errors.
4. [move react-compiler linter to eslint-config
package](4f7cb4d63a)
This simplifies the eslint configuration. My concern was that there would be a performance regression when the rule ran in non-React packages, but none was observed, probably because it only runs on `.tsx` files.
5. [remove redundant eslint config files and fix
allowDefaultProject](a94347995a)
The main feature introduced by `typescript-eslint` v8 was
`projectService`, which automatically searches each file for the closest
`tsconfig`, greatly simplifying configuration in monorepos
([source](https://typescript-eslint.io/blog/announcing-typescript-eslint-v8#project-service)).
Once I moved `projectService` to `packages/eslint-config`, all the other
configuration files could be easily removed.
I confirmed that pnpm lint still works on individual packages.
The other important change was that the pending eslint errors from
commits 2 and 3 were resolved. That is, some files were giving the
error: "[File] was not found by the project service. Consider either
including it in the tsconfig.json or including it in
allowDefaultProject." Below I copy the explanatory comment I left in the
code:
```ts
// This is necessary because `tsconfig.base.json` defines `"rootDir": "${configDir}/src"`,
// And the following files aren't in src because they aren't transpiled.
// This is typescript-eslint's way of adding files that aren't included in tsconfig.
// See: https://typescript-eslint.io/troubleshooting/typed-linting/#i-get-errors-telling-me--was-not-found-by-the-project-service-consider-either-including-it-in-the-tsconfigjson-or-including-it-in-allowdefaultproject
// The best practice is to have a tsconfig.json that covers ALL files and is used for
// typechecking (with noEmit), and a `tsconfig.build.json` that is used for the build
// (or alternatively, swc, tsup or tsdown). That's what we should ideally do, in which case
// this hardcoded list wouldn't be necessary. Note that these files don't currently go
// through ts, only through eslint.
```
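For reference, a minimal sketch (not the repo's actual config) of how `projectService` and `allowDefaultProject` fit together in a typescript-eslint v8 flat config; the file globs are illustrative:
```ts
import tseslint from 'typescript-eslint'

export default tseslint.config({
  languageOptions: {
    parserOptions: {
      // Automatically resolves the closest tsconfig for each linted file.
      projectService: {
        // Type-aware linting for files that no tsconfig includes,
        // e.g. untranspiled scripts that live outside `src`.
        allowDefaultProject: ['*.config.js', 'scripts/*.ts'],
      },
      tsconfigRootDir: import.meta.dirname,
    },
  },
})
```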
6. [Differentiate errors from warnings in VScode ESLint
Rules](5914d2f48d)
There's no reason to do that. If an eslint rule isn't an error, it
should be disabled or converted to a warning.
7. [Disable skip lint, and lint over the entire repo now that it's
faster](e4b28f1360)
Previously, the GitHub action linted only the files that had changed in the PR. While this seems like a good idea, once exceptions were introduced with [skip lint], they opened the door to more and more errors creeping in. Often the linter was skipped not because someone introduced new errors, but to avoid ones that had already crept in, sometimes accidentally introducing new ones.
On the other hand, `pnpm lint` now runs in parallel (commit 1), so it's not that slow. Additionally, it runs alongside other GitHub actions like the e2e tests, which take much longer, so it isn't a bottleneck in CI.
8. [fix lint in next
package](4506595f91)
Small fix missing from commit 5.
9. [Merge remote-tracking branch 'origin/main' into
fix-eslint](563d4909c1)
10. [add again eslint.config.js in payload
package](78f6ffcae7)
The comment in the code explains it. Basically, after the merge from main, the payload package runs out of memory when linting, probably because it grew in recent PRs. That package will sooner or later become too large for our tooling, so we may have to split it; it's already too big.
## Future Actions
- Resolve React compiler violations, as mentioned in commit 3.
- Decouple the `tsconfig` used for typechecking and build across the
entire monorepo (as explained in point 5) to ensure ts coverage even for
files that aren't transpiled (such as scripts).
- Remove the few remaining `eslint.config.js`. I had to leave the
`richtext-lexical` and `next` ones for now. They could be moved to the
root config and scoped to their packages, as we do for example with
`templates/vercel-postgres/**`. However, I couldn't get that to work and I don't know why.
- Make eslint in the test folder usable. Not only are we not linting
`test` in CI, but now the `pnpm eslint .` command is so large that my
computer freezes. If each suite were its own package, this would be
solved, and dynamic codegen + git hooks to modify tsconfig.base.json
wouldn't be necessary
([related](https://github.com/payloadcms/payload/pull/11984)).
https://github.com/payloadcms/payload/pull/11952 introduced an improvement to the GraphQL schema by making the fields of the `Paginated<T>` interface non-nullable.
However, a few of them are special: `nextPage` and `prevPage`. They can be `null` when:
- The result returned 0 docs.
- The result returned `x` docs but there is no doc `x + 1` in the DB, in which case the result has `nextPage: null`.
- The first page is queried, in which case `prevPage` is `null` as well.
<img width="873" alt="image"
src="https://github.com/user-attachments/assets/04d04b13-ac26-4fc1-b421-b5f86efc9b65"
/>
Fixes population of joins that target relationship fields that have
`relationTo` as an array, for example:
```ts
// Posts collection
{
  name: 'polymorphic',
  type: 'relationship',
  relationTo: ['categories', 'users'],
},

// Categories collection
{
  name: 'polymorphic',
  type: 'join',
  collection: 'posts',
  on: 'polymorphic',
}
```
Thanks @jaycetde for the integration test
https://github.com/payloadcms/payload/pull/12278!
---------
Co-authored-by: Jayce Pulsipher <jpulsipher@nav.com>
GraphQL requests with join fields trigger many unnecessary count queries. This turns off pagination for those joins and instead queries `limit + 1` docs and slices the result.
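A hedged sketch of the `limit + 1` technique (the function shape is illustrative, not the actual join-resolution code):
```ts
// Fetch one extra doc so hasNextPage can be derived without a count query.
export const paginateWithoutCount = async <T>(
  fetchDocs: (limit: number) => Promise<T[]>,
  limit: number,
) => {
  const docs = await fetchDocs(limit + 1)
  return {
    docs: docs.slice(0, limit),
    hasNextPage: docs.length > limit,
  }
}
```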
### What?
Makes several fields and list item types in query results (e.g. `docs`)
non-nullable.
### Why?
When dealing with code generated from a Payload GraphQL schema, it is
often necessary to use type guards and optional chaining.
For example:
```graphql
type Posts {
docs: [Post]
...
}
```
This implies that the `docs` field itself is nullable and that the array
can contain nulls. In reality, neither of these is true. But because of
the types generated by tools like `graphql-code-generator`, the way to
access `posts` ends up something like this:
```ts
const posts = (query.data.docs ?? []).filter(doc => doc != null);
```
Instead, we would like the schema to be:
```graphql
type Posts {
docs: [Post!]!
...
}
```
### How?
The proposed change involves adding `GraphQLNonNull` where appropriate.
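A small sketch of what that wrapping looks like with `graphql-js` (the `PostType` name is illustrative):
```ts
import { GraphQLList, GraphQLNonNull, type GraphQLObjectType } from 'graphql'

// Before: new GraphQLList(PostType)       -> docs: [Post]
// After:  non-null list of non-null items -> docs: [Post!]!
export const buildDocsType = (PostType: GraphQLObjectType) =>
  new GraphQLNonNull(new GraphQLList(new GraphQLNonNull(PostType)))
```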
---------
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>
The same as https://github.com/payloadcms/payload/pull/11763 but for GraphQL. The previous fix only worked for the Local API and REST API because GraphQL uses a different method for querying joins.
Fixes https://github.com/payloadcms/payload/issues/6884
Adds a new flag `acceptIDOnCreate` that allows you to pass your own `id` in `payload.create`'s `data`, for example:
```ts
// doc created with id 1
const doc = await payload.create({ collection: 'posts', data: { id: 1, title: 'my title' } })
```
Or, with a MongoDB `ObjectId` as the custom `id`:
```ts
import { Types } from 'mongoose'

const id = new Types.ObjectId().toHexString()
const doc = await payload.create({ collection: 'posts', data: { id, title: 'my title' } })
```
### What? Cannot generate GraphQL schema with hyphenated field names
Using field names that do not adhere to the GraphQL name standard (`_`, `a-z`, `A-Z`) prevents you from generating a schema, even though such names work just fine everywhere else.
Example: `my-field-name` will prevent schema generation.
### How? Field name sanitization on generation and querying
This PR adds field-name sanitization to schema generation.
- It formats field names into a GraphQL-safe format for schema generation. **It does not change your config.**
- It adds resolvers for field names that do not conform, so the config name can be mapped to the GraphQL-safe name (see the sketch after the example below).
Example:
- `my-field` will turn into `my_field` in the schema generation
- `my_field` will resolve from `my-field` when data comes out
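A minimal sketch of that mapping (not the PR's exact code; `formatName` here is simplified and the field shape is an assumption):
```ts
type Field = { name: string }

// Simplified: the real formatName handles more punctuation than hyphens.
const formatName = (name: string): string => name.replace(/-/g, '_')

// Returns the GraphQL-safe key plus a field config; a resolver that reads the
// original config name off the document is added only when the name changed.
const buildField = (field: Field, type: unknown) => {
  const safeName = formatName(field.name) // 'my-field' -> 'my_field'
  const config =
    safeName !== field.name
      ? { type, resolve: (doc: Record<string, unknown>) => doc[field.name] }
      : { type }
  return [safeName, config] as const
}
```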
### Other notes
- Moves code from `packages/graphql/src/schema/buildObjectType.ts` to
`packages/graphql/src/schema/fieldToSchemaMap.ts`
- Resolvers are only added when necessary: `if (formatName(field.name)
!== field.name)`.
---------
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>