Continuation of https://github.com/payloadcms/payload/pull/12265.
Currently, using `select` on new relationship virtual fields:
```ts
const doc = await payload.findByID({
  collection: 'virtual-relations',
  depth: 0,
  id,
  select: { postTitle: true },
})
```
doesn't work, because in order to compute `post.title`, the `post` field must be selected as well. This PR adds logic that sanitizes the incoming `select` so that the relationship fields referenced by the selected virtual fields are included automatically.
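Roughly, the effect of the sanitization looks like this (field names come from the example above; the exact internal shape of the sanitized `select` may differ):
```ts
// What the caller passes:
const incomingSelect = { postTitle: true }

// What the query effectively runs with after sanitization: the `post` relationship
// is pulled in because the virtual `postTitle` is resolved from `post.title`.
const effectiveSelect = { post: true, postTitle: true }
```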
---------
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>
When calling `payload.db.queryDrafts` with a `select` that omits `version`, or with a select like `select: { version: { nonExistingField: true } }`, the `queryDrafts` function crashes because it tries to access the `version` field. This PR adds a fallback.
This PR optimizes the new virtual fields with relationships feature
https://github.com/payloadcms/payload/pull/11805 when the path
references the ID field, for example:
```ts
{
  name: 'postCategoryID',
  type: 'number',
  virtual: 'post.category.id',
},
```
Previously, we performed an additional population of `category`, which is unnecessary because the ID can always be read from the `category` value itself. One less querying step.
This PR adds the ability to specify a virtual field in this way:
```js
{
  slug: 'posts',
  fields: [
    {
      name: 'title',
      type: 'text',
      required: true,
    },
  ],
},
{
  slug: 'virtual-relations',
  fields: [
    {
      name: 'postTitle',
      type: 'text',
      virtual: 'post.title',
    },
    {
      name: 'post',
      type: 'relationship',
      relationTo: 'posts',
    },
  ],
},
```
Then, every time you query `virtual-relations`, `postTitle` will be automatically populated at the database level (even with `depth: 0`). Unlike `virtual: true`, this field is also available for querying, sorting, and `useAsTitle`.
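For example, you can query and sort by `postTitle` directly (a minimal sketch using the config above; the document data is assumed):
```ts
// `postTitle` is resolved from `post.title` at the database level,
// so no population via `depth` is required.
const { docs } = await payload.find({
  collection: 'virtual-relations',
  depth: 0,
  sort: 'postTitle',
  where: { postTitle: { equals: 'My post title' } },
})
```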
The field can also be nested through two or more relationships, for example:
```ts
{
  name: 'postCategoryTitle',
  type: 'text',
  virtual: 'post.category.title',
},
```
Here the current collection has `post` (a relationship to `posts`), the `posts` collection has `category` (a relationship to `categories`), and `categories` has a `title` field.
### What?
Casts numbers passed to a text field to strings so that the database/Drizzle doesn't convert them incorrectly.
### Why?
If you have a hook that passes a value from one field to another, you can hit a problem where Drizzle converts a number value destined for a text field into a floating-point number (in SQLite, for example).
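A hypothetical repro, assuming a `price` number field and a `priceLabel` text field on the same collection (names are illustrative):
```ts
import type { CollectionBeforeChangeHook } from 'payload'

// Copies a number into a text field. Before this fix, the numeric value could end
// up stored in the SQLite text column as a floating-point representation; it is
// now cast to a string on write.
export const copyPriceToLabel: CollectionBeforeChangeHook = ({ data }) => {
  if (typeof data?.price === 'number') {
    data.priceLabel = data.price
  }
  return data
}
```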
### How?
Adds logic to `transform/write/traverseFields.ts` to cast text field
values to string.
Fixes https://github.com/payloadcms/payload/issues/11975
Previously, this configuration caused errors in Postgres due to long identifier names, even though `dbName` is set:
```ts
{
  slug: 'aliases',
  fields: [
    {
      name: 'thisIsALongFieldNameThatWillCauseAPostgresErrorEvenThoughWeSetAShorterDBName',
      dbName: 'shortname',
      type: 'array',
      fields: [
        {
          name: 'nested_field_1',
          type: 'array',
          dbName: 'short_nested_1',
          fields: [],
        },
        {
          name: 'nested_field_2',
          type: 'text',
        },
      ],
    },
  ],
},
```
This is because the Drizzle relation name (for arrays) was always generated from the field path, and Drizzle uses this name internally for aliasing. Now, if `dbName` is present, we use `_{dbName}` for the relation name instead.
Significantly optimizes the component rendering strategy within the form
state endpoint by precisely rendering only the fields that require it.
This cuts down on server processing and network response sizes when
invoking form state requests **that manipulate array and block rows
which contain server components**, such as rich text fields, custom row
labels, etc. (results listed below).
Here's a breakdown of the issue:
Previously, when manipulating array and block fields, _all_ rows would
render any server components that might exist within them, including
rich text fields. This means that subsequent changes to these fields
would potentially _re-render_ those same components even if they don't
require it.
For example, if you have an array field with a rich text field within
it, adding the first row would cause the rich text field to render,
which is expected. However, when you add a second row, the rich text
field within the first row would render again unnecessarily along with
the new row.
This is especially noticeable for fields with many rows, where every single row processes its server components and returns RSC data. And this affects not only nested rich text fields, but any custom component defined at the field level, as these are handled in the same way.
The reason this was necessary in the first place was to ensure that the
server components receive the proper data when they are rendered, such
as the row index and the row's data. Changing one of these rows could
cause the server component to receive the wrong data if it was not
freshly rendered.
While rows still need to receive up-to-date props, it is no longer necessary to render everything.
Here's a breakdown of the actual fix:
This change ensures that only the fields that are actually being
manipulated will be rendered, rather than all rows. The existing rows
will remain in memory on the client, while the newly rendered components
will return from the server. For example, if you add a new row to an
array field, only the new row will render its server components.
To do this, we send the path of the field that is being manipulated to the server. The server can then use this path to determine for itself which fields have already been rendered and which ones require rendering.
## Results
The following results were gathered by booting up the `form-state` test
suite and seeding 100 array rows, each containing a rich text field. To
invoke a form state request, we navigate to a document within the
"posts" collection, then add a new array row to the list. The result is
then saved to the file system for comparison.
| Test Suite | Collection | Number of Rows | Before | After | Percentage Change |
|------|------|---------|--------|--------|--------|
| `form-state` | `posts` | 101 | 1.9MB / 266ms | 80KB / 70ms | ~96% smaller / ~75% faster |
---------
Co-authored-by: James <james@trbl.design>
Co-authored-by: Alessio Gravili <alessio@gravili.de>
Fixes https://github.com/payloadcms/payload/issues/11882
Previously, the down migration that dropped the `payload_migrations` table failed because `migrationTableExists` didn't check within the current transaction, which is the only context in which a `false` result can be observed.
### What?
Extends our dataloader with a memoized `payload.find` function. It caches the query for identical find requests using a cache key derived from the find operation args.
### Why?
This was needed internally for `filterOptions` that exist in an array, or other situations where the exact same query is made and awaited many times.
### How?
- Added `find` to `payloadDataLoader`. Marked `@experimental` in case it needs to change.
- Created a cache key from the args.
- Changed `filterOptions` validation from `payload.find` to `payloadDataLoader.find`.
- Made `payloadDataLoader` no longer optional on `PayloadRequest`. Since other args created by `createLocalReq` (such as `context`) are already required, there's no reason `dataLoader` shouldn't be required as well.
Example usage:
```ts
const result = await req.payloadDataLoader.find({
  collection,
  req,
  where,
})
```
Fixes https://github.com/payloadcms/payload/issues/6884
Adds a new flag `acceptIDOnCreate` that allows you to pass your own `id` in `payload.create` `data`, for example:
```ts
// doc created with id 1
const doc = await payload.create({ collection: 'posts', data: { id: 1, title: 'my title' } })
```
```ts
import { Types } from 'mongoose'

const id = new Types.ObjectId().toHexString()
const doc = await payload.create({ collection: 'posts', data: { id, title: 'my title' } })
```
This change makes it so that data that exists in MongoDB but isn't defined in the Payload config is no longer included in `payload.find` / `payload.db.find` results. All additional keys are now stripped.
Say you have a field named `secretField` that's `hidden: true` (or `read: () => false`) and contains sensitive data. If you then remove this field from the config while its data remains in the database, the MongoDB adapter previously included it in the Local API / REST API results without any consideration, because Payload no longer knew about it.
This also fixes https://github.com/payloadcms/payload/issues/11542: if you removed / renamed a relationship field from the schema, Payload won't sanitize ObjectIDs back to strings anymore.
Ideally you should create a migration script that completely removes the
deleted field from the database with `$unset`, but people rarely do
this.
If you still need to keep those fields in the result, this PR lets you do so with the new `allowAdditionalKeys: true` flag.
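A minimal sketch, assuming the flag is passed to the MongoDB adapter options:
```ts
import { mongooseAdapter } from '@payloadcms/db-mongodb'

export const db = mongooseAdapter({
  url: process.env.DATABASE_URI || '',
  // Keep keys that exist in MongoDB but not in the Payload config (assumed placement)
  allowAdditionalKeys: true,
})
```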
### What?
This PR adds the ability to define indexes across several fields on collections (compound indexes).
Example:
```ts
{
  indexes: [{ unique: true, fields: ['title', 'group.name'] }]
}
```
### Why?
This can be used to either speed up querying/sorting by 2 or more fields
at the same time or to ensure uniqueness between several fields.
### How?
Implements this logic in database adapters. Additionally, adds a utility
`getFieldByPath`.
This PR adds a new `limit` property to `payload.db.updateMany`. This functionality is required for [migrating our job system to use faster, direct db adapter calls](https://github.com/payloadcms/payload/pull/11489)
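A rough sketch of the new argument (the collection, `where`, and `data` shown here are illustrative):
```ts
// Update at most 10 matching rows in a single db adapter call
await payload.db.updateMany({
  collection: 'posts',
  data: { processed: true },
  limit: 10,
  where: { processed: { equals: false } },
})
```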
### What?
If you had multiple operator constraints on a single field, the last one
defined would be the only one used.
Example:
```ts
where: {
  id: {
    in: [doc2.id],
    not_in: [], // <-- only respected this operator constraint
  },
}
```
and
```ts
where: {
  id: {
    not_in: [],
    in: [doc2.id], // <-- only respected this operator constraint
  },
}
```
They would yield different results.
### Why?
The results were not merged into an `$and` query inside parseParams.
### How?
Merges the results within an `$and` constraint.
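Roughly, for the first example above, the parsed query now looks like this (a sketch in MongoDB syntax, not the adapter's exact output):
```ts
const parsedQuery = {
  $and: [
    { _id: { $in: [doc2.id] } }, // `in`
    { _id: { $nin: [] } }, // `not_in` - both operator constraints are now kept
  ],
}
```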
Fixes https://github.com/payloadcms/payload/issues/10944
Supersedes https://github.com/payloadcms/payload/pull/11011
Previously, data for globals was inconsistent across database adapters.
In Postgres, globals didn't store correct `createdAt`, `updatedAt`
fields and the `updateGlobal` lacked the `globalType` field. This PR
solves that without introducing schema changes.
Field paths within hooks are not correct.
For example, an unnamed tab containing a group field and nested text
field should have the path:
- `myGroupField.myTextField`
However, within hooks that path is formatted as:
- `_index-1.myGroupField.myTextField`
The leading index shown above should not exist, as this field is
considered top-level since it is located within an unnamed tab.
This discrepancy is only evident through the APIs themselves, such as
when creating a request with invalid data and reading the validation
errors in the response. Form state contains proper field paths, which is
ultimately why this issue was never caught. This is because within the
admin panel we merge the API response with the current form state,
obscuring the underlying issue. This becomes especially obvious in
#10580, where we no longer initialize validation errors within form
state until the form has been submitted, and instead rely solely on the
API response for the initial error state.
Here's a comprehensive example of how field paths _should_ be formatted:
```
{
  // ...
  fields: [
    {
      // path: 'topLevelNamedField'
      // schemaPath: 'topLevelNamedField'
      // indexPath: ''
      name: 'topLevelNamedField',
      type: 'text',
    },
    {
      // path: 'array'
      // schemaPath: 'array'
      // indexPath: ''
      name: 'array',
      type: 'array',
      fields: [
        {
          // path: 'array.[n].fieldWithinArray'
          // schemaPath: 'array.fieldWithinArray'
          // indexPath: ''
          name: 'fieldWithinArray',
          type: 'text',
        },
        {
          // path: 'array.[n].nestedArray'
          // schemaPath: 'array.nestedArray'
          // indexPath: ''
          name: 'nestedArray',
          type: 'array',
          fields: [
            {
              // path: 'array.[n].nestedArray.[n].fieldWithinNestedArray'
              // schemaPath: 'array.nestedArray.fieldWithinNestedArray'
              // indexPath: ''
              name: 'fieldWithinNestedArray',
              type: 'text',
            },
          ],
        },
        {
          // path: 'array.[n]._index-2'
          // schemaPath: 'array._index-2'
          // indexPath: '2'
          type: 'row',
          fields: [
            {
              // path: 'array.[n].fieldWithinRowWithinArray'
              // schemaPath: 'array._index-2.fieldWithinRowWithinArray'
              // indexPath: ''
              name: 'fieldWithinRowWithinArray',
              type: 'text',
            },
          ],
        },
      ],
    },
    {
      // path: '_index-2'
      // schemaPath: '_index-2'
      // indexPath: '2'
      type: 'row',
      fields: [
        {
          // path: 'fieldWithinRow'
          // schemaPath: '_index-2.fieldWithinRow'
          // indexPath: ''
          name: 'fieldWithinRow',
          type: 'text',
        },
      ],
    },
    {
      // path: '_index-3'
      // schemaPath: '_index-3'
      // indexPath: '3'
      type: 'tabs',
      tabs: [
        {
          // path: '_index-3-0'
          // schemaPath: '_index-3-0'
          // indexPath: '3-0'
          label: 'Unnamed Tab',
          fields: [
            {
              // path: 'fieldWithinUnnamedTab'
              // schemaPath: '_index-3-0.fieldWithinUnnamedTab'
              // indexPath: ''
              name: 'fieldWithinUnnamedTab',
              type: 'text',
            },
            {
              // path: '_index-3-0-1'
              // schemaPath: '_index-3-0-1'
              // indexPath: '3-0-1'
              type: 'tabs',
              tabs: [
                {
                  // path: '_index-3-0-1-0'
                  // schemaPath: '_index-3-0-1-0'
                  // indexPath: '3-0-1-0'
                  label: 'Nested Unnamed Tab',
                  fields: [
                    {
                      // path: 'fieldWithinNestedUnnamedTab'
                      // schemaPath: '_index-3-0-1-0.fieldWithinNestedUnnamedTab'
                      // indexPath: ''
                      name: 'fieldWithinNestedUnnamedTab',
                      type: 'text',
                    },
                  ],
                },
              ],
            },
          ],
        },
        {
          // path: 'namedTab'
          // schemaPath: '_index-3.namedTab'
          // indexPath: ''
          label: 'Named Tab',
          name: 'namedTab',
          fields: [
            {
              // path: 'namedTab.fieldWithinNamedTab'
              // schemaPath: '_index-3.namedTab.fieldWithinNamedTab'
              // indexPath: ''
              name: 'fieldWithinNamedTab',
              type: 'text',
            },
          ],
        },
      ],
    },
  ]
}
```
### What?
Previously, field error messages displayed in toast notifications used
the field path to reference fields that failed validation. This
path-based approach was necessary to distinguish between fields that
might share the same name when nested inside arrays, groups, rows, or
collapsible fields.
However, the human readability of these paths was lacking, especially
for unnamed fields like rows and collapsible fields. For example:
- A text field inside a row could display as: `_index-0.text`
- A text field nested within multiple arrays could display as:
`items.0.subArray.0.text`
These outputs are technically correct but not user-friendly.
### Why?
While the previous format was helpful for pinpointing the specific field
that caused the validation error, it could be more user-friendly and
clearer to read. The goal is to maintain the same level of accuracy
while improving the readability for both developers and content editors.
### How?
To improve readability, the following changes were made:
1. Use Field Labels Instead of Field Paths:
- The `ValidationError` component now uses the `label` prop from the field config (if available) instead of the field's name.
- If a label is provided, it is used in the error message.
- If no label exists, it falls back to the field's name.
2. Remove `_index` from Paths for Unnamed Fields (in the `ValidationError` component only):
- For unnamed fields like rows and collapsibles, the `_index` prefix is now stripped from the output to make it cleaner.
- Instead of `_index-0.text`, it now outputs just `Text`.
3. Reformat the Error Path for Readability:
- The error message format has been improved to be more human-readable,
showing the field hierarchy in a structured way with array indices
converted to 1-based numbers.
#### Example transformation:
##### Before:
The following fields are invalid: `items.0.subArray.0.text`
##### After:
The following fields are invalid: `Items 1 > SubArray 1 > Text`
Based on https://github.com/payloadcms/payload/pull/10154
If the actual database schema hasn't changed (no new columns, enums, indexes, or tables), we now skip calling Drizzle push. This can significantly reduce overhead on reloads in development mode, especially when using remote databases.
If for whatever reason you need to preserve the current behavior, you can use the `PAYLOAD_FORCE_DRIZZLE_PUSH=true` env flag.
IDs that are supplied directly through the API, such as client-side
generated IDs when adding new blocks and array rows, are overwritten on
create. This is because when adding blocks or array rows on the client,
their IDs are generated first before being sent to the server for
processing. Then when the server receives this data, it incorrectly
overrides them to ensure they are unique when using relational DBs. But
this only needs to happen when no ID was supplied on create, or
specifically when duplicating documents via the `beforeDuplicate` hook.
### What?
Exposes ability to enable
[AUTOINCREMENT](https://www.sqlite.org/autoinc.html) for Primary Keys
which ensures that the same ID cannot be reused from previously deleted
rows.
```ts
sqliteAdapter({
  autoIncrement: true,
})
```
### Why?
This may be essential for some systems. We also enabled `autoIncrement: true` for the SQLite adapter in our tests, which is useful when checking whether a doc was deleted while other create operations are also running.
### How?
Uses Drizzle's `autoIncrement` option.
WARNING:
This cannot be enabled in an existing project without a custom
migration, as it completely changes how primary keys are stored in the
database.
This PR allows you to get full type safety on `payload.drizzle` with a single command:
```sh
pnpm payload generate:db-schema
```
Which generates TypeScript code with Drizzle declarations based on the
current database schema.
Example of generated file with the website template:
https://gist.github.com/r1tsuu/b8687f211b51d9a3a7e78ba41e8fbf03
Video that shows the power:
https://github.com/user-attachments/assets/3ced958b-ec1d-49f5-9f51-d859d5fae236
We also now proxy the drizzle package the same way we do for Lexical, so you don't have to install it yourself (and you shouldn't, because you could end up with a version mismatch).
Instead, you can import from Drizzle like this:
```ts
import {
  pgTable,
  index,
  foreignKey,
  integer,
  text,
  varchar,
  jsonb,
  boolean,
  numeric,
  serial,
  timestamp,
  uniqueIndex,
  pgEnum,
} from '@payloadcms/db-postgres/drizzle/pg-core'
import { sql } from '@payloadcms/db-postgres/drizzle'
import { relations } from '@payloadcms/db-postgres/drizzle/relations'
```
Fixes https://github.com/payloadcms/payload/discussions/4318
In the future we could also support type generation for mongoose / raw MongoDB results.
### What?
`payload.db.updateOne` (and therefore `payload.db.upsert`) with the Drizzle adapters used the incoming `where` incorrectly. It only worked properly if you either passed an `id` or the where query path required table joins (like `where: { 'array.title': ... }`). This is also why `upsert` _worked_ for user preferences specifically: querying by `user.relationTo` and `user.value` requires joining the `preferences_rels` table.
Fixes https://github.com/payloadcms/payload/issues/9915
This was found in https://github.com/payloadcms/payload/pull/9913, where the database KV adapter uses `upsert` with a `where` on unique fields.
### What?
We sorted migrations by `-name` in `getMigrations`, assuming that generated file names sort in creation order. However, that isn't always true, as the improved (and unflaked; it previously failed intermittently) test for `migrate:down` reproduces. As a result, `migrateDown` / `migrateRefresh` could execute in a different order than `migrate`.
Unflakes the 'should commit multiple operations async' test.
We shouldn't pass the same `req` that doesn't contain a transaction to different operations executing in parallel (via `Promise.all`) without either creating a transaction beforehand or using `isolateObjectProperty(req, 'transactionID')`. Doing so leads to a race condition where an operation can commit the wrong transaction, different from the one it initialized.
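A sketch of the safer pattern (the operations are illustrative and the import path is an assumption; `isolateObjectProperty` is the utility named above):
```ts
import { isolateObjectProperty } from 'payload'

// Each parallel operation gets its own `transactionID` slot on `req`, so one
// operation cannot commit a transaction that another operation started.
await Promise.all(
  items.map((data) =>
    payload.create({
      collection: 'posts',
      data,
      req: isolateObjectProperty(req, 'transactionID'),
    }),
  ),
)
```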
This PR fixes cases where you may have a field called `id` within a
group or a named tab, which would have incorrectly been treated as a
custom ID field for the collection.
However, custom IDs need to be defined at the root level, and Payload now only respects custom IDs defined there.
### What?
Upgrades mongoose from v6 to the latest `v8.8.1`.
Fixes https://github.com/payloadcms/payload/issues/9171
### Why?
Compatibility with MongoDB Atlas.
### How?
- Updates deps
- Changed ObjectId creation from bson-objectid to `new Types.ObjectId` from mongoose for compatibility (only inside db-mongodb)
- Internal type adjustments
https://github.com/payloadcms/payload/discussions/9088
BREAKING CHANGES:
All projects with existing data that have versions enabled, or that use relationship or upload fields, will want to create the predefined migration that converts all strings to ObjectIDs where needed. This can be created using `payload migrate:create --file @payloadcms/mongodb/relationships-v2-v3`.
For projects making use of the exposed Models from mongoose, review the
upgrade guides from [v6 to
v7](https://mongoosejs.com/docs/7.x/docs/migrating_to_7.html) and [v7 to
v8](https://mongoosejs.com/docs/migrating_to_8.html) and make
adjustments as needed.
---------
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
### What?
Adds full support for the point field to the Postgres and Vercel Postgres adapters through the PostGIS extension. The API is fully the same as with MongoDB, including support for the `near`, `within` and `intersects` operators.
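For example, querying a point field works the same way it does with MongoDB (a sketch; the collection, field, and coordinates are illustrative):
```ts
// `near` takes [longitude, latitude, maxDistance (meters), minDistance (meters)]
const nearby = await payload.find({
  collection: 'places',
  where: {
    location: {
      near: [-73.985, 40.748, 10000], // within roughly 10km of the given point
    },
  },
})
```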
Additionally, exposes new adapter args (see the sketch after this list):
* `tablesFilter` - see https://orm.drizzle.team/docs/drizzle-kit-push#including-tables-schemas-and-extensions
* `extensions` - a list of extensions to create, for example `['vector', 'pg_search']`; `postgis` is created automatically if there's any point field.
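A sketch of how these args might be passed (values are illustrative):
```ts
import { postgresAdapter } from '@payloadcms/db-postgres'

postgresAdapter({
  pool: { connectionString: process.env.DATABASE_URI },
  // Extensions to create on init; `postgis` is added automatically when a point field exists
  extensions: ['vector', 'pg_search'],
  // Passed through to drizzle-kit push to limit which tables it manages
  tablesFilter: ['!external_*'],
})
```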
### Why?
It's essential to support that field type, especially if the postgres
adapter should be out of beta on 3.0 stable.
### How?
* Bumps `drizzle-orm` to `0.36.1` and `drizzle-kit` to `0.28.0` as we
need this change https://github.com/drizzle-team/drizzle-orm/pull/3141
* Uses its functions to achieve querying functionality, for example the
`near` operator works through `ST_DWithin` or `intersects` through
`ST_Intersects`.
* Removes the MongoDB condition from all point field tests, but keeps it for SQLite.
Resolves these discussions:
https://github.com/payloadcms/payload/discussions/8996
https://github.com/payloadcms/payload/discussions/8644
fix: migrateRefresh migrates without previously run migrations
chore: adds tests for database migrate:fresh and migrate:refresh
---------
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
### What?
Moved the logic for copying `data.id` to `data._id` into the mongoose adapter.
### Why?
If you have any hooks that need to set the `id`, the value does not get sent to MongoDB as you would expect, since it was copied before the `beforeValidate` hooks ran.
### How?
`data._id` is now assigned only in the MongoDB adapter's `create` function.
BREAKING CHANGES:
When using custom ID fields, if you have any `beforeValidate` or `beforeChange` collection hooks, `data._id` will no longer be assigned, as this now happens in the database adapter. Use `data.id` instead.
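A hypothetical hook that works after this change (the collection and ID generation are illustrative):
```ts
import type { CollectionBeforeChangeHook } from 'payload'

// With a custom ID field, set `data.id` here; the MongoDB adapter now copies it
// to `_id` inside its `create` function, so `data._id` should no longer be set.
export const assignCustomID: CollectionBeforeChangeHook = ({ data, operation }) => {
  if (operation === 'create' && !data.id) {
    data.id = crypto.randomUUID() // illustrative ID generation
  }
  return data
}
```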
Adds the ability to customize the generated Drizzle schema with `beforeSchemaInit` and `afterSchemaInit`. This can be useful if you want to preserve an existing database schema, or to extend the generated one with features that aren't supported by the Payload config.
## Docs:
### beforeSchemaInit
Runs before the schema is built. You can use this hook to extend your
database structure with tables that won't be managed by Payload.
```ts
import { postgresAdapter } from '@payloadcms/db-postgres'
import { integer, pgTable, serial } from 'drizzle-orm/pg-core'

postgresAdapter({
  beforeSchemaInit: [
    ({ schema, adapter }) => {
      return {
        ...schema,
        tables: {
          ...schema.tables,
          addedTable: pgTable('added_table', {
            id: serial('id').notNull(),
          }),
        },
      }
    },
  ],
})
```
One use case is preserving your existing database structure when
migrating to Payload. By default, Payload drops the current database
schema, which may not be desirable in this scenario.
To quickly generate the Drizzle schema from your database you can use [Drizzle Introspection](https://orm.drizzle.team/kit-docs/commands#introspect--pull).
You should get a `schema.ts` file which may look like this:
```ts
import { pgTable, uniqueIndex, serial, varchar, text } from 'drizzle-orm/pg-core'

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  fullName: text('full_name'),
  phone: varchar('phone', { length: 256 }),
})

export const countries = pgTable(
  'countries',
  {
    id: serial('id').primaryKey(),
    name: varchar('name', { length: 256 }),
  },
  (countries) => {
    return {
      nameIndex: uniqueIndex('name_idx').on(countries.name),
    }
  },
)
```
You can import these into your config and append them to the schema with the `beforeSchemaInit` hook like this:
```ts
import { postgresAdapter } from '@payloadcms/db-postgres'
import { users, countries } from '../drizzle/schema'

postgresAdapter({
  beforeSchemaInit: [
    ({ schema, adapter }) => {
      return {
        ...schema,
        tables: {
          ...schema.tables,
          users,
          countries,
        },
      }
    },
  ],
})
```
Make sure Payload's collections don't overlap with these table names. For example, if you already have a collection with the slug "users", you should either change the slug or set `dbName` to change the table name for that collection.
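For example (a minimal sketch, assuming your introspected schema already defines a `users` table):
```ts
// Rename the table Payload creates for this collection so it doesn't clash
// with the existing `users` table added via `beforeSchemaInit`.
{
  slug: 'users',
  dbName: 'payload_users',
  fields: [
    // ...
  ],
}
```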
### afterSchemaInit
Runs after the Drizzle schema is built. You can use this hook to modify
the schema with features that aren't supported by Payload, or if you
want to add a column that you don't want to be in the Payload config.
To extend a table, Payload exposes the `extendTable` utility to the args.
You can refer to the [Drizzle
documentation](https://orm.drizzle.team/docs/sql-schema-declaration).
The following example adds the `extra_integer_column` column and a
composite index on `country` and `city` columns.
```ts
import { postgresAdapter } from '@payloadcms/db-postgres'
import { index, integer } from 'drizzle-orm/pg-core'
import { buildConfig } from 'payload'

export default buildConfig({
  collections: [
    {
      slug: 'places',
      fields: [
        {
          name: 'country',
          type: 'text',
        },
        {
          name: 'city',
          type: 'text',
        },
      ],
    },
  ],
  db: postgresAdapter({
    afterSchemaInit: [
      ({ schema, extendTable, adapter }) => {
        extendTable({
          table: schema.tables.places,
          columns: {
            extraIntegerColumn: integer('extra_integer_column'),
          },
          extraConfig: (table) => ({
            country_city_composite_index: index('country_city_composite_index').on(
              table.country,
              table.city,
            ),
          }),
        })
        return schema
      },
    ],
  }),
})
```
## Description
Adds a `virtual` property to the fields config. Providing `true` completely disables the field in the DB, which is useful for [Virtual Fields](https://payloadcms.com/blog/learn-how-virtual-fields-can-help-solve-common-cms-challenges).
Disables the ability to query by a field with `virtual: true`.
Currently, such fields bloat the DB with unused tables / columns, which may also introduce additional joins.
Discussion https://github.com/payloadcms/payload/discussions/6270
Prev PR (this one contains only this feature):
https://github.com/payloadcms/payload/pull/6983
- [x] I have read and understand the
[CONTRIBUTING.md](https://github.com/payloadcms/payload/blob/main/CONTRIBUTING.md)
document in this repository.
## Type of change
- [x] New feature (non-breaking change which adds functionality)
- [x] This change requires a documentation update
## Checklist:
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] Existing test suite passes locally with my changes
- [x] I have made corresponding changes to the documentation