### What?
Previously, if you used `typescriptSchema` and `returned: false`, the
field would still be marked as required anyway.
### Why?
We were adding fields to be required on the collection without comparing
the returned schema from typescriptSchema functions.
### How?
This changes the order of logic so that `requiredFieldNames` on the
collection is populated only after running and checking the field schema.
Continuation of #11867. When rendering custom fields nested within
arrays or blocks, such as the Lexical rich text editor (which is treated
as a custom field), these fields will sometimes disappear when form
state requests are invoked sequentially. This is especially reproducible
on slow networks.
This is different from the previous PR in that this issue is caused by
adding _rows_ back-to-back, whereas the previous issue was caused when
adding a single row followed by a change to another field.
Here's a screen recording demonstrating the issue:
https://github.com/user-attachments/assets/5ecfa9ec-b747-49ed-8618-df282e64519d
The problem is that `requiresRender` is never sent in the form state
request for row 2. This is because the [task
queue](https://github.com/payloadcms/payload/pull/11579) processes tasks
within a single `useEffect`. This forces React to batch the results of
these tasks into a single rendering cycle. So if request 1 sets state
that request 2 relies on, request 2 will never use that state since
they'll execute within the same lifecycle.
Here's a play-by-play of the current behavior:
1. The "add row" event is dispatched
a. This sets `requiresRender: true` in form state
1. A form state request is sent with `requiresRender: true`
1. While that request is processing, another "add row" event is
dispatched
a. This sets `requiresRender: true` in form state
b. This adds a form state request into the queue
1. The initial form state request finishes
a. This sets `requiresRender: false` in form state
1. The next form state request that was queued up in 3b is sent with
`requiresRender: false`
a. THIS IS EXPECTED, BUT SHOULD ACTUALLY BE `true`!!
To fix this, we need to ensure that the `requiresRender` property is
persisted into the second request instead of overridden. To do this, we
can add a new `serverPropsToIgnore` property to form state which is read
when processing the results from the server. If `requiresRender` exists
in `serverPropsToIgnore`, we do not merge it. This works because we
actually mutate form state in between requests, so request 2 can read
the results from request 1 without going through an additional rendering
cycle.
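Here's a minimal sketch of that merge behavior (the names and shapes are illustrative, not the actual implementation):

```ts
type FormFieldState = {
  requiresRender?: boolean
  serverPropsToIgnore?: string[]
  [key: string]: unknown
}

// Merge server results into local form state, skipping any props the
// client has flagged since the request was dispatched
function mergeServerState(local: FormFieldState, server: FormFieldState): FormFieldState {
  const ignore = new Set(local.serverPropsToIgnore ?? [])
  const merged: FormFieldState = { ...local }

  for (const [key, value] of Object.entries(server)) {
    // e.g. `requiresRender: false` returned by request 1 is skipped here,
    // so the queued request 2 still reads `requiresRender: true`
    if (!ignore.has(key)) {
      merged[key] = value
    }
  }

  return merged
}
```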
Here's a play-by-play of the fix:
1. The "add row" event is dispatched
a. This sets `requiresRender: true` in form state
b. This adds a task in the queue to mutate form state with
`requiresRender: true`
1. A form state request is sent with `requiresRender: true`
1. While that request is processing, another "add row" event is
dispatched
a. This sets `requiresRender: true` in form state AND
`serverPropsToIgnore: [ "requiresRender" ]`
c. This adds a form state request into the queue
1. The initial form state request finishes
a. This returns `requiresRender: false` from the form state endpoint BUT
IS IGNORED
1. The next form state request that was queued up in 3c is sent with
`requiresRender: true`
Bumps the github_actions group with 1 update in the /.github/workflows
directory:
[supercharge/mongodb-github-action](https://github.com/supercharge/mongodb-github-action).
Updates `supercharge/mongodb-github-action` from 1.11.0 to 1.12.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supercharge/mongodb-github-action/releases">supercharge/mongodb-github-action's
releases</a>.</em></p>
<blockquote>
<h2>1.12.0</h2>
<p>Release 1.12.0</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/supercharge/mongodb-github-action/blob/main/CHANGELOG.md">supercharge/mongodb-github-action's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/superchargejs/mongodb-github-action/compare/v1.11.0...v1.12.0">1.12.0</a>
- 2025-01-05</h2>
<h3>Added</h3>
<ul>
<li>added <code>mongodb-image</code> input: this option allows you to
define a custom Docker container image. It uses <code>mongo</code> by
default, but you may specify an image from a different registry than
Docker hub. Please check the Readme for details.</li>
</ul>
<h3>Updated</h3>
<ul>
<li>bump dependencies</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="90004df786"><code>90004df</code></a>
bump node and mongodb versions</li>
<li><a
href="b5fa058527"><code>b5fa058</code></a>
bump version to 1.12.0 in readme</li>
<li><a
href="369a992ac4"><code>369a992</code></a>
update changelog</li>
<li><a
href="08d5bf96ab"><code>08d5bf9</code></a>
bump deps</li>
<li><a
href="cbbc6f8110"><code>cbbc6f8</code></a>
Merge pull request <a
href="https://redirect.github.com/supercharge/mongodb-github-action/issues/64">#64</a>
from Sam-Bate-ITV/feature/alternative_image</li>
<li><a
href="6131e7ff86"><code>6131e7f</code></a>
wording</li>
<li><a
href="1f93cb7bb1"><code>1f93cb7</code></a>
change README based on PR review</li>
<li><a
href="812452b9eb"><code>812452b</code></a>
use docker hub for CI</li>
<li><a
href="4639b459cd"><code>4639b45</code></a>
apply suggested change</li>
<li><a
href="2ae9a450cf"><code>2ae9a45</code></a>
<a
href="https://redirect.github.com/supercharge/mongodb-github-action/issues/62">#62</a>:
add option for specifying image</li>
<li>See full diff in <a
href="https://github.com/supercharge/mongodb-github-action/compare/1.11.0...1.12.0">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Previously, jobs were executed in FIFO order on MongoDB, and LIFO on
Postgres, with no way to configure this behavior.
This PR makes FIFO the default on both MongoDB and Postgres and
introduces the following new options to configure the processing order
globally or on a queue-by-queue basis:
- a `processingOrder` property to the jobs config
- a `processingOrder` argument to `payload.jobs.run()` to override
what's set in the jobs config
It also adds a new `sequential` option to `payload.jobs.run()`, which
can be useful for debugging.
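Here's a rough sketch of how these options might look in practice (the exact shapes and accepted values are assumptions; check the jobs documentation):

```ts
{
  // ...
  jobs: {
    // ...
    // Assumed sort-string shape: 'createdAt' = FIFO, '-createdAt' = LIFO
    processingOrder: {
      default: 'createdAt', // FIFO everywhere by default
      queues: {
        nightly: '-createdAt', // LIFO for one specific queue
      },
    },
  },
}
```

And the per-invocation override, with `sequential` for debugging:

```ts
await payload.jobs.run({
  queue: 'nightly',
  processingOrder: 'createdAt', // overrides the jobs config for this run
  sequential: true, // process jobs one at a time
})
```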
This replaces usage of our `chainMethods` helper to dynamically chain
queries with [drizzle dynamic query
building](https://orm.drizzle.team/docs/dynamic-query-building).
This is more type-safe, more readable, and requires less code.
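For illustration, here's a hedged sketch of the pattern (the `posts` table is made up; this is not Payload's actual code):

```ts
import { eq, type SQL } from 'drizzle-orm'
import { drizzle } from 'drizzle-orm/node-postgres'
import { pgTable, serial, text } from 'drizzle-orm/pg-core'

const db = drizzle(process.env.DATABASE_URL!)

const posts = pgTable('posts', {
  id: serial('id').primaryKey(),
  status: text('status'),
})

// With `.$dynamic()`, builder methods can be applied conditionally in
// branches or loops instead of being chained through a helper
function buildFindQuery(opts: { limit?: number; offset?: number; where?: SQL }) {
  let query = db.select().from(posts).$dynamic()

  if (opts.where) query = query.where(opts.where)
  if (opts.limit) query = query.limit(opts.limit)
  if (opts.offset) query = query.offset(opts.offset)

  return query
}

// e.g. await buildFindQuery({ where: eq(posts.status, 'published'), limit: 10 })
```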
This adds support for running multiple job queue tasks in parallel
within the same workflow while preventing conflicts. Previously, this
would have caused the following issues:
- Job log entries get lost: the final job log is incomplete, despite
all tasks having been executed
- Write conflicts in Postgres, leading to unique constraint violation
errors
The solution involves handling job log data updates in a way that avoids
overwriting, and ensuring the final update reflects the latest job log
data. Each job log entry now initializes its own ID, so a given job log
entry’s ID remains the same across multiple, parallel task executions.
## Postgres
In Postgres, we need to enable transactions for the
`payload.db.updateJobs` operation; otherwise, two tasks updating the
same job in parallel can conflict. This happens because Postgres handles
array rows by deleting them all, then re-inserting (rather than
upserting). The rows are stored in a separate table, and the following
scenario can occur:
- Op 1: deletes all job log rows
- Op 2: deletes all job log rows
- Op 1: inserts 200 job log rows
- Op 2: inserts the same 200 job log rows again => `error: duplicate key value violates unique constraint "payload_jobs_log_pkey"`
Because transactions were not used, the rows inserted by Op 1
immediately became visible to Op 2, causing the conflict. Enabling
transactions fixes this. In theory, it can still happen if Op 1 commits
before Op 2 starts inserting (due to the read committed isolation
level), but it should occur far less frequently.
Alongside this change, we should consider inserting the rows using an
upsert (update on conflict), which would get rid of this error
completely. That way, if the insertion of Op 1 is visible to Op 2, Op 2
will simply overwrite it rather than erroring. Individual job log
entries are immutable and cannot be deleted, so this shouldn't corrupt
any data.
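A sketch of what that upsert could look like with drizzle (the table and column names here are assumptions, not the actual schema):

```ts
import { sql } from 'drizzle-orm'

// Hypothetical: `payloadJobsLog` is the drizzle table for job log rows
// and `rows` is the set of log entries to write
await db
  .insert(payloadJobsLog)
  .values(rows)
  .onConflictDoUpdate({
    target: payloadJobsLog.id,
    // Overwrite with the incoming values instead of erroring; safe
    // because individual log entries are immutable
    set: { state: sql`excluded.state` },
  })
```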
## Mongo
In Mongo, the issue is addressed by ensuring that log row deletions
caused by differing log states in concurrent operations are not merged
back into the client job log, and by making sure the final update
includes all job logs.
There is no duplicate key error in Mongo because the array log resides
in the same document and duplicates are simply upserted. We cannot use
transactions in Mongo, as it appears to lock the document in a way that
prevents reliable parallel updates, leading to:
`MongoServerError: WriteConflict error: this operation conflicted with
another operation. Please retry your operation or multi-document
transaction`
You can access the database name from `sanitizedConfig.db.name`, but
currently it's not possible to access the db name from the unsanitized
config.
Plugins only have access to the unsanitized config. This change allows
db adapters to return the db name early, which will allow plugins to
conditionally initialize db-specific functionality.
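For example, a plugin might branch on the adapter before sanitization runs (a sketch; the `'mongoose'` name value is an assumption):

```ts
import type { Config } from 'payload'

export const myPlugin = (incomingConfig: Config): Config => {
  // `db.name` is now available on the unsanitized config
  if (incomingConfig.db?.name === 'mongoose') {
    // conditionally initialize MongoDB-specific functionality here
  }

  return incomingConfig
}
```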
### What
This PR hardens the `getSafeRedirect` utility to improve security
around open redirect handling.
### How
- Normalizes and decodes the redirect path using `decodeURIComponent`
- Catches malformed encodings with a try/catch fallback
- Blocks open redirects
In 3.0, we made the decision to export all types from the main package
export (e.g. `payload/types` => `payload`). This improves type
discoverability by IDEs and simplifies importing types.
This PR does the same for our db adapters, which still have a separate
`/types` subpath export. While those are kept for
backwards-compatibility, we can remove them in 4.0.
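Illustrative example with the postgres adapter (the specific type name here is just an example):

```ts
// Before: types were imported from the subpath export:
// import type { MigrateUpArgs } from '@payloadcms/db-postgres/types'

// After: the same types are available from the main package export
// (the subpath remains for backwards compatibility until 4.0):
import type { MigrateUpArgs } from '@payloadcms/db-postgres'
```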
### What?
This PR aims to deflake the `test/versions/e2e.spec.ts:925:5 › Versions
› Collections with draft validation › - with autosave - shows a prevent
leave alert when form is submitted but invalid` e2e test.
The issue seems to be that the `fill` call followed by a `page.reload`
sometimes conflicts with autosave, which may cause the test to flake.
### Why?
To deflake this test in ci.
### How?
Adds a single `waitForAutoSaveToRunAndComplete` function call prior to
the last call to `page.reload`. In my testing on my local machine,
adding the `waitForAutoSaveToRunAndComplete` call allows the test to
pass every time; without it, the test fails consistently.
Synchronous file system operations such as `readFileSync` block the
event loop, whereas the asynchronous equivalents (like `await
fs.promises.readFile`) do not. This PR replaces certain synchronous fs
calls with their asynchronous counterparts in contexts where async
operations are already in use, improving performance by avoiding event
loop blocking.
Most of the synchronous calls were in our file upload code. Converting
them to async should theoretically free up the event loop and allow
other requests to run in parallel without delay.
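An illustrative before/after (not a specific Payload call site):

```ts
import fs from 'fs'

async function getUploadBuffer(filePath: string): Promise<Buffer> {
  // Before: fs.readFileSync(filePath) would block the event loop here,
  // stalling other requests until the read completes.
  // After: the async equivalent yields while the file is read.
  return fs.promises.readFile(filePath)
}
```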
### What?
This PR aims to deflake the indexed fields e2e test in
`test/fields/collections/Indexed/e2e.spec.ts`.
The issue is that this test is set up in a way where sometimes two
toasts will present themselves in the UI. The second toast assertion
will then fail with a strict mode violation, as the toast locator
resolves to two elements.
### Why?
To prevent this test from flaking in ci.
### How?
Adds a new `dismissAfterAssertion` flag to the `assertToastErrors`
helper function which dismisses the toasts. This way, the toasts will
not raise the aforementioned error, as they will be dismissed from the
UI.
The dismissal is handled in a separate loop so that the assertions occur
first. This is done so that dismissing a toast does not surface errors
due to the display order of the remaining toasts changing.
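Usage looks roughly like this (the exact signature of this test helper may differ):

```ts
await assertToastErrors({
  page,
  errors: ['Title', 'Unique Text'], // hypothetical field labels
  // New flag: assert every toast first, then dismiss them so a later
  // strict-mode locator doesn't resolve to multiple elements
  dismissAfterAssertion: true,
})
```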
### What?
Fixed client config caching to properly update when switching languages
in the admin UI.
### Why?
Currently, switching languages doesn't fully update the UI because
client config stays cached with previous language translations.
### How?
Created a language-aware caching system that stores separate configs for
each language and only uses cached config when it matches the active
language.
Before:
```typescript
let cachedClientConfig: ClientConfig | null = global._payload_clientConfig

if (!cachedClientConfig) {
  cachedClientConfig = global._payload_clientConfig = null
}

export const getClientConfig = cache(
  (args: { config: SanitizedConfig; i18n: I18nClient; importMap: ImportMap }): ClientConfig => {
    if (cachedClientConfig && !global._payload_doNotCacheClientConfig) {
      return cachedClientConfig
    }
    // ... create new config ...
  },
)
```
After:
```typescript
let cachedClientConfigs: Record<string, ClientConfig> = global._payload_localizedClientConfigs

if (!cachedClientConfigs) {
  cachedClientConfigs = global._payload_localizedClientConfigs = {}
}

export const getClientConfig = cache(
  (args: { config: SanitizedConfig; i18n: I18nClient; importMap: ImportMap }): ClientConfig => {
    const { config, i18n, importMap } = args
    const currentLocale = i18n.language

    if (!global._payload_doNotCacheClientConfig && cachedClientConfigs[currentLocale]) {
      return cachedClientConfigs[currentLocale]
    }
    // ... create new config with correct translations ...
  },
)
```
Also added handling for cache clearing during HMR to ensure
compatibility with the existing system.
Fixes #11406
---------
Co-authored-by: Jacob Fletcher <jacobsfletch@gmail.com>
Replaces the queue pattern used within autosave with the `useQueues`
hook introduced in #11579. To do this, queued tasks now accept an
options object with callbacks that tie into events of the process, such
as before it begins (to prevent it from running) and after it has
finished (to perform side effects).
The `useQueues` hook now also maintains an array of queued tasks as
opposed to individual refs.
Lexical nested fields are currently not set up to handle access control
on the client properly. Despite that, we were passing parent permissions
to `RenderFields`, which causes certain fields to not show up if the
document does not have `create` permission.
This PR comes with a bunch of improvements to our template generation
script that make it safer and more reliable:
- Bumps all our templates
- Pins all payload packages to the latest version in the gen-templates
script, since using `latest` as the payload version in our templates has
proven unreliable
- Adds the missing `website` entry for our template variations, thus
ensuring its lockfile gets updated
- Adds importmap generation to the gen-templates script
- Adds a new `script:gen-templates:build` script to verify that all
templates still build correctly
The Cell component for a select field, such as our `_status` field, now
adds a class for the selected option (e.g. `selected--published`) so it
can be easily targeted with CSS.
---------
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>
### What?
In the same vein as #11696, this PR optimizes how images are selected
for display in the document edit view. It ensures that only image files
are processed and selects the most appropriate size to minimize
unnecessary downloads and improve performance.
#### Previously:
- Non-image files were being processed unnecessarily, despite not
generating thumbnails.
- Images without a `thumbnailURL` defaulted to their original full size,
even when smaller, optimized versions were available.
#### Now:
- **Only images** are processed for thumbnails, avoiding redundant
requests for non-images.
- **The smallest available image within a target range** (`40px -
180px`) is prioritized for display.
- **If no images fit within this range**, the logic selects:
- The next smallest larger image (if available).
- The **original** image if it is smaller than the next available larger
size.
- The largest **smaller** image if no better fit exists.
### Why?
Prevents unnecessary downloads of non-image files, reduces bandwidth
usage by selecting more efficient image sizes, and improves load times
and performance in the edit view.
### How?
- **Filters out non-image files** when determining which assets to
display.
- Uses the same algorithm as in #11696, but turns it into a reusable
function to be used in various areas around the codebase, namely the
upload field's hasOne and hasMany components. A simplified sketch of the
selection logic follows.
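Here's that sketch (names and shapes are illustrative, not the actual reusable function):

```ts
type ImageSize = { url: string; width: number }

function selectThumbnail(original: ImageSize, sizes: ImageSize[]): ImageSize {
  const sorted = [...sizes].sort((a, b) => a.width - b.width)

  // 1. Prefer the smallest size within the 40px-180px target range
  const inRange = sorted.find((s) => s.width >= 40 && s.width <= 180)
  if (inRange) return inRange

  // 2. Otherwise take the next larger size, or the original if smaller
  const nextLarger = sorted.find((s) => s.width > 180)
  if (nextLarger) return original.width < nextLarger.width ? original : nextLarger

  // 3. Fall back to the largest smaller image if no better fit exists
  const largestSmaller = sorted.filter((s) => s.width < 40).pop()
  return largestSmaller ?? original
}
```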
Before (4.5 MB transfer):

After (15.9 kB transfer):

There are cases where a storage plugin is disabled during development
and enabled in production. This results in import maps that differ
depending on whether they're generated during development or production.
In many cases, import maps are generated only during development; in
production, we just re-use what was generated locally. This causes
missing import map entries for plugins that are disabled during
development.
This PR ensures the import map entries are added regardless of the
enabled state of those plugins. This is necessary for our
generate-templates script to not omit the vercel blob storage import map
entry.
### What?
Fixes a few broken links in `docs/custom-components` and
`docs/rich-text`. Also made some custom component links lowercase.
### Why?
To direct end users to the correct location in the docs.
### How?
Changes to `docs/custom-components/custom-views.mdx`,
`docs/custom-components/list-view.mdx`, and
`docs/rich-text/custom-features.mdx`.
When selecting query presets from the list drawer, all query presets are
available for selection, even if unrelated to the underlying collection.
When selecting one of these presets, the list view will crash with
client-side exceptions because the columns and filters that are applied
are incompatible.
The fix is to thread `filterOptions` through the query presets drawer.
This ensures that only presets related to the underlying collection are
shown.
When rendering custom fields nested within arrays or blocks, such as
the Lexical rich text editor (which is treated as a custom field), these
fields will sometimes disappear when form state requests are invoked
sequentially. This is especially reproducible on slow networks.
This is because form state invocations are placed into a [task
queue](https://github.com/payloadcms/payload/pull/11579) which aborts
the currently running tasks when a new one arrives. By doing this, local
form state is never dispatched, and the second task in the queue becomes
stale.
The fix is to _not_ abort the currently running task. This will trigger
a complete rendering cycle, and when the second task is invoked, local
state will be up to date.
Fixes #11340, #11425, and #11824.
Continuation of #11489. This adds a new, optional `updateJobs` db
adapter method that reduces the number of database calls for the jobs
queue.
## MongoDB
### Previous: running a set of 50 queued jobs
- 1x db.find (= 1x `Model.paginate`)
- 50x db.updateOne (= 50x `Model.findOneAndUpdate`)
### Now: running a set of 50 queued jobs
- 1x db.updateJobs (= 1x `Model.find` and 1x `Model.updateMany`)
**=> 51 db round trips before, 2 db round trips after**
### Previous: upon task completion
- 1x db.find (= 1x `Model.paginate`)
- 1x db.updateOne (= 1x `Model.findOneAndUpdate`)
### Now: upon task completion
- 1x db.updateJobs (= 1x `Model.findOneAndUpdate`)
**=> 2 db round trips before, 1 db round trip after**
## Drizzle (e.g. Postgres)
### running a set of 50 queued jobs
- 1x db.query[tablename].findMany
- 50x db.select
- 50x upsertRow
This is unaffected by this PR and will be addressed in a future PR.
Within auth-enabled collections, we inject the `password` and
`confirmPassword` fields into the field schema map. While this is fine
within the edit view where these fields are used, it breaks field paths
within the version diff view, where unnamed fields are no longer able to
look up their corresponding config. This is because the presence of
these injected fields increments the field indices by two.
A temporary fix is to simply inject these fields _last_ into the schema
map. This way, their presence does not disrupt field path generation. A
long-term fix should be implemented, however, where these fields
actually exist on the collection config itself. This way, no config
mutation would be required, as the sanitized config would be the single
source of truth.
To do this, we'd need to ensure that these fields do not appear in any
APIs, that they do not generate types, etc.
This will improve performance when updating a single document in
postgres/drizzle, if the ID is known.
Previously, this resulted in 2 sequential operations:
- `db.select` to fetch the document by its ID
- `upsertRow` to update the document (multiple db operations)
This PR removes the unnecessary `db.select` call, as the document ID is
already known
Previously, many test cases in `int/relationships` were wrapped in the
"custom IDs" describe block even though they aren't related to custom
IDs at all. This rearranges them into the appropriate blocks.
### What
The `crypto.randomUUID()` function was causing errors in non-secure
contexts (HTTP), as it is only available in secure contexts (HTTPS).
### How
Added a fallback to generate UUIDs using the `uuid` library when
`crypto.randomUUID()` is not available.
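A sketch of the fallback logic (the function name is illustrative):

```ts
import { v4 as uuidv4 } from 'uuid'

// Prefer the native API, which is only available in secure contexts
// (HTTPS), and fall back to the `uuid` package elsewhere (e.g. plain HTTP)
export function generateUUID(): string {
  if (typeof crypto !== 'undefined' && typeof crypto.randomUUID === 'function') {
    return crypto.randomUUID()
  }

  return uuidv4()
}
```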
Fixes #11825
This PR introduces a new utility function, `getSafeRedirect`, to
sanitize and validate redirect paths used in the login flow.
It replaces the previous use of `encodeURIComponent` and inline string
checks with a centralized, reusable, and more secure approach.
#### `getSafeRedirect` utility:
- Ensures redirect paths start with a single `/`
- Blocks protocol-relative URLs (e.g., `//evil.com`)
- Blocks JavaScript schemes (e.g., `/javascript:alert(1)`)
- Blocks full URL redirects like `/http:` or `/https:`
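A simplified sketch of those checks (the shipped utility may differ in naming and detail):

```ts
export function getSafeRedirect(redirectTo: unknown, fallback = '/'): string {
  if (typeof redirectTo !== 'string') return fallback

  const path = redirectTo.trim()
  const lower = path.toLowerCase()

  const isSafe =
    path.startsWith('/') && // must be a relative path starting with a single `/`
    !path.startsWith('//') && // no protocol-relative URLs, e.g. //evil.com
    !lower.startsWith('/javascript:') && // no JavaScript schemes
    !lower.startsWith('/http:') && // no smuggled full URLs
    !lower.startsWith('/https:')

  return isSafe ? path : fallback
}
```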
This ensures that the lexical field can be rendered without having to
wrap it inside an `EntityVisibilityProvider`, making it a bit easier to
manually render the lexical field in a custom component.
Same comment as in #11560, #11831, and #11226:
> In `src/index.ts` I see four more errors in my IDE that don't appear
when I run the typecheck in the CLI with `tsc --noEmit`.
Fixes #11458
Some complex, nested fields were receiving incorrect field paths and
schema paths, leading to an `"Error: No client field found"` error.
This PR ensures field paths are calculated correctly by matching how
they're calculated in Payload hooks.
Query Presets allow you to save and share filters, columns, and sort
orders for your collections. This is useful for reusing common or
complex filtering patterns and column configurations across your team.
Query Presets are defined on the fly by the users of your app, rather
than being hard coded into the Payload Config.
Here's a screen recording demonstrating the general workflow as it
relates to the list view. Query Presets are not exclusive to the admin
panel, however, as they could be useful in a number of other contexts
and environments.
https://github.com/user-attachments/assets/1fe1155e-ae78-4f59-9138-af352762a1d5
Each Query Preset is saved as a new record in the database under the
`payload-query-presets` collection. This effectively makes them CRUDable
and allows for an endless number of preset configurations. As you make
changes to filters, columns, limit, etc., you can choose to save them as
a new record and optionally share them with others.
Normal document-level access control will determine who can read,
update, and delete these records. Payload provides a set of sensible
defaults here, such as "only me", "everyone", and "specific users", but
you can also extend your own set of access rules on top of this, such as
"by role", etc. Access control is customizable at the operation level;
for example, "everyone" can read, but "only me" can update.
To enable the Query Presets within a particular collection, set
`enableQueryPresets` on that collection's config.
Here's an example:
```ts
{
  // ...
  enableQueryPresets: true,
}
```
Once enabled, a new set of controls will appear within the list view of
the admin panel. This is where you can select and manage query presets.
General settings for Query Presets are configured under the root
`queryPresets` property. This is where you can customize the labels,
apply custom access control rules, etc.
Here's an example of how you might augment the access control properties
with your own custom rule to achieve RBAC:
```ts
{
  // ...
  queryPresets: {
    constraints: {
      read: [
        {
          label: 'Specific Roles',
          value: 'specificRoles',
          fields: [roles],
          access: ({ req: { user } }) => ({
            'access.update.roles': {
              in: [user?.roles],
            },
          }),
        },
      ],
    },
  },
}
```
Related: #4193 and #3092
---------
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>
Previously, if you were querying a collection that has a join field
with `draft: true`, and the join field's collection also has
`versions.drafts: true`, our db adapter would still query the original
SQL table / MongoDB collection instead of the versions one. This isn't
quite right, since we respect `draft: true` when populating
relationships.