Currently, an optimized DB update (simple data => no
delete-and-create-row) does the following:
1. SQL `UPDATE`
2. SQL `SELECT`
This PR reduces this further to a single DB call for simple
collections:
1. SQL `UPDATE ... RETURNING`
This only works for simple collections that do not have any fields that
need to be fetched from other tables. If a collection has fields like
relationship or blocks, we'll need that separate SELECT call to join in
the other tables.
In 4.0, we can remove all "complex" fields from the jobs collection and
replace them with a JSON field to make use of this optimization.
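For illustration, here is a minimal sketch of the single-round-trip pattern using drizzle-orm's `.returning()` (the `posts` table and connection setup are hypothetical; Payload's internal implementation differs):

```ts
import { eq } from 'drizzle-orm'
import { drizzle } from 'drizzle-orm/node-postgres'
import { integer, pgTable, text } from 'drizzle-orm/pg-core'
import { Pool } from 'pg'

// hypothetical table definition for illustration
const posts = pgTable('posts', {
  id: integer('id').primaryKey(),
  title: text('title'),
})

const db = drizzle(new Pool({ connectionString: process.env.DATABASE_URI }))

// one round trip: UPDATE ... RETURNING hands back the updated row,
// replacing the previous UPDATE-then-SELECT pair
const [updatedPost] = await db
  .update(posts)
  .set({ title: 'Updated title' })
  .where(eq(posts.id, 1))
  .returning()
```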
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210803039809814
### What?
This PR ensures that when a document is created using the `Publish in
__` button, it is saved to the correct locale.
### Why?
During document creation, the buttons `Publish` or `Publish in [locale]`
have the same effect. As a result, we overlooked the case where a user
may specifically click `Publish in [locale]` for the first save. In this
scenario, the create operation did not respect the
`publishSpecificLocale` value, so the document was always saved in the
default locale regardless of the intended one.
### How?
Passes the `publishSpecificLocale` value to the create operation,
ensuring the document and version are saved to the correct locale.
**Fixes:** #13117
Some search params within the list view do not properly sync to user
preferences, and vice versa.
For example, when selecting a query preset, the `?preset=123` param is
injected into the URL and saved to preferences, but when reloading the
page without the param, that preset is not reactivated as expected.
### Problem
The reason this wasn't working before is that omitting this param would
also reset prefs. It was designed this way in order to support
client-side resets, e.g. clicking the query presets "x" button.
This pattern could never work, however, because it means that every
time the user navigates directly to the list view, their preference is
cleared, since no param exists in the query.
Note: this is not an issue with _all_ params, as not all are handled in
the same way.
### Solution
The fix is to use empty values instead, e.g. `?preset=`. When the server
receives this, it knows to clear the pref. If it doesn't exist at all,
it knows to load from prefs. And if it has a value, it saves to prefs.
On the client, we sanitize those empty values back out so they don't
appear in the URL in the end.
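A minimal sketch of these three-way semantics on the server (a hypothetical handler, not the actual Payload code):

```ts
// hypothetical server-side handling of the `preset` search param
const presetParam = searchParams.get('preset')

if (presetParam === null) {
  // param absent: fall back to the user's stored preference
} else if (presetParam === '') {
  // empty value (`?preset=`): clear the stored preference
} else {
  // concrete value: activate it and save it to preferences
}
```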
This PR also refactors much of the list query context and its respective
provider to be significantly more predictable and easier to work with,
namely:
- The `ListQuery` type now fully aligns with what Payload APIs expect,
e.g. `page` is a number, not a string
- The provider now receives a single `query` prop which matches the
underlying context 1:1
- Propagating the query from the server to the URL is significantly more
predictable
- Any new props that may be supported in the future will automatically
work
- No more reconciling `columns` and `listPreferences.columns`, it's just
`query.columns` (see the sketch below)
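For reference, a rough sketch of the aligned `ListQuery` shape (the field set here is illustrative, not exhaustive):

```ts
import type { Where } from 'payload'

// illustrative only - the real type ships with Payload's UI packages
type ListQuery = {
  columns?: string[]
  limit?: number
  page?: number // a number, not a string
  preset?: number | string
  sort?: string
  where?: Where
}
```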
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210827129744922
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.
Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs
## API Example
```ts
export default buildConfig({
  // ...
  jobs: {
    // ...
    // Or `cron` if you're not using serverless. If `manual` is used, the user needs to
    // run /api/payload-jobs/handleSchedules or payload.jobs.handleSchedules at regular intervals.
    scheduler: 'manual',
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule` would be
              // more error-prone if this were an array
              beforeSchedule: async (args) => {
                // Handles verifying that there are no jobs already scheduled or processing.
                // You can override this behavior by not calling defaultBeforeSchedule, e.g. if
                // you wanted to allow a maximum of 3 scheduled jobs in the queue instead of 1,
                // or add any additional conditions
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Handles updating the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                    (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      },
    ],
  },
})
```
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210495300843759
### What?
Fixes the `custom.plugin-import-export.disabled` flag to correctly
disable fields in all nested structures including:
- Groups
- Arrays
- Tabs
- Blocks
Previously, only top-level fields or direct children were respected.
This update ensures nested paths (e.g. `group.array.field1`,
`blocks.hero.title`, etc.) are matched and filtered from exports.
### Why?
To allow users to disable entire field groups or deeply nested fields in
structured layouts.
### How?
- Updated regex logic in both `createExport` and Preview components to
recursively support:
  - Indexed array fields (e.g. `array_0_field1`)
  - Block fields with slugs (e.g. `blocks_0_hero_title`)
  - Nested field accessors with correct part-by-part expansion
As discussed in [this
RFC](https://github.com/payloadcms/payload/discussions/12729), this PR
supports collection-scoped folders. You can scope folders to multiple
collection types or just one.
This unlocks the possibility of having folders on a per-collection basis
instead of always being shared across every collection. You can combine
this feature with `browseByFolder: false` to completely isolate a
collection from other collections.
Things left to do:
- [x] ~~Create a custom react component for the selecting of
collectionSlugs to filter out available options based on the current
folders parameters~~
https://github.com/user-attachments/assets/14cb1f09-8d70-4cb9-b1e2-09da89302995
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210564397815557
Adds a new operation `findDistinct` that returns the distinct values of a
field for a given collection.
Example:
Assume you have a `posts` collection with multiple documents, some of
which share the same title:
```js
// Example dataset (some titles appear multiple times)
[
{ title: 'title-1' },
{ title: 'title-2' },
{ title: 'title-1' },
{ title: 'title-3' },
{ title: 'title-2' },
{ title: 'title-4' },
{ title: 'title-5' },
{ title: 'title-6' },
{ title: 'title-7' },
{ title: 'title-8' },
{ title: 'title-9' },
]
```
You can now retrieve all unique title values using `findDistinct`:
```js
const result = await payload.findDistinct({
collection: 'posts',
field: 'title',
})
console.log(result.values)
// Output:
// [
// 'title-1',
// 'title-2',
// 'title-3',
// 'title-4',
// 'title-5',
// 'title-6',
// 'title-7',
// 'title-8',
// 'title-9'
// ]
```
You can also limit the number of distinct results:
```js
const limitedResult = await payload.findDistinct({
collection: 'posts',
field: 'title',
sortOrder: 'desc',
limit: 3,
})
console.log(limitedResult.values)
// Output:
// [
// 'title-9',
// 'title-8',
// 'title-7'
// ]
```
You can also pass a `where` query to filter the documents.
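For example (the `status` field here is hypothetical):

```ts
// only consider documents matching the filter
const filtered = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  where: { status: { equals: 'published' } },
})
```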
### What?
Adds four more arguments to the `mongooseAdapter`:
```typescript
useJoinAggregations?: boolean /* The big one */
useAlternativeDropDatabase?: boolean
useBigIntForNumberIDs?: boolean
usePipelineInSortLookup?: boolean
```
Also exports a new `compatabilityOptions` object from
`@payloadcms/db-mongodb`, where each key is a Mongo-compatible database
and the value is the recommended `mongooseAdapter` settings for
compatibility.
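A sketch of wiring this up (assuming `compatabilityOptions` exposes a `firestore` key):

```ts
import { compatabilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  db: mongooseAdapter({
    url: process.env.DATABASE_URI!,
    // spread the recommended settings for the target database
    ...compatabilityOptions.firestore,
  }),
})
```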
### Why?
When using Firestore and visiting
`/admin/collections/media/payload-folders`, we get:
```
MongoServerError: invalid field(s) in lookup: [let, pipeline], only lookup(from, localField, foreignField, as) is supported
```
Firestore doesn't support the full MongoDB aggregation API that Payload
uses when building the aggregations that populate join fields.
There are several other compatibility issues with Firestore:
- The invalid `pipeline` property is used in the `$lookup` aggregation
in `buildSortParams`
- Firestore only supports number IDs of type `Long`, but Mongoose
converts custom ID fields of type number to `Double`
- Firestore does not support the `dropDatabase` command
- Firestore does not support the `createIndex` command (not addressed in
this PR)
### How?
```typescript
useJoinAggregations?: boolean /* The big one */
```
When this is `false`, we skip the `buildJoinAggregation()` pipeline and resolve join fields through multiple queries instead. This could potentially be used with AWS DocumentDB and Azure Cosmos DB to support join fields, but I have not tested either of those databases.
```typescript
useAlternativeDropDatabase?: boolean
```
When `true`, monkey-patches (replaces) the `dropDatabase` function so that
it calls `collection.deleteMany({})` on every collection instead of
sending a single `dropDatabase` command to the database.
```typescript
useBigIntForNumberIDs?: boolean
```
When `true`, uses `mongoose.Schema.Types.BigInt` for custom ID fields of type `number`, which converts to a Firestore `Long` behind the scenes.
```typescript
usePipelineInSortLookup?: boolean
```
When `false`, modifies the `sortAggregation` pipeline in `buildSortParams()` so that the `pipeline` property is not used in the `$lookup` aggregation. This results in slightly worse performance when sorting by relationship properties.
### Limitations
This PR does not add support for transactions or creating indexes in Firestore.
### Fixes
Fixed a bug (and added a test) where you weren't able to sort by multiple properties on a relationship field.
### Future work
1. Firestore supports simple `$lookup` aggregations, but other databases might not. We could add a `useSortAggregations` property to disable aggregations in sorting.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
This is a follow-up to https://github.com/payloadcms/payload/pull/13060.
There are a bunch of other db adapter methods that use `upsertRow` for
updates: `updateGlobal`, `updateGlobalVersion`, `updateJobs`,
`updateMany`, `updateVersion`.
The previous PR implemented the optimized row-update logic inside the
`updateOne` adapter method. This PR moves that logic to the original
`upsertRow` function. Benefits:
- All the other db methods will benefit from this massive optimization
as well. This will be especially relevant for optimizing initial
Postgres job-queue updates - we should be able to close
https://github.com/payloadcms/payload/pull/11865 after another follow-up
PR.
- Easier-to-read db adapter methods due to less code.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210803039809810
Based on https://github.com/payloadcms/payload/pull/13060, which should
be merged first.
This PR adds the ability to update number fields atomically, which can
be important with parallel writes. For now, we support this only via
`payload.db.updateOne`.
For example:
```js
// increment by 10
const res = await payload.db.updateOne({
  data: {
    number: {
      $inc: 10,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})

// decrement by 3
const res2 = await payload.db.updateOne({
  data: {
    number: {
      $inc: -3,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})
```
### What?
Fixes the export field selection dropdown to correctly differentiate
between fields in named and unnamed tabs.
### Why?
Previously, when a `tabs` field contained both named and unnamed tabs,
subfields with the same `name` would appear as duplicates in the
dropdown (e.g. `Tab To CSV`, `Tab To CSV`). Additionally, selecting a
field from a named tab would incorrectly map it to the unnamed version
due to shared labels and missing path prefixes.
### How?
- Updated the `reduceFields` utility to manually construct the field
path and label using the tab’s `name` if present.
- Ensured unnamed tabs treat subfields as top-level and skip prefixing
altogether.
- Adjusted label prefix logic to show `Named Tab > Field Name` when
appropriate (see the sketch below).
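A sketch of the path/label construction idea described above (variable names are illustrative, not the actual plugin code):

```ts
// prefix subfield paths/labels with the tab's name only when the tab is named;
// unnamed tabs treat subfields as top-level
const path = tab.name ? `${tab.name}_${field.name}` : field.name
const label = tab.name ? `${tab.label} > ${field.label}` : field.label
```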
#### Before
<img width="169" height="79" alt="Screenshot 2025-07-15 at 2 55 14 PM"
src="https://github.com/user-attachments/assets/2ab2d19e-41a3-4be2-8496-1da2a79f88e1"
/>
#### After
<img width="211" height="79" alt="Screenshot 2025-07-15 at 2 50 38 PM"
src="https://github.com/user-attachments/assets/0620e96a-71cd-4eb1-9396-30d461ed47a5"
/>
Previously, we always initialized cron jobs when calling
`getPayload` or `payload.init`.
This is undesired in bin scripts - we don't want cron jobs to start
triggering db calls while we're running an initial migration via
`payload migrate`, for example. This previously led to a race
condition, triggering the following occasional error when job autoruns
were enabled:
```
DrizzleQueryError: Failed query: select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5
params: true,false,2025-07-10T21:25:03.002Z,autorunSecond,100
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:74:11)
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
... 6 lines matching cause stack trace ...
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
query: `select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5`,
params: [ true, false, '2025-07-10T21:25:03.002Z', 'autorunSecond', 100 ],
cause: error: relation "payload_jobs" does not exist
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/pg@8.16.3/node_modules/pg/lib/client.js:545:17
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:161:13
at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:72:12)
at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
at find (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/find/findMany.ts:162:19)
at Object.updateMany (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/updateJobs.ts:26:16)
at updateJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/utilities/updateJob.ts:102:37)
at runJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/operations/runJobs/index.ts:181:25)
at Object.run (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/localAPI.ts:137:12)
at N.fn (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/index.ts:866:13)
at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
length: 112,
severity: 'ERROR',
code: '42P01',
detail: undefined,
hint: undefined,
position: '406',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_relation.c',
line: '1449',
routine: 'parserOpenTable'
}
}
```
This PR makes running crons opt-in via a new `cron` flag. By default,
no cron jobs will be created.
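A sketch of opting in (assuming the flag sits alongside `config`, per the description above):

```ts
import { getPayload } from 'payload'
import config from './payload.config'

// cron jobs are now only initialized when explicitly requested
const payload = await getPayload({ config, cron: true })
```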
### What?
Adds a new `format` option to the `plugin-import-export` config that
allows users to force the export format (`csv` or `json`) and hide the
format dropdown from the export UI.
### Why?
In some use cases, allowing the user to select between CSV and JSON is
unnecessary or undesirable. This new option allows plugin consumers to
lock the format and simplify the export interface.
### How?
- Added a `format?: 'csv' | 'json'` field to `ImportExportPluginConfig`.
- When defined, the `format` field in the export UI is:
- Hidden via `admin.condition`
- Pre-filled via `defaultValue`
- Updated `getFields` to accept the plugin config and apply logic
accordingly.
### Example
```ts
importExportPlugin({
  format: 'json',
})
```
The tenant selector could become hidden by:
- logging in with a user that only has 1 tenant
- logging out
- logging in with a user that has more than 1 tenant
Simplifies useEffect usage. Adds e2e test for this case.
### What?
Introduces an additional `mimeType` validation based on the actual file
data to ensure the uploaded file matches the allowed `mimeTypes` defined
in the upload config.
### Why?
The current validation relies on the file extension, which can be easily
manipulated. For example, if only PDFs are allowed, a JPEG renamed to
`image.pdf` would bypass the check and be accepted. This change prevents
such cases by verifying the true MIME type.
### How?
Performs a secondary validation using the file’s binary data (buffer),
providing a more reliable MIME type check.
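A sketch of the general technique, using the `file-type` package for illustration (`file.data` and `allowedMimeTypes` are stand-ins; Payload's actual implementation may differ):

```ts
import { fileTypeFromBuffer } from 'file-type'

// detect the real MIME type from the file's magic bytes,
// then check it against the configured allow-list
const detected = await fileTypeFromBuffer(file.data)
if (detected && !allowedMimeTypes.includes(detected.mime)) {
  throw new Error(`File content is ${detected.mime}, which is not allowed`)
}
```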
Fixes #12905
Monomorphic join fields were not using the `draft` argument when
fetching documents to display in the table. This change makes the join
field treatment of drafts consistent with the `relationship` type
fields.
Added e2e test to cover.
While we can use `joins`, `select`, `populate`, `depth`, or `draft` on
auth collections when finding or finding by ID, these arguments weren't
supported for `/me`, which meant that in some situations - like our
ecommerce template - we couldn't optimise these calls.
A workaround was to make a call to `/me` and then use the returned
user ID for a `findByID` operation.
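For example, a sketch of what this enables over REST (`token` stands in for the user's JWT; the query-string encoding follows the usual qs conventions):

```ts
// fetch the current user with the same knobs as a find operation
const res = await fetch('/api/users/me?depth=0&select[email]=true', {
  headers: { Authorization: `JWT ${token}` },
})
const { user } = await res.json()
```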
With sessions enabled, the login operation calls `updateOne`. In
MongoDB, data that does not match the schema is removed; `collection`
and `_strategy` are not part of the schema, so they need to be
reassigned after the user is updated.
Adds int test.
This PR makes it so that `modifyResponseHeaders` is supported in our
adapters when set on the collection config. Previously it would be
ignored.
This means that users can now modify or append new headers to what's
returned by each service.
```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    modifyResponseHeaders: ({ headers }) => {
      const newHeaders = new Headers(headers) // Copy existing headers
      newHeaders.set('X-Frame-Options', 'DENY') // Set new header
      return newHeaders
    },
  },
}
```
Also adds support for a `void` return from the `modifyResponseHeaders`
function, for cases where the user just wants to mutate the existing
headers and doesn't need more control.
e.g.:
```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    modifyResponseHeaders: ({ headers }) => {
      headers.set('X-Frame-Options', 'DENY') // You can mutate headers directly without returning
    },
  },
}
```
Manual testing checklist (no CI e2es setup for these envs yet):
- [x] GCS
- [x] S3
- [x] Azure
- [x] UploadThing
- [x] Vercel Blob
---------
Co-authored-by: James <james@trbl.design>
If `payload.db.updateOne` receives simple data, meaning none of the
following:
* Arrays / Blocks
* Localized fields
* `hasMany: true` text / select / number / relationship fields
* Relationship fields with `relationTo` as an array
This PR simplifies the logic to a single SQL `UPDATE` (one `set` call),
with no extra (useless) steps rewriting all the array / block /
localized tables when nothing in them changed. However, it's worth
noting that `payload.update` (as opposed to `payload.db.updateOne`)
currently passes all the previous data as well, so this change won't
have any effect unless you're using `payload.db.updateOne` directly (or
our internal logic that uses it). A separate PR may optimize
`payload.update` in the same way in the future.
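For example, a call shaped like the `updateOne` examples earlier in these notes now results in a single SQL statement, assuming the collection has only simple columns:

```ts
// one UPDATE, no array/block/localized table rewrites
await payload.db.updateOne({
  collection: 'posts',
  data: { title: 'Updated title' },
  where: { id: { equals: post.id } },
})
```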
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210710489889576
Currently, when a nonexistent document is accessed via the URL, a
`NotFound` page is displayed with a button to return to the dashboard.
In most cases, the next step the user will take is to navigate to the
list of documents in that collection. If we automatically redirect users
to the list view and display the error in a banner, we can save them a
couple of redirects.
This is a very common scenario when writing tests or restarting the
local environment.
## Before

## After

### What?
Improves the flattening logic used in the import-export plugin to
correctly handle polymorphic relationships (both `hasOne` and `hasMany`)
when generating CSV columns.
### Why?
Previously, `hasMany` polymorphic relationships would flatten their full
`value` object recursively, resulting in unwanted keys like `createdAt`,
`title`, `email`, etc. This change ensures that only the `id` and
`relationTo` fields are included, matching how `hasOne` polymorphic
fields already behave.
### How?
- Updated `flattenObject` to special-case `hasMany` polymorphic
relationships and extract only `relationTo` and `id` per index.
- Refined `getFlattenedFieldKeys` to return correct column keys for
polymorphic fields:
  - hasMany polymorphic → `name_0_relationTo`, `name_0_id`
  - hasOne polymorphic → `name_relationTo`, `name_id`
  - monomorphic → `name` or `name_0`
- **Added try/catch blocks** around `toCSVFunctions` calls in
`flattenObject`, with descriptive error messages including the column
path and input value. This improves debuggability if a custom `toCSV`
function throws.
Previously, we were performing this check before calling the fetch
function. This changes it to perform the check within the dispatcher.
It adjusts the int tests to trigger both the dispatcher lookup function
(which only runs when a valid IP isn't already passed) and the check
before calling fetch.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210733180484570
Occasionally, I find myself on a URL like
`https://domain.com/admin/collections/myCollection/docId` and I modify
the URL with the intention of going to the admin panel, but I shorten it
in the wrong place: `https://domain.com/admin/collections`.
The confusion arises because the admin panel basically displays the
collections.
I think this redirect is a subtle but nice touch, since `/collections`
is a URL that doesn't exist.
EDIT: I'm now doing the same thing for `/globals`.
### What?
This PR introduces support for copy + pasting complex fields such as
Arrays and Blocks. These changes introduce a new `ClipboardAction`
component that houses the logic for copying and pasting to and from the
clipboard for supported fields. I've scoped this PR to include only
Blocks & Arrays; however, the structure of the components introduced
lends itself to being easily extended to other field types. I've limited
the scope because there may be design & functional blockers that make it
unclear how to add actions to particular fields.
Supported fields:
- Arrays
([Demo](https://github.com/user-attachments/assets/523916f6-77d0-43e2-9a11-a6a9d8c1b71c))
- Array Rows
([Demo](https://github.com/user-attachments/assets/0cd01a1f-3e5e-4fea-ac83-8c0bba8d1aac))
- Blocks
([Demo](https://github.com/user-attachments/assets/4c55ac2b-55f4-4793-9b53-309b2e090dd9))
- Block Rows
([Demo](https://github.com/user-attachments/assets/1b4d2bea-981a-485b-a6c4-c59a77a50567))
Fields that may be supported in the future with minimal effort by
adopting the changes introduced here:
- Tabs
- Groups
- Collapsible
- Relationships
This PR also encompasses e2e tests that check both field and row-level
copy/pasting.
### Why?
To make it simpler and faster to copy complex fields over between
documents and rows within those docs.
### How?
Introduces a new `ClipboardAction` component with helper utilities to
aid in copy/pasting and validating field data.
Addresses #2977 & #10703
Notes:
- There seems to be an issue with Blocks & Arrays that contain RichText
fields, where the RichText field disappears from the DOM upon replacing
form state. These fields resurface after either saving the data or
dragging/dropping the row containing them.
- Copying a row and then pasting it at the field level will overwrite
the field to include only that one row. This is intended, but it can be
changed if requested.
- Clipboard permissions are required to use this feature. [See Clipboard
API caniuse](https://caniuse.com/async-clipboard).
#### TODO
- [x] ~~I forgot BlockReferences~~
- [x] ~~Fix tests failing due to new buttons causing locator conflicts~~
- [x] ~~Ensure deeply nested structures work~~
- [x] ~~Add missing translations~~
- [x] ~~Implement local storage instead of clipboard api~~
- [x] ~~Improve tests~~
---------
Co-authored-by: Germán Jabloñski <43938777+GermanJablo@users.noreply.github.com>
Fixes #12834
`loginAttempts` was being shown in the admin panel when it should be
hidden. The field is set to `hidden: true`, so its value is removed
from `siblingData` and passes the `allowDefaultValue` check, showing
inconsistent data.
This PR ensures the default value is not returned if the field has a
value that was removed due to the field being hidden.
Login attempts were not being reset correctly, which led to situations
where a failed login attempt followed by a successful login would keep
`loginAttempts` at 1.
### Before
Example with maxAttempts of 2:
- failed login -> `loginAttempts: 1`
- successful login -> `loginAttempts: 1`
- failed login -> `loginAttempts: 2`
- successful login -> `"This user is locked due to having too many
failed login attempts."`
### After
Example with maxAttempts of 2:
- failed login -> `loginAttempts: 1`
- successful login -> `loginAttempts: 0`
- failed login -> `loginAttempts: 1`
- successful login -> `loginAttempts: 0`
When adding a custom ID field to an array's config, both the default ID
field provided by Payload and the custom ID field exist in the
resulting config. This can lead to problems when looking up the
field's config, where either one or the other may be returned.
Fixes #12978
Adds `restrictedFileTypes` (default: `false`) to upload collections,
which prevents files on a restricted list from being uploaded.
To skip this check:
- set `[Collection].upload.restrictedFileTypes` to `true`
- set `[Collection].upload.mimeTypes` to any type(s)
Previously, `db.updateOne` calls with `where` queries that matched no
results would create new rows on Drizzle. Essentially, `db.updateOne`
behaved like `db.upsertOne` on Drizzle.
Required for #13005.
Opening an autosave-enabled document within a drawer triggers an
infinite loop when the root document is also autosave-enabled.
This was for two reasons:
1. Autosave would run and change the `updatedAt` timestamp, which would
trigger another run of autosave, and so on. The timestamp is now removed
before comparison to ensure that sequential autosave runs are skipped.
2. The `dequal()` call was not being given the `.current` property of
the ref object. This meant it would never evaluate to `true` and
therefore never skip unnecessary autosaves to begin with (see the
sketch below).
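A minimal sketch of the corrected comparison (`formState` and `lastValueRef` are illustrative stand-ins, not the actual variable names):

```ts
import { dequal } from 'dequal'

// strip the volatile timestamp, then compare against the previous
// snapshot via the ref's `.current` property
const { updatedAt, ...comparable } = formState
if (dequal(comparable, lastValueRef.current)) {
  return // nothing actually changed - skip this autosave
}
lastValueRef.current = comparable
```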
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210697235723932
### What?
Ensure the export preview table includes all field keys as columns, even
if those fields are not populated in any of the returned documents.
### Why?
Previously, if none of the documents in the preview result had a value
for a given field, that column would be missing entirely from the
preview table.
### How?
- Introduced a `getFlattenedFieldKeys` utility that recursively extracts
every flattened field accessor from the collection’s config, including
those undefined in the returned documents
- Updated the preview UI logic to build columns from all flattened keys,
not just the first document's
Adds support for `halfvec`, `sparsevec`, and `bit` (binary vector)
column types. This is required to support indexing of embeddings with
more than 2000 dimensions on Postgres using the pgvector extension.
Supersedes #12992. Partially closes #12975.
Right now autosave-enabled documents opened within a drawer will
unnecessarily remount on every autosave interval, causing loss of input
focus, etc. This makes it nearly impossible to edit these documents,
especially if the interval is very short.
But the same is true for non-autosave documents when "manually" saving,
e.g. pressing the "save draft" or "publish changes" buttons. This has
gone largely unnoticed, however, as the user has already lost focus of
the form to interact with these controls, and they somewhat expect this
behavior or at least accept it.
Now, the form remains mounted across autosave events and the user's
cursor never loses focus. Much better.
Before:
https://github.com/user-attachments/assets/a159cdc0-21e8-45f6-a14d-6256e53bc3df
After:
https://github.com/user-attachments/assets/cd697439-1cd3-4033-8330-a5642f7810e8
Related: #12842
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210689077645986
### What?
The "Preview Sizes" button in the file upload UI was not showing up if:
- `crop` and `focalPoint` were both `false`
- No `customUploadActions` were provided
- But image sizes were configured
### Why?
This happened because `UploadActions` wasn’t rendered at all unless
adjustments or custom actions were present.
### How?
Update the conditional in `StaticFileDetails` to also render
`UploadActions` when:
- `hasImageSizes` is `true` and the document has a `filename`
Fixes #12832
Fixes #12826
The "leave without saving" prompt was being triggered when no changes
were made to the tenant. This should only happen when the value in form
state differs from that of the selected tenant, i.e. after changing
tenants.
Adds tenant selector syncing so the selector updates when a tenant is
added or the name is edited.
Also adds E2E for most multi-tenant admin functionality.
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1210562742356842