### What?
This PR fixes an issue when using `text` & `number` fields with
`hasMany: true` where the last entry would be unreachable, and thus
undeletable, because the `transformForWrite` function did not track
these rows for deletion. This caused values that should have been deleted
to remain in the edit view form, as well as in the db, after a submission.
This PR also properly threads the placeholder value from
`admin.placeholder` to `text` & `number` `hasMany: true` fields.
### Why?
To remove rows from the db when a submission is made where these fields
are empty arrays, and to properly show an appropriate placeholder when
one is set in config.
### How?
Adjusts `transformForWrite` and `traverseFields` to keep track of
rows for deletion.
Fixes #11781
Before:
[Editing---Post-dbpg-before--Payload.webm](https://github.com/user-attachments/assets/5ba1708a-2672-4b36-ac68-05212f3aa6cb)
After:
[Editing---Post--dbpg-hasmany-after-Payload.webm](https://github.com/user-attachments/assets/1292e998-83ff-49d0-aa86-6199be319937)
Previously, having `null` in an `in` query was possible in MongoDB but
not in Postgres/SQLite:
```ts
const { docs } = await payload.find({
  collection: 'posts',
  where: { text: { in: ['text-1', 'text-3', null] } },
})
```
This PR fixes that behavior.
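For context, `IN` alone never matches `NULL` in SQL, so a query like the one above presumably has to express the null case as a separate condition. A rough illustration of the resulting shape, with made-up table and column names:

```sql
-- Illustrative only: real table/column names will differ.
SELECT *
FROM "posts"
WHERE "posts"."text" IN ('text-1', 'text-3')
   OR "posts"."text" IS NULL;
```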
This PR introduces a few changes to improve turbopack compatibility and
ensure e2e tests pass with turbopack enabled.
## Changes to improve turbopack compatibility
- Use correct sideEffects configuration to fix scss issues
- Import scss directly instead of duplicating our scss rules
- Fix some scss rules that are not supported by turbopack
- Bump Next.js and all other dependencies used to build payload
## Changes to get tests to pass
For an unknown reason, flaky tests flake a lot more often in turbopack.
This PR does the following to get them to pass:
- Add more `wait`s
- Fix actual flakes by ensuring previous operations are properly awaited
## Blocking turbopack bugs
- [X] https://github.com/vercel/next.js/issues/76464
- Fix PR: https://github.com/vercel/next.js/pull/76545
- Once fixed: change `"sideEffectsDisabled":` back to `"sideEffects":`
## Non-blocking turbopack bugs
- [ ] https://github.com/vercel/next.js/issues/76956
## Related PRs
- https://github.com/payloadcms/payload/pull/12653
- https://github.com/payloadcms/payload/pull/12652
The databases do not keep track of document order internally, so when
sorting by non-unique fields such as shared `order` number values, the
returned order will be random and inconsistent.
While this issue is far more noticeable on MongoDB, it can also occur in
Postgres in certain environments.
Combined with pagination, this can lead to the perception of duplicated
or inconsistent data.
This PR adds a second sort parameter to queries so that we always have a
fallback: `-createdAt` will be used by default, or `id` if timestamps are
disabled.
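As an illustration of the idea (not the exact internal code), the fallback could be appended roughly like this, assuming the requested sort has already been normalized to an array:

```ts
// Illustrative sketch: append a stable fallback sort so ties between equal
// sort values are broken deterministically.
function withFallbackSort(sort: string[], timestamps: boolean): string[] {
  const fallback = timestamps ? '-createdAt' : 'id'
  return sort.includes(fallback) ? [...sort] : [...sort, fallback]
}

// e.g. withFallbackSort(['order'], true) => ['order', '-createdAt']
```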
⚠️ `orderable` fields will no longer be `required` and `unique`, so your
database may prompt you to accept an automatic migration if you're using
[this
feature](https://payloadcms.com/docs/configuration/collections#config-options).
Note that the `orderable` feature is still experimental, so it may still
receive breaking changes without a major upgrade or contain bugs. Use it
with caution.
___
The `orderable` fields will not have `required` and `unique` constraints
at the database schema level, in order to automatically migrate
collections that incorporate this property.
Now, when a user adds the `orderable` property to a collection or join
field, existing documents will have the order field set to undefined.
The first time you try to reorder them, the documents will be
automatically assigned an initial order, and you will be prompted to
refresh the page.
We believe this provides a better development experience than having to
manually migrate data with a script.
Additionally, it fixes a bug that occurred when using `orderable` in
conjunction with groups and tabs fields.
Closes:
- #12129
- #12331
- #12212
---------
Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>
Continuation of https://github.com/payloadcms/payload/pull/12185 and fixes
https://github.com/payloadcms/payload/issues/12221
The mentioned PR introduced auto sorting by the point field when a
`near` query is used, but it didn't build the actual query needed to order
results by their distance to the _given_ point (from the `near` query).
Now, we build:
```sql
order by point_field <-> ST_SetSRID(ST_MakePoint(lng, lat), 4326)
```
This does what we want.
This fixes issues identified in the predefined migration for
Postgres v2-v3, including the following:
### relation already exists
This can error with the following:
```ts
{
  err: [DatabaseError],
  msg: 'Error running migration 20250502_020052_relationships_v2_v3 column "relation_id" of relation "table_name" already exists.'
}
```
This was happening when you ran a migration with both a required
relationship or upload field and no schema specified in the db adapter.
When both of these were true, the function that replaces `ADD COLUMN`
with `ALTER COLUMN` in order to add `NOT NULL` constraints for required
fields wasn't working. This resulted in the `ADD COLUMN` statement
being called multiple times instead of the column being altered after
data had been copied over.
### camelCase column change
Enum columns created by `select` or `radio` fields changed from camelCase
to snake_case in v3. This change was not accounted for in the
relationship migration.
### DROP CONSTRAINT
It was pointed out
[here](https://github.com/payloadcms/payload/issues/10162#issuecomment-2610018940)
that the `DROP CONSTRAINT` needs to include `IF EXISTS` so that it can
continue if the constraint was already removed in a previous statement.
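For example, the generated statement now takes roughly the following shape (table and constraint names here are placeholders):

```sql
-- The key change is the IF EXISTS clause.
ALTER TABLE "pages_rels" DROP CONSTRAINT IF EXISTS "pages_rels_parent_fk";
```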
fixes https://github.com/payloadcms/payload/issues/10162
### What?
Turborepo fails to compile due to a type error in the generated Drizzle
schema.
### Why?
TypeScript may not include the module augmentation for
`@payloadcms/db-postgres`, especially in monorepo or isolated module
builds. This causes type errors during compilation of a Turborepo
project. Adding the type-only import guarantees that TypeScript loads
the relevant type definitions and augmentations, resolving these errors.
### How?
This PR adds a type-only import statement to ensure TypeScript
recognizes the module augmentation for `@payloadcms/db-postgres` in the
Drizzle schema generated by Payload. There is no runtime effect.
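A minimal sketch of the kind of import this refers to (the exact line emitted into `generated-schema.ts` may differ):

```ts
// Type-only import: loads the package's declaration files (including its
// `declare module` augmentation) without emitting any runtime code.
import type {} from '@payloadcms/db-postgres'
```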
Fixes #12311

Fixes https://github.com/payloadcms/payload/issues/12263
This was caused by passing unneeded columns to the `SELECT DISTINCT`
query, which we execute when there is a filter / sort on a nested
field / relationship. Since the only columns we need to pass to the
`SELECT DISTINCT` query are the ID and the field(s) specified in `sort`,
we now filter the `selectFields` variable.
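Illustratively, the distinct query now only selects what it needs; the table, join, and column names below are hypothetical:

```sql
-- Only the ID and the sort column(s) appear in the DISTINCT select list.
SELECT DISTINCT "posts"."id", "posts"."created_at"
FROM "posts"
LEFT JOIN "categories" ON "categories"."id" = "posts"."category_id"
WHERE "categories"."title" = 'News'
ORDER BY "posts"."created_at" DESC;
```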
### What?
Swaps out `deepAssertEqual` for the `dequal` package. Further details and
motivation in [this
discussion](https://github.com/payloadcms/payload/discussions/12192).
### Why?
`dequal` is about 100x faster in limited local testing. The `dequal`
package shows a 3-5x speedup over `deepAssertEqual` in benchmarks.
Memory usage is within acceptable levels.
### How?
Moves the result of `dequal` to a `const` for readability. Replaces the
`try { ... } catch { ... }` with `if { ... } else { ... }` for minimal
impact and change.
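A small sketch of the resulting pattern (function and variable names are illustrative, not the actual code):

```ts
import { dequal } from 'dequal'

// Store the comparison result in a const, then branch with if/else instead of
// catching a thrown assertion.
function hasChanged(previousValue: unknown, nextValue: unknown): boolean {
  const isEqual = dequal(previousValue, nextValue)

  if (isEqual) {
    return false // deeply equal: nothing to update
  } else {
    return true // values differ: proceed with the update
  }
}
```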
When running the v2-v3 migration, you might receive prompts for renaming
columns. Since we previously started a transaction first, the migration
could fail if you didn't answer within your transaction session timeout.
This moves the `getTransaction` call to after the prompts have been
answered, since there is no reason to start it earlier.
When `payload migrate` is run and a record with the name "dev" and
`batch: -1` is returned, the `batch` does not increment as expected and
gets stuck at 1. This change makes the batch increment from the
correct latest batch, ignoring the `name: "dev"` migration.
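Conceptually, the batch calculation now works something like this (a sketch under assumed record shapes, not the actual implementation):

```ts
// Compute the next batch number while ignoring the in-memory "dev" record.
type MigrationRecord = { batch?: number; name: string }

function getNextBatch(existing: MigrationRecord[]): number {
  const latestBatch = existing
    .filter((migration) => migration.name !== 'dev')
    .reduce((max, migration) => Math.max(max, migration.batch ?? 0), 0)

  return latestBatch + 1
}
```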
This improves performance when querying data in Postgres / SQLite with
`limit: 0`. Before, unless you additionally passed `pagination: false`,
we executed an additional count query to calculate the pagination. Now we
skip this, as it is unnecessary: we can retrieve the count directly
from `rows.length`.
This logic already existed in `db-mongodb` -
1b17df9e0b/packages/db-mongodb/src/find.ts (L114-L124)
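A sketch of the idea (not the adapter's exact code, and `runCountQuery` is a hypothetical helper): with `limit: 0` every row is returned, so the total can come straight from the result set.

```ts
async function resolveTotalDocs(
  rows: unknown[],
  limit: number,
  pagination: boolean,
  runCountQuery: () => Promise<number>, // hypothetical count helper
): Promise<number> {
  if (!pagination || limit === 0) {
    return rows.length // all rows were fetched, no COUNT query needed
  }
  return runCountQuery()
}
```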
Fixes https://github.com/payloadcms/payload/issues/12090. In MongoDB,
documents are sorted by distance automatically whenever the `near`
operator is used, which we state in the docs:
> When querying using the near operator, the returned documents will be
sorted by nearest first.
This fixes that inconsistency between Postgres and MongoDB.
⚠️ This change can potentially produce different results if
you used the `near` operator without `sort: 'pointFieldName'`.
### What?
Converts numbers passed to a text field to strings, to avoid the
database/Drizzle converting them incorrectly.
### Why?
If you have a hook that passes a value to another field, you can
experience this problem where Drizzle converts a number value for a text
field to a floating-point number, in SQLite for example.
### How?
Adds logic to `transform/write/traverseFields.ts` to cast text field
values to string.
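A simplified sketch of the casting (not the exact `traverseFields` implementation):

```ts
// Numbers headed for a text column are coerced to strings so the driver
// cannot store them as floating-point values (e.g. in SQLite).
function sanitizeTextFieldValue(value: unknown): unknown {
  if (typeof value === 'number') {
    return String(value)
  }
  return value
}
```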
Following the changes made in commit a6f7ef8:
> feat(db-*): export types from main export (#11914)
> In 3.0, we made the decision to export all types from the main package
> export (e.g. `payload/types` => `payload`). This improves type
> discoverability by IDEs and simplifies importing types.
>
> This PR does the same for our db adapters, which still have a separate
> `/types` subpath export. While those are kept for
> backwards-compatibility, we can remove them in 4.0.
a6f7ef837a
The script responsible for generating the `generated-schema.ts` file,
`drizzle/src/utilities/createSchemaGenerator.ts`, was not updated to
reflect this change in export paths.
CURRENT
```typescript
const finalDeclaration = `
declare module '${this.packageName}/types' {
  export interface GeneratedDatabaseSchema {
    schema: DatabaseSchema
  }
}
`
```
AFTER THIS PULL REQUEST
```typescript
const finalDeclaration = `
declare module '${this.packageName}' {
  export interface GeneratedDatabaseSchema {
    schema: DatabaseSchema
  }
}
`
```
This pull request fixes the generation of `generated-schema.ts`,
avoiding errors while building for production with:
```bash
npm run build
```

### What?
This PR adds support for `where` querying by the join field (not to be
confused with `where` querying of related docs via `joins.where`).
Previously, this didn't work:
```ts
const categories = await payload.find({
  collection: 'categories',
  where: { 'relatedPosts.title': { equals: 'my-title' } },
})
```
### Why?
This is crucial for bi-directional relationships and can be used for
access control.
### How?
Implements `where` handling for join fields the same way we do for
relationships. In MongoDB it's not as efficient as it could be; the old
PR that improves it, and can be updated later, is here:
https://github.com/payloadcms/payload/pull/8858
Fixes https://github.com/payloadcms/payload/discussions/9683
Fixes https://github.com/payloadcms/payload/issues/11975
Previously, this configuration was causing errors in Postgres due to
long identifier names, even though `dbName` is set:
```ts
{
  slug: 'aliases',
  fields: [
    {
      name: 'thisIsALongFieldNameThatWillCauseAPostgresErrorEvenThoughWeSetAShorterDBName',
      dbName: 'shortname',
      type: 'array',
      fields: [
        {
          name: 'nested_field_1',
          type: 'array',
          dbName: 'short_nested_1',
          fields: [],
        },
        {
          name: 'nested_field_2',
          type: 'text',
        },
      ],
    },
  ],
},
```
This is because we were always generating the Drizzle relation name (for
arrays) based on the field path, and internally Drizzle uses this name
for aliasing. Now, if `dbName` is present, we use `_{dbName}` for the
relation name instead.
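Conceptually, the relation name is now derived like this (a simplified helper, not the adapter's exact code):

```ts
// Prefer the short `dbName`, prefixed with an underscore, over the
// potentially long field-path-based name.
function getArrayRelationName(fieldPath: string, dbName?: string): string {
  return dbName ? `_${dbName}` : fieldPath
}
```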
Fixes https://github.com/payloadcms/payload/issues/11882
Previously, the down migration that dropped the `payload_migrations`
table was failing because `migrationTableExists` didn't check within the
current transaction, which is the only place you can get a `false` result.
Fixes https://github.com/payloadcms/payload/issues/11901
Previously, when a `ValidationError` `errors.path` referred to a
field with a `label` defined as a function, the error message was
generated with `[object Object]`. Now, we call that function instead.
Since the `i18n` argument is required for `StaticLabel`, this PR makes it
possible to pass a partial `req` to `ValidationError`, from
which we thread `req.i18n` to the label args.
This replaces usage of our `chainMethods` helper to dynamically chain
queries with [drizzle dynamic query
building](https://orm.drizzle.team/docs/dynamic-query-building).
This is more type-safe, more readable, and requires less code.
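For reference, drizzle's dynamic query building lets a builder be extended step by step; a minimal sketch, with the usage assuming a hypothetical `db` instance and `posts` table:

```ts
import type { PgSelect } from 'drizzle-orm/pg-core'

// With `$dynamic()`, builder methods such as `.where()`, `.limit()` and
// `.offset()` can be applied incrementally rather than in one fixed chain.
function withPagination<T extends PgSelect>(qb: T, page: number, pageSize = 10) {
  return qb.limit(pageSize).offset((page - 1) * pageSize)
}

// Hypothetical usage:
//   const query = db.select().from(posts).$dynamic()
//   const paginated = withPagination(query, 1)
```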
This adds support for running multiple job queue tasks in parallel
within the same workflow while preventing conflicts. Previously, this
would have caused the following issues:
- Job log entries get lost: the final job log is incomplete, despite
all tasks having been executed
- Write conflicts in Postgres, leading to unique constraint violation
errors
The solution involves handling job log data updates in a way that avoids
overwriting, and ensuring the final update reflects the latest job log
data. Each job log entry now initializes its own ID, so a given job log
entry’s ID remains the same across multiple, parallel task executions.
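Illustratively, a log entry can receive its ID once and keep it (a sketch with assumed shapes, not the actual implementation):

```ts
import { randomUUID } from 'crypto'

// Reuse an existing ID when present so parallel task executions update the
// same log row instead of creating duplicates.
type JobLogEntry = { id: string; taskSlug: string; state: 'failed' | 'succeeded' }

function buildLogEntry(entry: Omit<JobLogEntry, 'id'> & { id?: string }): JobLogEntry {
  return { ...entry, id: entry.id ?? randomUUID() }
}
```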
## Postgres
In Postgres, we need to enable transactions for the
`payload.db.updateJobs` operation; otherwise, two tasks updating the
same job in parallel can conflict. This happens because Postgres handles
array rows by deleting them all, then re-inserting (rather than
upserting). The rows are stored in a separate table, and the following
scenario can occur:
1. Op 1: deletes all job log rows
2. Op 2: deletes all job log rows
3. Op 1: inserts 200 job log rows
4. Op 2: inserts the same 200 job log rows again => `error: duplicate key
value violates unique constraint "payload_jobs_log_pkey"`
Because transactions were not used, the rows inserted by Op 1
immediately became visible to Op 2, causing the conflict. Enabling
transactions fixes this. In theory, it can still happen if Op 1 commits
before Op 2 starts inserting (due to the read committed isolation
level), but it should occur far less frequently.
Alongside this change, we should consider inserting the rows using an
upsert (update on conflict), which will get rid of this error
completely. That way, if the insertion of Op 1 is visible to Op 2, Op 2
will simply overwrite it, rather than erroring. Individual job entries
are immutable and job entries cannot be deleted, thus this shouldn't
corrupt any data.
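For reference, such an upsert could look like the following in Postgres (column names here are placeholders):

```sql
-- The key part is ON CONFLICT ... DO UPDATE instead of a plain INSERT.
INSERT INTO "payload_jobs_log" ("id", "_parent_id", "_order", "state")
VALUES ('log-entry-id', 'job-id', 1, 'succeeded')
ON CONFLICT ("id") DO UPDATE
SET "_parent_id" = EXCLUDED."_parent_id",
    "_order"     = EXCLUDED."_order",
    "state"      = EXCLUDED."state";
```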
## Mongo
In Mongo, the issue is addressed by ensuring that log row deletions
caused by different log states in concurrent operations are not
merged back into the client job log, and by making sure the final update
includes all job logs.
There is no duplicate key error in Mongo because the array log resides
in the same document and duplicates are simply upserted. We cannot use
transactions in Mongo, as it appears to lock the document in a way that
prevents reliable parallel updates, leading to:
`MongoServerError: WriteConflict error: this operation conflicted with
another operation. Please retry your operation or multi-document
transaction`
In 3.0, we made the decision to export all types from the main package
export (e.g. `payload/types` => `payload`). This improves type
discoverability by IDEs and simplifies importing types.
This PR does the same for our db adapters, which still have a separate
`/types` subpath export. While those are kept for
backwards-compatibility, we can remove them in 4.0.
Continuation of #11489. This adds a new, optional `updateJobs` db
adapter method that reduces the amount of database calls for the jobs
queue.
## MongoDB
### Previous: running a set of 50 queued jobs
- 1x db.find (= 1x `Model.paginate`)
- 50x db.updateOne (= 50x `Model.findOneAndUpdate`)
### Now: running a set of 50 queued jobs
- 1x db.updateJobs (= 1x `Model.find` and 1x `Model.updateMany`)
**=> 51 db round trips before, 2 db round trips after**
### Previous: upon task completion
- 1x db.find (= 1x `Model.paginate`)
- 1x db.updateOne (= 1x `Model.findOneAndUpdate`)
### Now: upon task completion
- 1x db.updateJobs (= 1x `Model.findOneAndUpdate`)
**=> 2 db round trips before, 1 db round trip after**
## Drizzle (e.g. Postgres)
### running a set of 50 queued jobs
- 1x db.query[tablename].findMany
- 50x db.select
- 50x upsertRow
This is unaffected by this PR and will be addressed in a future PR.