Commit Graph

618 Commits

Author SHA1 Message Date
Sasha
9d6cae0445 feat: allow findDistinct on fields nested to relationships and on virtual fields (#14026)
This adds support for using `findDistinct`
(https://github.com/payloadcms/payload/pull/13102) on fields:
* Nested under a relationship, for example `category.title`
* Virtual fields that are linked to relationships, for example
`categoryTitle`


```tsx
const Category: CollectionConfig = {
  slug: 'categories',
  fields: [
    {
      name: 'title',
      type: 'text',
    },
  ],
}

const Posts: CollectionConfig = {
  slug: 'posts',
  fields: [
    {
      name: 'category',
      type: 'relationship',
      relationTo: 'categories',
    },
    {
      name: 'categoryTitle',
      type: 'text',
      virtual: 'category.title',
    },
  ],
}

// Supported now
const relationResult = await payload.findDistinct({ collection: 'posts', field: 'category.title' })
// Supported now
const virtualResult = await payload.findDistinct({ collection: 'posts', field: 'categoryTitle' })
```
2025-10-02 05:36:30 +03:00
Sasha
1e654c0a95 fix(db-mongodb): localized blocks with fallback and versions (#13974)
Fixes https://github.com/payloadcms/payload/issues/13663
The issue appears to be reproducible only with 3 or more locales
2025-10-01 21:43:28 -04:00
Tobias Odendahl
7eacd396b1 feat(db-mongodb,drizzle): add atomic array operations for relationship fields (#13891)
### What?
This PR adds atomic array operations ($append and $remove) for
relationship fields with `hasMany: true` across all database adapters.
These operations allow developers to add or remove specific items from
relationship arrays without replacing the entire array.

New API:
```ts
// Append relationships (prevents duplicates)
await payload.db.updateOne({
  collection: 'posts',
  id: 'post123',
  data: {
    categories: { $append: ['featured', 'trending'] }
  }
})

// Remove specific relationships
await payload.db.updateOne({
  collection: 'posts', 
  id: 'post123',
  data: {
    tags: { $remove: ['draft', 'private'] }
  }
})

// Works with polymorphic relationships
await payload.db.updateOne({
  collection: 'posts',
  id: 'post123', 
  data: {
    relatedItems: {
      $append: [
        { relationTo: 'categories', value: 'category-id' },
        { relationTo: 'tags', value: 'tag-id' }
      ]
    }
  }
})
```

### Why?
Currently, updating relationship arrays requires replacing the entire
array, which means fetching the existing data before every update. This
adds implementation effort and potential for errors when using the API,
in particular for bulk updates.

### How?

#### Cross-Adapter Features:
- Polymorphic relationships: Full support for relationTo:
['collection1', 'collection2']
- Localized relationships: Proper locale handling when fields are
localized
- Duplicate prevention: Ensures `$append` doesn't create duplicates
- Order preservation: Appends to end of array maintaining order
- Bulk operations: Works with `updateMany` for bulk updates

#### MongoDB Implementation:
- Converts `$append` to native `$addToSet` (prevents duplicates in
contrast to `$push`)
- Converts `$remove` to native `$pull` (targeted removal)
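For reference, a minimal sketch (not the adapter's actual code) of how the `$append` / `$remove` operations above could map to native MongoDB update documents for a `categories` relationship:

```ts
// Illustrative mapping only, assuming a hasMany `categories` relationship field.
const appendUpdate = {
  // $addToSet only inserts values that are not already present, preventing duplicates
  $addToSet: { categories: { $each: ['featured', 'trending'] } },
}

const removeUpdate = {
  // $pull removes every element matching the condition
  $pull: { categories: { $in: ['draft', 'private'] } },
}

// e.g. db.collection('posts').updateOne({ _id }, appendUpdate)
```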

#### Drizzle Implementation (Postgres/SQLite):
- Uses optimized batch `INSERT` with duplicate checking for `$append`
- Uses targeted `DELETE` queries for `$remove`
- Implements timestamp-based ordering for performance
- Handles locale columns conditionally based on schema

### Limitations
The current implementation exists only at the database-adapter level and
not (yet) in the Local API. Local API support will be implemented
separately.
2025-09-30 13:58:09 -04:00
Elliot DeNolf
5b64e12c65 chore(release): v3.58.0 [skip ci] 2025-09-30 09:22:02 -04:00
Elliot DeNolf
71c684e72e chore(release): v3.57.0 [skip ci] 2025-09-25 12:49:38 -04:00
Dan Ribbens
456e3d6459 revert: "feat: adds new experimental.localizeStatus option (#13207)" (#13928)
This reverts commit 0f6d748365.

# Conflicts:
#	docs/configuration/overview.mdx
#	docs/experimental/overview.mdx
#	packages/ui/src/elements/Status/index.tsx
2025-09-25 15:14:19 +00:00
Jessica Rynkar
062c1d7e89 fix: pagination returning duplicate results when timestamps: false (#13920)
### What
Fixes a bug where collection docs paginate incorrectly when `timestamps:
false` is set — the same docs were appearing across multiple pages.

### Why
The `find` query sanitizes the `sort` parameter.  

- With `timestamps: true`, it defaults to `createdAt`
- With `timestamps: false`, it falls back to `_id`

That logic is correct, but in `find.ts` we always passed `timestamps:
true`, ignoring the collection config. With the right sort applied,
pagination works as expected.

### How
`find.ts` now passes `collectionConfig.timestamps` to
`buildSortParam()`, ensuring the correct sort field is chosen.
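As a rough sketch of the rule described above (illustrative only; the real logic lives in `buildSortParam()`):

```ts
// Minimal sketch of the fallback rule, not Payload's actual implementation.
type CollectionTimestamps = { timestamps?: boolean }

const defaultSortFor = (collection: CollectionTimestamps): string =>
  // With timestamps enabled, sort by creation date; without them,
  // fall back to the id so pagination stays deterministic.
  collection.timestamps !== false ? 'createdAt' : 'id'
```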

---

Fixes #13888
2025-09-24 14:44:41 +01:00
Sasha
e99e054d7c fix: findDistinct with polymorphic relationships (#13875)
Fixes `findDistinct` with polymorphic relationships and also fixes a bug
from https://github.com/payloadcms/payload/pull/13840 where
`findDistinct` didn't work properly for `hasMany` relationships in
MongoDB if `sort` is the same as `field`.

---------

Co-authored-by: Patrik Kozak <35232443+PatrikKozak@users.noreply.github.com>
2025-09-22 12:23:17 -07:00
Sasha
d0543a463f fix: support hasMany: true relationships in findDistinct (#13840)
Previously, the `findDistinct` operation didn't work correctly for
relationships with `hasMany: true`. This PR fixes it.
2025-09-17 18:21:24 +03:00
Elliot DeNolf
ae3b923139 chore(release): v3.56.0 [skip ci] 2025-09-17 09:31:12 -04:00
Sasha
24ace70b58 fix(db-mongodb): support 2x and more relationship sorting (#13819)
Previously, sorting like
`sort: 'relationship.anotherRelationship.title'` (and deeper nesting)
didn't work with the MongoDB adapter; this PR fixes that.
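For illustration, a query of the shape that now works (collection and field names are hypothetical):

```ts
// `payload` is an initialized Payload instance.
// Hypothetical schema: posts -> author (relationship) -> agency (relationship) -> name (text)
const posts = await payload.find({
  collection: 'posts',
  sort: 'author.agency.name', // traverses two relationships before the sorted field
})
```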
2025-09-16 21:03:48 +03:00
Sasha
09dec43c15 fix(db-mongodb): localized arrays inside blocks with versions (#13804)
Fixes https://github.com/payloadcms/payload/issues/13745
2025-09-15 14:31:25 -04:00
Elliot DeNolf
58953de241 chore(release): v3.55.1 [skip ci] 2025-09-10 14:40:53 -04:00
Elliot DeNolf
f531576da6 chore(release): v3.55.0 [skip ci] 2025-09-09 11:59:18 -04:00
Jessica Rynkar
0f6d748365 feat: adds new experimental.localizeStatus option (#13207)
### What?
Adds a new `experimental.localizeStatus` config option, set to `false`
by default. When `true`, the admin panel will display the document
status based on the *current locale* instead of the _latest_ overall
status. Also updates the edit view to only show a `changed` status when
`autosave` is enabled.

### Why?
Showing the status for the current locale is more accurate and useful in
multi-locale setups. This will become the default behavior; until then,
it can be opted into by setting `experimental.localizeStatus: true` in
the Payload config. The option itself will be deprecated in v4.

### How?
When `localizeStatus` is `true`, we store the localized status in a new
`localeStatus` field group within version data. The admin panel then
reads from this field to display the correct status for the current
locale.
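A minimal sketch of opting in, based on the option described above:

```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...the rest of your config
  experimental: {
    // Display document status per locale in the admin panel
    localizeStatus: true,
  },
})
```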

---------

Co-authored-by: Jarrod Flesch <jarrodmflesch@gmail.com>
2025-09-03 10:27:08 +01:00
Elliot DeNolf
c1b4960795 chore(release): v3.54.0 [skip ci] 2025-08-28 09:47:54 -04:00
Alessio Gravili
e0ffada80b feat: support parallel job queue tasks, speed up task running (#13614)
Currently, attempting to run tasks in parallel will result in DB errors.

## Solution

The problem was caused by inefficient DB update calls. After each
task completes, we need to update the log array in the payload-jobs
collection. On Postgres, that's a different table.

Currently, the update works the following way:
1. Nuke the table
2. Re-insert every single row, including the new one

This will throw db errors if multiple processes start doing that.
Additionally, due to conflicts, new log rows may be lost.

This PR makes use of the [new db $push operation
](https://github.com/payloadcms/payload/pull/13453) we recently added to
atomically push a new log row to the database in a single round-trip.
This not only reduces the amount of db round trips (=> faster job queue
system) but allows multiple tasks to perform this db operation in
parallel, without conflicts.

## Problem

**Example:**

```ts
export const fastParallelTaskWorkflow: WorkflowConfig<'fastParallelTask'> = {
  slug: 'fastParallelTask',
  handler: async ({ inlineTask }) => {
    const taskFunctions = []
    for (let i = 0; i < 20; i++) {
      const idx = i + 1
      taskFunctions.push(async () => {
        return await inlineTask(`parallel task ${idx}`, {
          input: {
            test: idx,
          },
          task: () => {
            return {
              output: {
                taskID: idx.toString(),
              },
            }
          },
        })
      })
    }

    await Promise.all(taskFunctions.map((f) => f()))
  },
}
```

On SQLite, this would throw the following error:

```bash
Caught error Error: UNIQUE constraint failed: payload_jobs_log.id
    at Object.next (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/libsql@0.4.7/node_modules/libsql/index.js:335:20)
    at Statement.all (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/libsql@0.4.7/node_modules/libsql/index.js:360:16)
    at executeStmt (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5/node_modules/@libsql/client/lib-cjs/sqlite3.js:285:34)
    at Sqlite3Client.execute (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5/node_modules/@libsql/client/lib-cjs/sqlite3.js:101:16)
    at /Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/libsql/session.ts:288:58
    at LibSQLPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/sqlite-core/session.ts:79:18)
    at LibSQLPreparedQuery.values (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/libsql/session.ts:286:21)
    at LibSQLPreparedQuery.all (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/libsql/session.ts:214:27)
    at QueryPromise.all (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/sqlite-core/query-builders/insert.ts:402:26)
    at QueryPromise.execute (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/sqlite-core/query-builders/insert.ts:414:40)
    at QueryPromise.then (/Users/alessio/Documents/GitHub/payload/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/query-promise.ts:31:15) {
  rawCode: 1555,
  code: 'SQLITE_CONSTRAINT_PRIMARYKEY',
  libsqlError: true
}
```

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1211001438499053
2025-08-27 20:32:42 +00:00
Alessio Gravili
5ded64eaaf feat(db-*): support atomic array $push db updates (#13453)
This PR adds **atomic** `$push` **support for array fields**. It makes
it possible to safely append new items to arrays, which is especially
useful when running tasks in parallel (like job queues) where multiple
processes might update the same record at the same time. By handling
pushes atomically, we avoid race conditions and keep data consistent -
especially on postgres, where the current implementation would nuke the
entire array table before re-inserting every single array item.

The feature works for both localized and unlocalized arrays, and
supports pushing either single or multiple items at once.

This PR is a requirement for reliably running parallel tasks in the job
queue - see https://github.com/payloadcms/payload/pull/13452.

Alongside documenting `$push`, this PR also adds documentation for
`$inc`.

## Changes to updatedAt behavior

https://github.com/payloadcms/payload/pull/13335 allows us to override
the updatedAt property instead of the db always setting it to the
current date.

However, we are not able to skip updating the `updatedAt` property
entirely. This means usage of `$push` results in 2 Postgres DB calls:
1. set `updatedAt` in the main row
2. append the array row in the arrays table

This PR changes the behavior to only set `updatedAt` automatically when it
is `undefined`. If you explicitly set it to `null`, the db adapter now
skips setting `updatedAt` automatically.

=> This allows us to use $push in just one single db call
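For example, a hedged sketch combining the two (the `jobID` and `newLogEntry` values are placeholders):

```ts
await payload.db.updateOne({
  collection: 'payload-jobs',
  id: jobID,
  data: {
    log: {
      // Atomically append the new log entry
      $push: newLogEntry,
    },
    // Explicitly setting updatedAt to null tells the adapter to skip its
    // automatic "set updatedAt to now" write, keeping this to one DB call
    updatedAt: null,
  },
})
```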

## Usage Examples

### Pushing a single item to an array

```ts
const updatedPost = await payload.db.updateOne({
  data: {
    array: {
      $push: {
        text: 'some text 2',
        id: new mongoose.Types.ObjectId().toHexString(),
      },
    },
  },
  collection: 'posts',
  id: post.id,
})
```

### Pushing a single item to a localized array

```ts
const updatedPost = await payload.db.updateOne({
  data: {
    arrayLocalized: {
      $push: {
        en: {
          text: 'some text 2',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
        es: {
          text: 'some text 2 es',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
      },
    },
  },
  collection: 'posts',
  id: post.id,
})
```

### Pushing multiple items to an array

```ts
const updatedPost = await payload.db.updateOne({
  data: {
    array: {
      $push: [
        {
          text: 'some text 2',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
        {
          text: 'some text 3',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
      ],
    },
  },
  collection: 'posts',
  id: post.id,
})
```

### Pushing multiple items to a localized array

```ts
const updatedPost = await payload.db.updateOne({
  data: {
    arrayLocalized: {
      $push: {
        en: {
          text: 'some text 2',
          id: new mongoose.Types.ObjectId().toHexString(),
        },
        es: [
          {
            text: 'some text 2 es',
            id: new mongoose.Types.ObjectId().toHexString(),
          },
          {
            text: 'some text 3 es',
            id: new mongoose.Types.ObjectId().toHexString(),
          },
        ],
      },
    },
  },
  collection: 'posts',
  id: post.id,
})
```

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1211110462564647
2025-08-27 14:11:08 -04:00
Alessio Gravili
13c24afa63 feat: allow multiple, different payload instances using getPayload in same process (#13603)
Fixes https://github.com/payloadcms/payload/issues/13433. Testing
release: `3.54.0-internal.90cf7d5`

Previously, when calling `getPayload`, you would always use the same,
cached payload instance within a single process, regardless of the
arguments passed to the `getPayload` function. This resulted in the
following issues - both are fixed by this PR:

- If your frontend calls `getPayload` without `cron: true` and you host
the Payload Admin Panel in the same process, crons will not be enabled,
even if you visit the admin panel (which calls `getPayload` with
`cron: true`). This breaks jobs autorun depending on which page is
visited first: the admin panel or the frontend.
- Within the same process, you are unable to use `getPayload` twice for
different Payload instances with different Payload Configs. On Postgres,
you can work around this by manually calling `new BasePayload()`, which
skips the cache. This did not work on Mongoose, though, as Mongoose was
caching the models on a global singleton (this PR addresses that).

In order to bust the cache for different Payload Configs, this PR
introduces a new, optional `key` property to `getPayload`.
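A sketch of the new `key` usage (the config module names are hypothetical):

```ts
import { getPayload } from 'payload'

import configA from './payload.config.a'
import configB from './payload.config.b'

// Distinct keys bust the per-process cache, so each call returns its own
// Payload instance backed by its own config.
const payloadA = await getPayload({ config: configA, key: 'instance-a' })
const payloadB = await getPayload({ config: configB, key: 'instance-b' })
```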


## Mongoose - disable using global singleton

This PR refactors the Payload Mongoose adapter to stop relying on the
global mongoose singleton. Instead, each adapter instance now creates
and manages its own scoped Connection object.

### Motivation

Previously, calling `getPayload()` more than once in the same process
would throw `Cannot overwrite model` errors because models were compiled
into the global singleton. This prevented running multiple Payload
instances side-by-side, even when pointing at different databases.

### Changes
- Replace usage of `mongoose.connect()` / `mongoose.model()` with
instance-scoped `createConnection()` and `connection.model()`.
- Ensure models, globals, and versions are compiled per connection, not
globally.
- Added proper `close()` handling on `this.connection` instead of
`mongoose.disconnect()`.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1211114366468745
2025-08-27 10:24:37 -07:00
Alessio Gravili
5c16443431 fix: ensure updates to createdAt and updatedAt are respected (#13335)
Previously, when manually setting `createdAt` or `updatedAt` in a
`payload.db.*` or `payload.*` operation, the value may have been
ignored. In some cases it was _impossible_ to change the `updatedAt`
value, even when using direct db adapter calls. On top of that, this
behavior sometimes differed between db adapters. For example, MongoDB
accepted `updatedAt` when calling `payload.db.updateVersion`, while
Postgres ignored it.

This PR changes this behavior to consistently respect `createdAt` and
`updatedAt` values for `payload.db.*` operations.

For `payload.*` operations, this also works with the following
exception:
- update operations do not respect `updatedAt`, as updates are commonly
performed by spreading the old data, e.g. `payload.update({ data:
{...oldData} })` - in these cases, we usually still want `updatedAt`
to be updated. If you need to get around this, you can use the
`payload.db.updateOne` operation instead.
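For illustration, a sketch of the now-respected behavior at the db-adapter level (the document ID and timestamp values are arbitrary):

```ts
// payload.db.* operations now persist explicitly provided timestamps as given
await payload.db.updateOne({
  collection: 'posts',
  id: post.id,
  data: {
    title: 'Imported post',
    createdAt: '2020-01-01T00:00:00.000Z',
    updatedAt: '2020-06-01T00:00:00.000Z',
  },
})
```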

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210919646303994
2025-08-22 08:41:07 -04:00
Elliot DeNolf
4151a902f2 chore(release): v3.53.0 [skip ci] 2025-08-21 14:27:59 -04:00
Gregor Billing
c7b9f0f563 fix(db-mongodb): disable join aggregations in DocumentDB compatibility mode (#13270)
### What?

The PR #12763 added significant improvements for third-party databases
that are compatible with the MongoDB API. While the original PR was
focused on Firestore, other databases like DocumentDB also benefit from
these compatibility features.

In particular, the aggregate JOIN strategy does not work on AWS
DocumentDB and thus needs to be disabled. The current PR aims to provide
this as a sensible default in the `compatibilityOptions` that are
provided by Payload out-of-the-box.

As a bonus, it also fixes a small typo from `compat(a)bility` to
`compat(i)bility`.

### Why?

Because our Payload instance, which is backed by AWS DocumentDB, crashes
upon trying to access any `join` field.

### How?

By adding the existing `useJoinAggregations` option with the value
`false` to the compatibility layer. Individual developers can still
choose to override it in their own local config as needed.
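A usage sketch, assuming the exported compatibility presets include a DocumentDB entry (the `documentdb` key name is illustrative):

```ts
import { compatibilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'

export const db = mongooseAdapter({
  url: process.env.DATABASE_URI || '',
  // Spread the DocumentDB preset, which now includes useJoinAggregations: false;
  // any option can still be overridden below if needed
  ...compatibilityOptions.documentdb,
})
```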
2025-08-20 16:31:19 +00:00
Elliot DeNolf
217606ac20 chore(release): v3.52.0 [skip ci] 2025-08-15 10:42:32 -04:00
Elliot DeNolf
0688050eb6 chore(release): v3.51.0 [skip ci] 2025-08-13 09:20:13 -04:00
Elliot DeNolf
d622d3c5e7 chore(release): v3.50.0 [skip ci] 2025-08-05 09:33:56 -04:00
Elliot DeNolf
183f313387 chore(release): v3.49.1 [skip ci] 2025-07-29 16:38:50 -04:00
Elliot DeNolf
4ac428d250 chore(release): v3.49.0 [skip ci] 2025-07-25 09:27:41 -04:00
Sasha
c1cfceb7dc fix(db-mongodb): handle duplicate unique index error for DocumentDB (#13239)
Currently, with DocumentDB, instead of a friendly error like "Value must
be unique", we see a generic "Something went wrong" message.
This PR fixes that by adding a fallback that parses the message instead
of using `error.keyValue`, which doesn't exist in responses from
DocumentDB.
2025-07-22 16:53:25 +00:00
Alessio Gravili
c08b2aea89 feat: scheduling jobs (#12863)
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.

Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs

## API Example

```ts
export default buildConfig({
  // ...
  jobs: {
    // ...
    scheduler: 'manual', // Or `cron` if you're not using serverless. If `manual` is used, the user needs to set up running /api/payload-jobs/handleSchedules or payload.jobs.handleSchedules at regular intervals
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule` would be more error-prone if this was an array
              beforeSchedule: async (args) => {
                // Handles verifying that there are no jobs already scheduled or processing.
                // You can override this behavior by not calling defaultBeforeSchedule, e.g. if you wanted
                // to allow a maximum of 3 scheduled jobs in the queue instead of 1, or add any additional conditions
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Handles updating the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                  (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      }
    ]
  }
})
```

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210495300843759
2025-07-18 06:48:27 -04:00
Elliot DeNolf
a3361356b2 chore(release): v3.48.0 [skip ci] 2025-07-17 14:45:59 -04:00
Sasha
a20b43624b feat: add findDistinct operation (#13102)
Adds a new operation `findDistinct` that returns the distinct values of a
field for a given collection.
Example:
Assume you have a `posts` collection with multiple documents, and some of
them share the same title:
```js
// Example dataset (some titles appear multiple times)
[
  { title: 'title-1' },
  { title: 'title-2' },
  { title: 'title-1' },
  { title: 'title-3' },
  { title: 'title-2' },
  { title: 'title-4' },
  { title: 'title-5' },
  { title: 'title-6' },
  { title: 'title-7' },
  { title: 'title-8' },
  { title: 'title-9' },
]
```
You can now retrieve all unique title values using findDistinct:
```js
const result = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
})

console.log(result.values)
// Output:
// [
//   'title-1',
//   'title-2',
//   'title-3',
//   'title-4',
//   'title-5',
//   'title-6',
//   'title-7',
//   'title-8',
//   'title-9'
// ]
```
You can also limit the number of distinct results:
```js
const limitedResult = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  sortOrder: 'desc',
  limit: 3,
})

console.log(limitedResult.values)
// Output:
// [
//   'title-9',
//   'title-8',
//   'title-7'
// ]
```

You can also pass a `where` query to filter the documents.
2025-07-16 17:18:14 -04:00
Elliott W
41cff6d436 fix(db-mongodb): improve compatibility with Firestore database (#12763)
### What?

Adds four more arguments to the `mongooseAdapter`:

```typescript
  useJoinAggregations?: boolean  /* The big one */
  useAlternativeDropDatabase?: boolean
  useBigIntForNumberIDs?: boolean
  usePipelineInSortLookup?: boolean
```

Also exports a new `compatabilityOptions` object from
`@payloadcms/db-mongodb` where each key is a mongo-compatible database
and the value is the recommended `mongooseAdapter` settings for
compatibility.

### Why?

When using firestore and visiting
`/admin/collections/media/payload-folders`, we get:

```
MongoServerError: invalid field(s) in lookup: [let, pipeline], only lookup(from, localField, foreignField, as) is supported
```

Firestore doesn't support the full MongoDB aggregation API that Payload
uses when building aggregations to populate join fields.

There are several other compatibility issues with Firestore:
- The invalid `pipeline` property is used in the `$lookup` aggregation
in `buildSortParams`
- Firestore only supports number IDs of type `Long`, but Mongoose
converts custom ID fields of type number to `Double`
- Firestore does not support the `dropDatabase` command
- Firestore does not support the `createIndex` command (not addressed in
this PR)

### How?

 ```typescript
useJoinAggregations?: boolean  /* The big one */
```
When this is `false` we skip the `buildJoinAggregation()` pipeline and resolve the join fields through multiple queries. This can potentially be used with AWS DocumentDB and Azure Cosmos DB to support join fields, but I have not tested with either of these databases.

 ```typescript
useAlternativeDropDatabase?: boolean
```
When `true`, monkey-patch (replace) the `dropDatabase` function so that
it calls `collection.deleteMany({})` on every collection instead of
sending a single `dropDatabase` command to the database

 ```typescript
useBigIntForNumberIDs?: boolean
```
When `true`, use `mongoose.Schema.Types.BigInt` for custom ID fields of type `number` which converts to a firestore `Long` behind the scenes

```typescript
  usePipelineInSortLookup?: boolean
```
When `false`, modify the sortAggregation pipeline in `buildSortParams()` so that we don't use the `pipeline` property in the `$lookup` aggregation. Results in slightly worse performance when sorting by relationship properties.

### Limitations

This PR does not add support for transactions or creating indexes in firestore.

### Fixes

Fixed a bug (and added a test) where you weren't able to sort by multiple properties on a relationship field.

### Future work

1. Firestore supports simple `$lookup` aggregations but other databases might not. Could add a `useSortAggregations` property which can be used to disable aggregations in sorting.

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
2025-07-16 15:17:43 -04:00
Sasha
841bf891d0 feat: atomic number field updates (#13118)
Based on https://github.com/payloadcms/payload/pull/13060, which should
be merged first.
This PR adds the ability to update number fields atomically, which can be
important with parallel writes. For now we support this only via
`payload.db.updateOne`.

For example:
```js
// increment by 10
const res = await payload.db.updateOne({
  data: {
    number: {
      $inc: 10,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})

// decrement by 3
const res2 = await payload.db.updateOne({
  data: {
    number: {
      $inc: -3,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})
```
2025-07-15 21:53:45 -07:00
Elliot DeNolf
e5f64f7952 chore(release): v3.47.0 [skip ci] 2025-07-11 15:43:44 -04:00
Elliot DeNolf
14612b4db8 chore(release): v3.46.0 [skip ci] 2025-07-07 16:10:10 -04:00
Jarrod Flesch
6d5cc843a2 fix(db-mongodb): updateOne mutates the data object and does not transform it for read (#13065)
Fixes https://github.com/payloadcms/payload/issues/13045

When `returning` is `false`, `updateOne` mutates the data object for
write operations on the DB. This causes an issue when that data object is
used later on, since all of the IDs are mutated to ObjectIDs and never
transformed back into read IDs.

This fix ensures that the transform happens even when the result is not
returned.
2025-07-07 14:50:01 -04:00
Elliot DeNolf
1ccd7ef074 chore(release): v3.45.0 [skip ci] 2025-07-03 09:23:23 -04:00
Sasha
81532cb9c9 fix(db-mongodb): nested sorting by ID (#13016)
Fixes sorting when the `sort` path contains a relationship and ends with
`id`, for example `sort: 'post.category.id'`.
2025-07-03 08:51:45 -04:00
Paul
c902f14cb3 fix(db-mongodb): add ability to disable fallback sort and no longer adds a fallback for unique fields (#12961)
You can now disable the fallback sort in the MongoDB adapter by passing
`disableFallbackSort: true` in the options.

We also no longer add a fallback sort to sorts on unique fields by
default.
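A minimal sketch of the new option in the adapter config:

```ts
import { mongooseAdapter } from '@payloadcms/db-mongodb'

export const db = mongooseAdapter({
  url: process.env.DATABASE_URI || '',
  // Opt out of the automatic fallback sort entirely
  disableFallbackSort: true,
})
```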

This came out of a discussion in this issue
https://github.com/payloadcms/payload/issues/12690
and the linked PR https://github.com/payloadcms/payload/pull/12888

Closes https://github.com/payloadcms/payload/issues/12690
2025-06-27 13:45:30 +00:00
Elliot DeNolf
c66e5ca823 chore(release): v3.44.0 [skip ci] 2025-06-27 09:23:04 -04:00
Sasha
54afaf9529 fix(db-mongodb): strip deleted from the config blocks from the result (#12869)
If you delete a block from the Payload config while using the MongoDB
adapter, but still have data with that block in the DB, you'd receive an
error like this in the admin panel:
```
Block with type "cta" was found in block data, but no block with that type is defined in the config for field with schema path pages.blocks
```

Now, we remove those "unknown" blocks at the DB adapter level.

Co-authored-by: Dan Ribbens <dan.ribbens@gmail.com>
2025-06-27 05:30:48 -04:00
Alessio Gravili
84cb2b5819 refactor: simplify job type (#12816)
Previously, there were multiple ways to type a running job:
- `GeneratedTypes['payload-jobs']` - only works in an installed project
- is `any` in monorepo
- `BaseJob` - works everywhere, but does not incorporate generated types,
which may include types for custom fields added to the jobs collection
- `RunningJob<>` - a more accurate version of `BaseJob`, but with the same
problem

This PR deprecates all those types in favor of a new `Job` type.
Benefits:
- Works in both monorepo and installed projects. If no generated types
exist, it will automatically fall back to `BaseJob`
- Comes with an optional generic that can be used to narrow down
`job.input` based on the task / workflow slug. No need to use a separate
type helper like `RunningJob<>`

With this new type, I was able to replace every usage of
`GeneratedTypes['payload-jobs']`, `BaseJob` and `RunningJob<>` with the
simple `Job` type.
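A hedged sketch of the new type in use (the import path and workflow slug are assumptions):

```ts
import type { Job } from 'payload'

// The optional generic narrows `job.input` based on the task / workflow slug
const logJob = (job: Job<'updateSearchIndex'>): void => {
  console.log(job.id, job.input)
}
```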

Additionally, this PR simplifies some of the logic used to run jobs
2025-06-16 16:15:56 -04:00
Elliot DeNolf
810869f3fa chore(release): v3.43.0 [skip ci] 2025-06-16 16:09:14 -04:00
Sasha
245a2dee7e fix(db-mongodb): 4x and more level deep relationships querying (#12800)
Fixes https://github.com/payloadcms/payload/issues/12721
2025-06-13 14:11:13 +00:00
Alessio Gravili
67fb29b2a4 fix: reduce global DOM/Node type conflicts in server-only packages (#12737)
Currently, we globally enable both DOM and Node.js types. While this
mostly works, it can cause conflicts - particularly with `fetch`. For
example, TypeScript may incorrectly allow browser-only properties (like
`cache`) and reject valid Node.js ones like `dispatcher`.

This PR disables DOM types for server-only packages like payload,
ensuring Node-specific typings are applied. This caught a few instances
of incorrect fetch usage that were previously masked by overlapping DOM
types.

This is not a perfect solution - packages that contain both server and
client code (like richtext-lexical or next) will still suffer from this
issue. However, it's an improvement in cases where we can cleanly
separate server and client types, like for the `payload` package which
is server-only.

## Use-case

This change enables https://github.com/payloadcms/payload/pull/12622 to
explore using node-native fetch + `dispatcher`, instead of `node-fetch`
+ `agent`.

Currently, it will incorrectly report that `dispatcher` is not a valid
property for node-native fetch
2025-06-11 20:59:19 +00:00
Sasha
860e0b4ff9 fix(db-mongodb): bump mongoose to 8.15.1 (#12755)
Updates the `mongoose` package to the latest version `8.15.1`.
Fixes https://github.com/payloadcms/payload/issues/12708
2025-06-11 06:30:36 +03:00
Elliot DeNolf
4ac1894cbe chore(release): v3.42.0 [skip ci] 2025-06-09 14:43:03 -04:00
Sasha
9fbc3f6453 fix: proper globals max versions clean up (#12611)
Fixes https://github.com/payloadcms/payload/issues/11879
2025-06-09 14:38:07 -04:00
Elliot DeNolf
a10c3a5ba3 chore(release): v3.41.0 [skip ci] 2025-06-05 10:05:06 -04:00
Alessio Gravili
545d870650 chore: fix various e2e test setup issues (#12670)
I noticed a few issues when running e2e tests that will be resolved by
this PR:

- Most important: for some test suites (fields, fields-relationship,
versions, queues, lexical), the database was cleared and seeded
**twice** in between each test run. This is because the onInit function
was running the clear and seed script, when it should only have been
running the seed script. Clearing the database / the snapshot workflow
is being done by the reInit endpoint, which then calls onInit to seed
the actual data.
- The slowest part of `clearAndSeedEverything` is recreating indexes on
mongodb. This PR slightly improves performance here by:
- Skipping this process for the built-in `['payload-migrations',
'payload-preferences', 'payload-locked-documents']` collections
- Previously we were calling both `createIndexes` and `ensureIndexes`.
This was unnecessary - `ensureIndexes` is a deprecated alias of
`createIndexes`. This PR changes it to only call `createIndexes`
- Makes the reinit endpoint accept GET requests instead of POST requests
- this makes it easier to debug right in the browser
- Some typescript fixes
- Adds a `dev:memorydb` script to the package.json. For some reason,
`dev` is super unreliable on mongodb locally when running e2e tests - it
frequently fails during index creation. Using the memorydb fixes this
issue, with the bonus of more closely resembling the CI environment
- Previously, you were unable to run test suites using turbopack +
postgres. This fixes it, by explicitly installing `pg` as devDependency
in our monorepo
- Fixes jest open handles warning
2025-06-04 17:34:37 -03:00