Compare commits


79 Commits

Author SHA1 Message Date
Paul Popus
ca626288ab empty commit for new hash 2025-07-30 11:34:40 +01:00
Paul Popus
f87db1cf59 add r2 and d1 to publishList in script 2025-07-28 20:26:48 +01:00
Sasha
fb196069d5 delete :memory 2025-07-28 21:56:21 +03:00
Sasha
759bb9899e change version 2025-07-28 20:34:11 +03:00
Sasha
72fbcc3567 Merge branch 'main' of github.com:payloadcms/payload into feat/add-d1-adapter 2025-07-28 20:16:38 +03:00
Sasha
92010510b0 storage r2 2025-07-28 19:49:17 +03:00
Alessio Gravili
5c94d2dc71 feat: support next.js 15.4.4 (#13280)
- bumps next.js from 15.3.2 to 15.4.4 in monorepo and templates. It's
important to run our tests against the latest Next.js version to
guarantee full compatibility.
- bumps playwright because of peer dependency conflict with next 15.4.4
- bumps react types because why not

https://nextjs.org/blog/next-15-4

As part of this upgrade, the functionality added by
https://github.com/payloadcms/payload/pull/11658 broke. This PR fixes it
by creating a wrapper around `React.isValidElement` that works for
Next.js 15.4.
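
The wrapper itself isn't included in the message; a minimal sketch of the idea, assuming the breakage stems from element objects whose `$$typeof` tag changed between React versions, might look like:

```ts
import * as React from 'react'

// Hypothetical sketch: accept both regular React elements and element-like
// objects whose $$typeof tag differs across React/Next.js versions.
export const isReactElementLike = (value: unknown): boolean => {
  if (React.isValidElement(value)) {
    return true
  }
  return (
    typeof value === 'object' &&
    value !== null &&
    '$$typeof' in value &&
    typeof (value as { $$typeof: unknown }).$$typeof === 'symbol' &&
    String((value as { $$typeof: symbol }).$$typeof).startsWith('Symbol(react.')
  )
}
```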

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210803039809808
2025-07-28 16:23:43 +00:00
Jarrod Flesch
b1aac19668 chore(next): cleanup unused code (#13292)
Looks like a merge resolution kept unused code. The same condition is
added a couple lines below this removal.
2025-07-28 13:43:51 +00:00
Sean Zubrickas
d093bb1f00 fix: refactors toast error rendering (#13252)
Fixes #13191

- Render a single HTML element for single error messages (sketched below)
- Preserve ul structure for multiple errors
- Updates tests to check for both cases
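
A sketch of the rendering rule from the bullets above (component and prop names are assumptions):

```tsx
// Illustrative only: a single error renders one plain element,
// multiple errors keep the ul structure
const ToastErrors = ({ errors }: { errors: string[] }) => {
  if (errors.length === 1) {
    return <span>{errors[0]}</span>
  }
  return (
    <ul>
      {errors.map((message, i) => (
        <li key={i}>{message}</li>
      ))}
    </ul>
  )
}
```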
2025-07-28 05:59:25 -07:00
Alessio Gravili
2e9ba10fb5 docs: remove obsolete scheduler property (#13278)
That property does not exist; it was used in a previous, outdated
implementation of auto-scheduling.
2025-07-25 16:25:47 -07:00
Alessio Gravili
8518141a5e fix(drizzle): respect join.type config (#13258)
Respects `join.type` instead of hardcoding `leftJoin`.
2025-07-25 15:46:20 -07:00
Alessio Gravili
6d6c9ebc56 perf(drizzle): 2x faster db.deleteMany (#13255)
Previously, `db.deleteMany` on postgres resulted in 2 roundtrips to the
database (find + delete with ids). This PR passes the where query
directly to the `deleteWhere` function, resulting in only one roundtrip
to the database (delete with where).

If the where query queries other tables (=> joins required), this falls
back to find + delete with ids. However, this is also more optimized
than before, as we now pass `select: { id: true }` to the findMany
query.
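
For illustration, the two paths might be exercised like this (where clauses and field names are assumptions):

```ts
// Fast path: the where clause touches only the collection's own table,
// so it can be passed straight to deleteWhere -> one DELETE ... WHERE roundtrip
await payload.db.deleteMany({
  collection: 'posts',
  where: { title: { equals: 'Old post' } },
})

// Fallback: the where clause spans joined tables, so ids are fetched first
// via findMany with select: { id: true }, then deleted by id
await payload.db.deleteMany({
  collection: 'posts',
  where: { 'category.name': { equals: 'Archive' } }, // hypothetical relationship field
})
```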

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210871676349299
2025-07-25 15:46:09 -07:00
German Jablonski
7cd4a8a602 fix(richtext-lexical): unify indent between different converters and make paragraphs and lists match without CSS (#13274)
Previously, the Lexical editor was using px, and the JSX converter was
using rem. #12848 fixed the inconsistency by changing the editor to rem,
but it should have been the other way around, changing the JSX converter
to px.

You can see the latest explanation about why it should be 40px
[here](https://github.com/payloadcms/payload/issues/13130#issuecomment-3058348085).
In short, that's the default indentation all browsers use for lists.

This time I'm making sure to leave clear comments everywhere and a test
to avoid another regression.

Here is an image of what the e2e test looks like:

<img width="321" height="678" alt="image"
src="https://github.com/user-attachments/assets/8880c7cb-a954-4487-8377-aee17c06754c"
/>

The first part is the Lexical editor, the second is the JSX converter.

As you can see, the checkbox in JSX looks a little odd because it uses
an input checkbox (as opposed to a pseudo-element in the Lexical
editor). I thought about adding an inline style to move it slightly to
the left, but I found that browsers don't have a standard size for the
checkbox; it varies by browser and device.
That requires a little more thought; I'll address that in a future PR.

Fixes #13130
2025-07-25 22:58:49 +01:00
Jarrod Flesch
bc802846c5 fix: serve svg+xml as svg (#13277)
Based on https://github.com/payloadcms/payload/pull/13276

Fixes https://github.com/payloadcms/payload/issues/7624

If an uploaded image has a `.svg` extension and the mimeType is read as
`application/xml`, adjust the mimeType to `image/svg+xml`.
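
A minimal sketch of that adjustment (the function name is illustrative, not the PR's actual code):

```ts
// Correct the mimeType for SVGs that sniff as generic XML
export const resolveUploadMimeType = (filename: string, detected: string): string => {
  if (filename.toLowerCase().endsWith('.svg') && detected === 'application/xml') {
    return 'image/svg+xml'
  }
  return detected
}
```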

---------

Co-authored-by: Philipp Schneider <47689073+philipp-tailor@users.noreply.github.com>
2025-07-25 21:00:51 +00:00
Jarrod Flesch
e8f6cb5ed1 fix: svg+xml file detection (#13276)
Adds logic for svg+xml file type detection.

---------

Co-authored-by: Philipp Schneider <47689073+philipp-tailor@users.noreply.github.com>
2025-07-25 18:33:53 +00:00
Elliot DeNolf
23bd67515c templates: bump for v3.49.0 (#13273)
🤖 Automated bump of templates for v3.49.0

Triggered by user: @denolfe

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-25 13:39:09 -04:00
Jarrod Flesch
e29d1d98d4 fix(plugin-multi-tenant): prefer assigned tenants for selector population (#13213)
When populating the selector, it should be populated with assigned tenants
before falling back to fetching all tenants that a user has access to.

You may have "public" tenants, and while a user may have _access_ to such a
tenant, the selector should show the ones they are assigned to. Users with
full access are the ones who should be able to see the public tenants for
editing.
2025-07-25 10:10:26 -04:00
Elliot DeNolf
4ac428d250 chore(release): v3.49.0 [skip ci] 2025-07-25 09:27:41 -04:00
Sasha
75385de01f fix: filtering by polymorphic relationships inside other fields (#13265)
Previously, filtering by a polymorphic relationship inside an array,
group (unless the group's `name` is `version`), or tab caused `QueryError: The
following path cannot be queried:`.
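
A sketch of a query shape that previously failed (collection and field names are hypothetical):

```ts
// Polymorphic relationship nested inside an array field
await payload.find({
  collection: 'posts',
  where: {
    'myArray.owner.value': { equals: userId }, // userId defined elsewhere
  },
})
```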
2025-07-25 09:10:21 -04:00
Patrik
f63dc2a10c feat: adds trash support (soft deletes) (#12656)
### What?

This PR introduces complete trash (soft-delete) support. When a
collection is configured with `trash: true`, documents can now be
soft-deleted and restored via both the API and the admin panel.

```
import type { CollectionConfig } from 'payload'

const Posts: CollectionConfig = {
  slug: 'posts',
  trash: true, // <-- New collection config prop @default false
  fields: [
    {
      name: 'title',
      type: 'text',
    },
    // other fields...
  ],
}
```

### Why

Soft deletes allow developers and admins to safely remove documents
without losing data immediately. This enables workflows like reversible
deletions, trash views, and auditing—while preserving compatibility with
drafts, autosave, and version history.

### How?

#### Backend

- Adds new `trash: true` config option to collections.
- When enabled:
  - A `deletedAt` timestamp is conditionally injected into the schema.
- Soft deletion is performed by setting `deletedAt` instead of removing
the document from the database.
- Extends all relevant API operations (`find`, `findByID`, `update`,
`delete`, `versions`, etc.) to support a new `trash` param:
  - `trash: false` → excludes trashed documents (default)
  - `trash: true` → includes both trashed and non-trashed documents
- To query **only trashed** documents: use `trash: true` with a `where`
clause like `{ deletedAt: { exists: true } }` (see the sketch below)
- Enforces delete access control before allowing a soft delete via
update or updateByID.
- Disables version restoring on trashed documents (must be restored
first).
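
A quick sketch of the three query modes described above:

```ts
// Default: trashed documents are excluded
const active = await payload.find({ collection: 'posts' })

// Include both trashed and non-trashed documents
const all = await payload.find({ collection: 'posts', trash: true })

// Only trashed documents
const onlyTrashed = await payload.find({
  collection: 'posts',
  trash: true,
  where: { deletedAt: { exists: true } },
})
```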

#### Admin Panel

- Adds a dedicated **Trash view**: `/collections/:collectionSlug/trash`
- Default delete action now soft-deletes documents when `trash: true` is
set.
- **Delete confirmation modal** includes a checkbox to permanently
delete instead.
- Trashed documents:
  - Display a UI banner to clearly distinguish the trashed document edit view from the non-trashed edit view
  - Render in a read-only edit view
  - Still allow access to **Preview**, **API**, and **Versions** tabs
- Updated Status component:
- Displays “Previously published” or “Previously a draft” for trashed
documents.
  - Disables status-changing actions when documents are in trash.
- Adds new **Restore** bulk action to clear the `deletedAt` timestamp.
- New `Restore` and `Permanently Delete` buttons for
single-trashed-document restore and permanent deletion.
- **Restore confirmation modal** includes a checkbox to restore as
`published`, defaults to `draft`.
- Adds **Empty Trash** and **Delete permanently** bulk actions.
  
#### Notes

- This feature is completely opt-in. Collections without `trash: true`
behave exactly as before.



https://github.com/user-attachments/assets/00b83f8a-0442-441e-a89e-d5dc1f49dd37
2025-07-25 09:08:22 -04:00
Sasha
3f09e27bdd execute method 2025-07-25 15:06:16 +03:00
German Jablonski
4a712b3483 fix(ui): preserve localized blocks and arrays when using CopyToLocale (#13216)
## Problem:
In PR #11887, a bug fix for `copyToLocale` was introduced to address
issues with copying content between locales in Postgres. However, an
incorrect algorithm was used, which removed all "id" properties from
documents being copied. This led to bug #12536, where `copyToLocale`
would mistakenly delete the document in the source language, affecting
not only Postgres but any database.

## Cause and Solution:

When copying documents with localized arrays or blocks, Postgres throws
errors if there are two blocks with the same ID. This is why PR #11887
removed all IDs from the document to avoid conflicts. However, this
removal was too broad and caused issues in cases where it was
unnecessary.


The correct solution should remove the IDs only in nested fields whose
ancestors are localized. The reasoning is as follows:
- When an array/block is **not localized** (`localized: false`), if it
contains localized fields, these fields share the same ID across
different locales.
- When an array/block **is localized** (`localized: true`), its
descendant fields cannot share the same ID across different locales if
Postgres is being used. This wouldn't be an issue if the table
containing localized blocks had a composite primary key of `locale +
id`. However, since the primary key is just `id`, we need to assign a
new ID for these fields.

This PR properly removes IDs **only for nested fields** whose ancestors
are localized.

Fixes #12536

## Example:
### Before Fix:
```js
// Original document (en)
array: [{
  id: "123",
  text: { en: "English text" }
}]

// After copying to 'es' locale, a new ID was created instead of updating the existing item
array: [{
  id: "456",  // 🐛 New ID created!
  text: { es: "Spanish text" } // 🐛 'en' locale is missing
}]
```
### After fix:
```js
// After fix
array: [{
  id: "123",  //  Same ID maintained
  text: {
    en: "English text",
    es: "Spanish text"  //  Properly merged with existing item
  }
}]
```


## Additional fixes:

### TraverseFields

In the process of designing an appropriate solution, I detected a couple
of bugs in traverseFields that are also addressed in this PR.

### Fixed MongoDB Empty Array Handling

During testing, I discovered that MongoDB and PostgreSQL behave
differently when querying documents that don't exist in a specific
locale:
- PostgreSQL: Returns the document with data from the fallback locale
- MongoDB: Returns the document with empty arrays for localized fields

This difference caused `copyToLocale` to fail in MongoDB because the
merge algorithm only checked for `null` or `undefined` values, but not
empty arrays. When MongoDB returned `content: []` for a non-existent
locale, the algorithm would attempt to iterate over the empty array
instead of using the source locale's data.

### Move test e2e to int

The test introduced in #11887 didn't catch the bug because our e2e suite
doesn't run on Postgres. I migrated the test to an integration test that
does run on Postgres and MongoDB.
2025-07-24 20:37:13 +01:00
Jarrod Flesch
fa7d209cc9 fix(ui): incorrect blocks label sizing (#13264)
Blocks container labels should match the Array and Tab labels. Uses the
same styling approach as Array labels.

### Before
<img width="229" height="260" alt="CleanShot 2025-07-24 at 12 26 38"
src="https://github.com/user-attachments/assets/9c4eb7c5-3638-4b47-805b-1206f195f5eb"
/>

### After
<img width="245" height="259" alt="CleanShot 2025-07-24 at 12 27 00"
src="https://github.com/user-attachments/assets/c04933b4-226f-403b-9913-24ba00857aab"
/>
2025-07-24 19:34:29 +00:00
Jacob Fletcher
bccf6ab16f feat: group by (#13138)
Supports grouping documents by specific fields within the list view.

For example, imagine having a "posts" collection with a "categories"
field. To report on each specific category, you'd traditionally filter
for each category, one at a time. This can be quite inefficient,
especially with large datasets.

Now, you can interact with all categories simultaneously, grouped by
distinct values.

Here is a simple demonstration:


https://github.com/user-attachments/assets/0dcd19d2-e983-47e6-9ea2-cfdd2424d8b5

Enable on any collection by setting the `admin.groupBy` property:

```ts
import type { CollectionConfig } from 'payload'

const MyCollection: CollectionConfig = {
  // ...
  admin: {
    groupBy: true
  }
}
```

This is currently marked as beta to gather feedback while we reach full
stability, and to leave room for API changes and other modifications.
Use at your own risk.

Note: when using `groupBy`, bulk editing is done group-by-group. In the
future we may support cross-group bulk editing.

Dependent on #13102 (merged).

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210774523852467

---------

Co-authored-by: Paul Popus <paul@payloadcms.com>
2025-07-24 14:00:52 -04:00
Dan Ribbens
14322a71bb docs(plugin-import-export): document plugin-import-export (#13243)
Add documentation for @payloadcms/plugin-import-export.
2025-07-24 17:03:21 +00:00
Patrik
7e81d30808 fix(ui): ensure document unlocks when logging out from edit view of a locked document (#13142)
### What?

Refactors the `LeaveWithoutSaving` modal to be generic and delegates
document unlock logic back to the `DefaultEditView` component via a
callback.

### Why?

Previously, `unlockDocument` was triggered in a cleanup `useEffect` in
the edit view. When logging out from the edit view, the unlock request
would often fail due to the session ending — leaving the document in a
locked state.

### How?

- Introduced `onConfirm` and `onPrevent` props for `LeaveWithoutSaving` (sketched below).
- Moved all document lock/unlock logic into `DefaultEditView`’s
`handleLeaveConfirm`.
- Captures the next navigation target via `onPrevent` and evaluates
whether to unlock based on:
  - Locking being enabled.
  - Current user owning the lock.
- Navigation not targeting internal admin views (`/preview`, `/api`,
`/versions`).
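
A sketch of how these pieces might fit together (prop names come from the description above; the surrounding state and helpers are assumptions):

```tsx
<LeaveWithoutSaving
  onPrevent={({ nextRoute }) => {
    // capture where the user is trying to navigate
    setNextRoute(nextRoute)
  }}
  onConfirm={async () => {
    const isInternalView = ['/preview', '/api', '/versions'].some((segment) =>
      nextRoute?.includes(segment),
    )
    if (isLockingEnabled && isLockOwner && !isInternalView) {
      await unlockDocument(documentId)
    }
  }}
/>
```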

---------

Co-authored-by: Jarrod Flesch <jarrodmflesch@gmail.com>
2025-07-24 09:18:49 -07:00
Sasha
a83ed5ebb5 fix(db-postgres): search is broken when useAsTitle is not specified (#13232)
Fixes https://github.com/payloadcms/payload/issues/13171
2025-07-24 18:42:17 +03:00
Patrik
8f85da8931 fix(plugin-import-export): json preview and downloads preserve nesting and exclude disabled fields (#13210)
### What?

Improves both the JSON preview and export functionality in the
import-export plugin:
- Preserves proper nesting of object and array fields (e.g., groups,
tabs, arrays)
- Excludes any fields explicitly marked as `disabled` via
`custom.plugin-import-export`
- Ensures downloaded files use proper JSON formatting when `format` is
`json` (no CSV-style flattening)

### Why?

Previously:
- The JSON preview flattened all fields to a single level and included
disabled fields.
- Exported files with `format: json` were still CSV-style data encoded
as `.json`, rather than real JSON.

### How?

- Refactored `/preview-data` JSON handling to preserve original document
shape.
- Applied `removeDisabledFields` to clean nested fields using
dot-notation paths.
- Updated `createExport` to skip `flattenObject` for JSON formats, using
a nested JSON filter instead.
- Fixed streaming and buffered export paths to output valid JSON arrays
when `format` is `json`.
2025-07-24 11:36:46 -04:00
Jacob Fletcher
e48427e59a feat(ui): expose refresh method to list drawer context (#13173) 2025-07-24 10:12:45 -04:00
Sasha
7ae4f8c709 docs: add status to forbidden field names when using Postgres and drafts are enabled (#13233)
Fixes https://github.com/payloadcms/payload/issues/13144
2025-07-24 16:29:53 +03:00
Alessio Gravili
1ad7b55e05 refactor(drizzle): use getTableName utility (#13257)
~~Sometimes, drizzle is adding the same join to the joins array twice
(`addJoinTable`), despite the table being the same. This is due to a bug
in `getNameFromDrizzleTable` where it would sometimes return a UUID
instead of the table name.~~

~~This PR changes it to read from the drizzle:BaseName symbol instead,
which is correctly returning the table name in my testing. It falls back
to `getTableName`, which uses drizzle:Name.~~

That approach failed the tests for some reason. Instead, this PR just uses
the `getTableName` utility instead of searching for the symbol manually.
2025-07-24 12:04:16 +00:00
Alessio Gravili
aeee0704dd chore: add new int test verifying that select *improves* performance of new optimization (#13254)
https://github.com/payloadcms/payload/pull/13186 actually made the
select API _more powerful_, as it can reduce the number of db calls, even
for complex collections with blocks, down to 1.

This PR adds a test that verifies this.
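
A sketch of the kind of call the test exercises (collection and field names are assumptions):

```ts
// Selecting only top-level fields lets even a blocks-heavy collection
// resolve in a single db call
const result = await payload.find({
  collection: 'pages',
  select: { title: true }, // no joined tables required for this selection
})
```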

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210871676349303
2025-07-23 16:48:25 -07:00
Jarrod Flesch
29fb9ee5b4 fix(ui): monomorphic relationship fields should not show relationTo option labels (#13245) 2025-07-23 16:31:05 -04:00
Jacob Fletcher
0eac58ed72 fix(next): prevent base list filters from being injected into the url (#13253)
Prevents base list filters from being injected into the URL.

This is a problem with the multi-tenant plugin, for example, where
changing the tenant adds a `baseListFilter` to the query, but should
never be exposed to the end user.

Introduced in #13200.
2025-07-23 15:19:10 -04:00
Sasha
65b110e4e7 fix type 2025-07-23 19:21:04 +03:00
Sasha
3d57b06f83 Merge branch 'main' of github.com:payloadcms/payload into feat/add-d1-adapter 2025-07-23 19:05:44 +03:00
Sasha
380ce04d5c perf(db-postgres): avoid including prettier to the bundle (#13251)
This PR optimizes bundle size with drizzle adapters by avoiding
including `prettier` to the production bundle
2025-07-23 19:05:31 +03:00
Sasha
8595b575f5 fix type 2025-07-23 19:04:23 +03:00
Sasha
9ce07c75c3 fix type 2025-07-23 18:59:59 +03:00
Sasha
be4f11cd15 Merge branch 'main' of github.com:payloadcms/payload into feat/add-d1-adapter 2025-07-23 18:59:51 +03:00
Alessio Gravili
94f5e790f6 perf(drizzle): single-roundtrip db updates for simple collections (#13186)
Currently, an optimized DB update (simple data => no
delete-and-create-row) does the following:
1. sql UPDATE
2. sql SELECT

This PR reduces this further to one single DB call for simple
collections:
1. sql UPDATE with RETURNING()

This only works for simple collections that do not have any fields that
need to be fetched from other tables. If a collection has fields like
relationship or blocks, we'll need that separate SELECT call to join in
the other tables.

In 4.0, we can remove all "complex" fields from the jobs collection and
replace them with a JSON field to make use of this optimization
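
In drizzle terms, the single-roundtrip shape looks roughly like this (table and column names are assumptions):

```ts
import { eq } from 'drizzle-orm'

// UPDATE ... RETURNING replaces the follow-up SELECT
const [updatedRow] = await db
  .update(posts)
  .set({ title: 'Updated title' })
  .where(eq(posts.id, id))
  .returning()
```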

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210803039809814
2025-07-23 01:45:55 -07:00
Elliot DeNolf
3f8fb6734c ci: default audit-dependencies script to high severity (#13244)
Default the audit-dependencies workflow to use high severity by default.
2025-07-22 16:44:56 -04:00
Jarrod Flesch
412bf4ff73 fix(ui): select all should reset when params change, page, filter, etc (#12612)
Fixes #11938
Fixes https://github.com/payloadcms/payload/issues/13154

When select-all is checked and you filter or change the page, the
selected documents should reset.
2025-07-22 15:23:02 -04:00
Patrik
246a42b727 chore(plugin-import-export): use debug-level logging for createExport process (#13242)
### What?

Replaces all `payload.logger.info` calls with `payload.logger.debug` in
the `createExport` function.

### Why?

info logs are too verbose for routine exports. Using debug ensures detailed
logs are only emitted when debug-level logging is enabled.

### How?

- Updated all logger calls in `createExport` to use `debug` instead of
`info`.
2025-07-22 18:09:04 +00:00
Elliot DeNolf
e7a652f0a8 build: suppress pnpm update notification (#13241)
Suppress pnpm update notification
2025-07-22 13:27:44 -04:00
Elliot DeNolf
77f279e768 docs: remove payload cloud (#13240)
Remove Payload Cloud from docs
2025-07-22 13:12:20 -04:00
Sasha
c1cfceb7dc fix(db-mongodb): handle duplicate unique index error for DocumentDB (#13239)
Currently, with DocumentDB, instead of a friendly error like "Value must
be unique" we see a generic "Something went wrong" message.
This PR fixes that by adding a fallback to parse the message instead of
using `error.keyValue`, which doesn't exist in responses from
DocumentDB.
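
A sketch of that fallback (the DocumentDB message pattern here is an assumption):

```ts
const getDuplicateFieldName = (error: {
  keyValue?: Record<string, unknown>
  message: string
}): string | undefined => {
  if (error.keyValue) {
    // MongoDB reports the offending field directly
    return Object.keys(error.keyValue)[0]
  }
  // DocumentDB omits keyValue, so fall back to parsing the raw message
  return /index: ([^\s]+)/.exec(error.message)?.[1]
}
```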
2025-07-22 16:53:25 +00:00
fgrsource
0eb8f75946 docs: fix typo, example was not valid JSON (#13224)
### What?
A comma is missing in the example code. This results in invalid JSON.

### Why?
I stumbled upon it, while setting up a Tenant-based Payload for the
first time.

### How?
Adding a comma results in valid JSON.

Fixes #
Added a comma. ;)
2025-07-21 15:18:40 +00:00
Chandler Gonzales
af2ddff203 fix: text field validation for minLength: 1, required: false (#13124)
Fixes #13113

### How?

Does not rely on JS falsiness; instead, it explicitly checks for null and
undefined.
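
A sketch of the explicit check (the validator shape is illustrative):

```ts
const validateText = (
  value: null | string | undefined,
  options: { minLength?: number; required?: boolean },
): string | true => {
  if (value === null || value === undefined) {
    // absence only fails validation when the field is required
    return options.required ? 'This field is required.' : true
  }
  if (options.minLength && value.length < options.minLength) {
    return `Must be at least ${options.minLength} characters.`
  }
  return true
}
```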


I'm not actually certain this is the approach we want to take. Some
people might interpret "required" as not-null, not-undefined, and a min
length of at least 1 in the case of strings. If they do, this change to the
behavior in the not-required case will break their expectations.
2025-07-21 09:23:44 -04:00
Jessica Rynkar
dce898d7ca fix(ui): ensure publishSpecificLocale works during create operation (#13129)
### What?
This PR ensures that when a document is created using the `Publish in
__` button, it is saved to the correct locale.

### Why?
During document creation, the buttons `Publish` or `Publish in [locale]`
have the same effect. As a result, we overlooked the case where a user
may specifically click `Publish in [locale]` for the first save. In this
scenario, the create operation does not respect the
`publishSpecificLocale` value, so the document was always saved in the
default locale regardless of the intended one.

### How?
Passes the `publishSpecificLocale` value to the create operation,
ensuring the document and version are saved to the correct locale.

**Fixes:** #13117
2025-07-21 09:19:51 -04:00
Jarrod Flesch
7f9de6d101 fix: empty folderType arrays break relational dbs (#13219)
Relational databases were broken with folders because the query being
issued was:
```ts
{
  folderType: {
    equals: []
  }
}
```

This does not work, since `hasMany` select fields store their values in a
separate table.
2025-07-21 08:39:18 -04:00
German Jablonski
d6e21adaf0 docs: shorten line length in code snippet comments to avoid horizontal scrolling (#13217)
prettier doesn't seem to cover that, and horizontal scrolling in the
browser is even more annoying than in the IDE.
Regex used in the search engine: `^[ \t]*\* `
2025-07-18 15:28:44 +01:00
Jacob Fletcher
d7a3faa4e9 fix(ui): properly sync search params to user preferences (#13200)
Some search params within the list view do not properly sync to user
preferences, and vice versa.

For example, when selecting a query preset, the `?preset=123` param is
injected into the URL and saved to preferences, but when reloading the
page without the param, that preset is not reactivated as expected.

### Problem 

The reason this wasn't working before is that omitting this param would
also reset prefs. It was designed this way in order to support
client-side resets, e.g. clicking the query presets "x" button.

This pattern would never work, however, because this means that every
time the user navigates to the list view directly, their preference is
cleared, as no param would exist in the query.

Note: this is not an issue with _all_ params, as not all are handled in
the same way.

### Solution

The fix is to use empty values instead, e.g. `?preset=`. When the server
receives this, it knows to clear the pref. If it doesn't exist at all,
it knows to load from prefs. And if it has a value, it saves to prefs.
On the client, we sanitize those empty values back out so they don't
appear in the URL in the end.
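
The three states, sketched (the handling itself is illustrative):

```ts
const searchParams = new URL(requestUrl).searchParams // requestUrl: assumed input
const preset = searchParams.get('preset')

if (preset === null) {
  // param absent -> load the preset from stored preferences
} else if (preset === '') {
  // explicit empty value -> clear the stored preference
} else {
  // concrete value -> activate it and save it to preferences
}
```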

This PR also refactors much of the list query context and its respective
provider to be significantly more predictable and easier to work with,
namely:

- The `ListQuery` type now fully aligns with what Payload APIs expect,
e.g. `page` is a number, not a string
- The provider now receives a single `query` prop which matches the
underlying context 1:1
- Propagating the query from the server to the URL is significantly more
predictable
- Any new props that may be supported in the future will automatically
work
- No more reconciling `columns` and `listPreferences.columns`, it's just
`query.columns`

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210827129744922
2025-07-18 09:29:26 -04:00
iamacup
46d8a26b0d fix: handle undefined values in afterChange hooks when read:false and create:true on the field level access for parents and siblings (#12664)

### What?

Fixes a bug where `afterChange` hooks would attempt to access values for
fields that are `read: false` but `create: true`, resulting in
`undefined` values and unexpected behavior.

### Why?

In scenarios where access control allows field creation (`create: true`)
but disallows reading it (`read: false`), hooks like `afterChange` would
still attempt to operate on `undefined` values from `siblingDoc` or
`previousDoc`, potentially causing errors or skipped logic.

### How?

Adds safe optional chaining and fallback object initialization in
`promise.ts` for:
- `previousDoc[field.name]`
- `siblingDoc[field.name]`
- Group, Array, and Block field traversals

This ensures that these values are treated as empty objects or arrays
where appropriate to prevent runtime errors during traversal or hook
execution.
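
A sketch of the defensive access pattern (variable names are illustrative):

```ts
const previousValue = previousDoc?.[field.name] ?? {}
const previousSiblingValue = siblingDoc?.[field.name] ?? {}
```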

Fixes https://github.com/payloadcms/payload/issues/12660

---------

Co-authored-by: Niall Bambury <niall.bambury@cuckoo.co>
2025-07-18 13:34:54 +01:00
Alessio Gravili
c08b2aea89 feat: scheduling jobs (#12863)
Adds a new `schedule` property to workflow and task configs that can be
used to have Payload automatically _queue_ jobs following a certain
_schedule_.

Docs:
https://payloadcms.com/docs/dynamic/jobs-queue/schedules?branch=feat/schedule-jobs

## API Example

```ts
export default buildConfig({
  // ...
  jobs: {
    // ...
    scheduler: 'manual', // Or `cron` if you're not using serverless. If `manual` is used, the user needs to run /api/payload-jobs/handleSchedules or payload.jobs.handleSchedules at regular intervals
    tasks: [
      {
        schedule: [
          {
            cron: '* * * * * *',
            queue: 'autorunSecond',
            // Hooks are optional
            hooks: {
              // Not an array, as providing and calling `defaultBeforeSchedule` would be more error-prone if this was an array
              beforeSchedule: async (args) => {
                // Handles verifying that there are no jobs already scheduled or processing.
                // You can override this behavior by not calling defaultBeforeSchedule, e.g. if you wanted
                // to allow a maximum of 3 scheduled jobs in the queue instead of 1, or add any additional conditions
                const result = await args.defaultBeforeSchedule(args)
                return {
                  ...result,
                  input: {
                    message: 'This task runs every second',
                  },
                }
              },
              afterSchedule: async (args) => {
                await args.defaultAfterSchedule(args) // Handles updating the payload-jobs-stats global
                args.req.payload.logger.info(
                  'EverySecond task scheduled: ' +
                  (args.status === 'success' ? args.job.id : 'skipped or failed to schedule'),
                )
              },
            },
          },
        ],
        slug: 'EverySecond',
        inputSchema: [
          {
            name: 'message',
            type: 'text',
            required: true,
          },
        ],
        handler: ({ input, req }) => {
          req.payload.logger.info(input.message)
          return {
            output: {},
          }
        },
      }
    ]
  }
})
```

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210495300843759
2025-07-18 06:48:27 -04:00
Jake Fell
4ae503d700 fix: exit payload jobs:run process after completion (#13211)
### What?

Exit the process after running jobs.

### Why?

When running the `payload jobs:run` bin script with a postgres database
the process hangs forever.

### How?

Execute `process.exit(0)` after running all jobs.
2025-07-17 19:33:49 +00:00
Elliot DeNolf
a3361356b2 chore(release): v3.48.0 [skip ci] 2025-07-17 14:45:59 -04:00
Patrik
95e373e60b fix(plugin-import-export): disabled flag to cascade to nested fields from parent containers (#13199)
### What?

Fixes the `custom.plugin-import-export.disabled` flag to correctly
disable fields in all nested structures including:
- Groups
- Arrays
- Tabs
- Blocks

Previously, only top-level fields or direct children were respected.
This update ensures nested paths (e.g. `group.array.field1`,
`blocks.hero.title`, etc.) are matched and filtered from exports.

### Why?

To allow users to disable entire field groups or deeply nested fields in
structured layouts.

### How?

- Updated regex logic in both `createExport` and Preview components to
recursively support:
  - Indexed array fields (e.g. `array_0_field1`)
  - Block fields with slugs (e.g. `blocks_0_hero_title`)
  - Nested field accessors with correct part-by-part expansion
2025-07-17 18:12:58 +00:00
Jarrod Flesch
12539c61d4 feat(ui): supports collection scoped folders (#12797)
As discussed in [this
RFC](https://github.com/payloadcms/payload/discussions/12729), this PR
supports collection-scoped folders. You can scope folders to multiple
collection types or just one.

This unlocks the possibility of having folders per collection instead of
always sharing them across every collection. You can combine this feature
with `browseByFolder: false` to completely isolate a collection from
other collections.

Things left to do:
- [x] ~~Create a custom react component for the selecting of
collectionSlugs to filter out available options based on the current
folders parameters~~


https://github.com/user-attachments/assets/14cb1f09-8d70-4cb9-b1e2-09da89302995


---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210564397815557
2025-07-17 13:24:22 -04:00
Alessio Gravili
6ae730b33b feat(richtext-lexical): export $createLinkNode and $isLinkNode for server use (#13205)
Exports `$createLinkNode`, `$isLinkNode` and the equivalent modules for
autolinks.
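
Usage might look like this (the exact import path is an assumption):

```ts
import { $createLinkNode, $isLinkNode } from '@payloadcms/richtext-lexical'

// e.g. inside a server-side editor state update:
// const linkNode = $createLinkNode({ fields: { url: 'https://payloadcms.com' } })
```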

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210710489889573
2025-07-17 09:24:37 -04:00
Sasha
a20b43624b feat: add findDistinct operation (#13102)
Adds a new operation, `findDistinct`, that returns the distinct values of a
field for a given collection.
Example:
Assume you have a posts collection with multiple documents, where some of
them share the same title:
```js
// Example dataset (some titles appear multiple times)
[
  { title: 'title-1' },
  { title: 'title-2' },
  { title: 'title-1' },
  { title: 'title-3' },
  { title: 'title-2' },
  { title: 'title-4' },
  { title: 'title-5' },
  { title: 'title-6' },
  { title: 'title-7' },
  { title: 'title-8' },
  { title: 'title-9' },
]
```
You can now retrieve all unique title values using findDistinct:
```js
const result = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
})

console.log(result.values)
// Output:
// [
//   'title-1',
//   'title-2',
//   'title-3',
//   'title-4',
//   'title-5',
//   'title-6',
//   'title-7',
//   'title-8',
//   'title-9'
// ]
```
You can also limit the number of distinct results:
```js
const limitedResult = await payload.findDistinct({
  collection: 'posts',
  field: 'title',
  sortOrder: 'desc',
  limit: 3,
})

console.log(limitedResult.values)
// Output:
// [
//   'title-9',
//   'title-8',
//   'title-7'
// ]
```

You can also pass a `where` query to filter the documents.
2025-07-16 17:18:14 -04:00
Sean Zubrickas
cab7ba4a8a fix: Enhances field-level access controls on Users collection to address s… (#13197)
Enhance field-level access controls on Users collection to address
security concerns

- Restricted read/update access on `email` field to admins and the user
themselves
- Locked down `roles` field so only admins can create, read, or update
it
2025-07-16 15:36:32 -04:00
Elliott W
41cff6d436 fix(db-mongodb): improve compatibility with Firestore database (#12763)
### What?

Adds four more arguments to the `mongooseAdapter`:

```typescript
  useJoinAggregations?: boolean  /* The big one */
  useAlternativeDropDatabase?: boolean
  useBigIntForNumberIDs?: boolean
  usePipelineInSortLookup?: boolean
```

Also exports a new `compatabilityOptions` object from
`@payloadcms/db-mongodb` where each key is a mongo-compatible database
and the value is the recommended `mongooseAdapter` settings for
compatibility.
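
Usage might look like this (the `firestore` key is an assumption based on the description):

```ts
import { compatabilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'

// Spread the recommended settings for the target database
export const db = mongooseAdapter({
  url: process.env.DATABASE_URI ?? '',
  ...compatabilityOptions.firestore,
})
```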

### Why?

When using firestore and visiting
`/admin/collections/media/payload-folders`, we get:

```
MongoServerError: invalid field(s) in lookup: [let, pipeline], only lookup(from, localField, foreignField, as) is supported
```

Firestore doesn't support the full MongoDB aggregation API used by
Payload which gets used when building aggregations for populating join
fields.

There are several other compatibility issues with Firestore:
- The invalid `pipeline` property is used in the `$lookup` aggregation
in `buildSortParams`
- Firestore only supports number IDs of type `Long`, but Mongoose
converts custom ID fields of type number to `Double`
- Firestore does not support the `dropDatabase` command
- Firestore does not support the `createIndex` command (not addressed in
this PR)

### How?

 ```typescript
useJoinAggregations?: boolean  /* The big one */
```
When this is `false` we skip the `buildJoinAggregation()` pipeline and resolve the join fields through multiple queries. This can potentially be used with AWS DocumentDB and Azure Cosmos DB to support join fields, but I have not tested with either of these databases.

 ```typescript
useAlternativeDropDatabase?: boolean
```
When `true`, monkey-patch (replace) the `dropDatabase` function so that
it calls `collection.deleteMany({})` on every collection instead of
sending a single `dropDatabase` command to the database

 ```typescript
useBigIntForNumberIDs?: boolean
```
When `true`, use `mongoose.Schema.Types.BigInt` for custom ID fields of type `number` which converts to a firestore `Long` behind the scenes

```typescript
  usePipelineInSortLookup?: boolean
```
When `false`, modify the sortAggregation pipeline in `buildSortParams()` so that we don't use the `pipeline` property in the `$lookup` aggregation. Results in slightly worse performance when sorting by relationship properties.

### Limitations

This PR does not add support for transactions or creating indexes in firestore.

### Fixes

Fixed a bug (and added a test) where you weren't able to sort by multiple properties on a relationship field.

### Future work

1. Firestore supports simple `$lookup` aggregations but other databases might not. Could add a `useSortAggregations` property which can be used to disable aggregations in sorting.

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
2025-07-16 15:17:43 -04:00
Elliot DeNolf
e6da384a43 ci: disable bundle analysis for forks (#13198)
The bundle analysis action requires comment permissions which are not
available to PRs from forks.

This PR disables bundle analysis until we can implement this in a
separate workflow as shown in [the docs
here](https://github.com/exoego/esbuild-bundle-analyzer?tab=readme-ov-file#github-action-setup-for-public-repositories).
2025-07-16 12:56:42 -04:00
Alessio Gravili
7cd682c66a perf(drizzle): further optimize postgres row updates (#13184)
This is a follow-up to https://github.com/payloadcms/payload/pull/13060.

There are a bunch of other db adapter methods that use `upsertRow` for
updates: `updateGlobal`, `updateGlobalVersion`, `updateJobs`,
`updateMany`, `updateVersion`.

The previous PR had the logic for using the optimized row updating logic
inside the `updateOne` adapter. This PR moves that logic to the original
`upsertRow` function. Benefits:
- all the other db methods will benefit from this massive optimization
as well. This will be especially relevant for optimizing postgres job
queue initial updates - we should be able to close
https://github.com/payloadcms/payload/pull/11865 after another follow-up
PR
- easier to read db adapter methods due to less code.

---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1210803039809810
2025-07-16 09:45:02 -07:00
jangir-ritik
be8e8d9c7f docs: fix minor typo (#13185) 2025-07-16 05:46:17 +00:00
Sasha
841bf891d0 feat: atomic number field updates (#13118)
Based on https://github.com/payloadcms/payload/pull/13060, which should
be merged first.
This PR adds the ability to update number fields atomically, which can be
important with parallel writes. For now, we support this only via
`payload.db.updateOne`.

For example:
```js
// increment by 10
const res = await payload.db.updateOne({
  data: {
    number: {
      $inc: 10,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})

// decrement by 3
const res2 = await payload.db.updateOne({
  data: {
    number: {
      $inc: -3,
    },
  },
  collection: 'posts',
  where: { id: { equals: post.id } },
})
```
2025-07-15 21:53:45 -07:00
Patrik
2a59c5bf8c fix(plugin-import-export): export field dropdown to properly label and path fields in named/unnamed tabs (#13180)
### What?

Fixes the export field selection dropdown to correctly differentiate
between fields in named and unnamed tabs.

### Why?

Previously, when a `tabs` field contained both named and unnamed tabs,
subfields with the same `name` would appear as duplicates in the
dropdown (e.g. `Tab To CSV`, `Tab To CSV`). Additionally, selecting a
field from a named tab would incorrectly map it to the unnamed version
due to shared labels and missing path prefixes.

### How?

- Updated the `reduceFields` utility to manually construct the field
path and label using the tab’s `name` if present.
- Ensured unnamed tabs treat subfields as top-level and skip prefixing
altogether.
- Adjusted label prefix logic to show `Named Tab > Field Name` when
appropriate.


#### Before
<img width="169" height="79" alt="Screenshot 2025-07-15 at 2 55 14 PM"
src="https://github.com/user-attachments/assets/2ab2d19e-41a3-4be2-8496-1da2a79f88e1"
/>

#### After
<img width="211" height="79" alt="Screenshot 2025-07-15 at 2 50 38 PM"
src="https://github.com/user-attachments/assets/0620e96a-71cd-4eb1-9396-30d461ed47a5"
/>
2025-07-15 16:41:07 -04:00
Alessio Gravili
64d76a3869 fix: cron jobs running when calling bin scripts, leading to db errors (#13135)
Previously, we always initialized cron jobs when calling
`getPayload` or `payload.init`.

This is undesired in bin scripts - we don't want cron jobs to start
triggering db calls while we're running an initial migration using
`payload migrate`, for example. This has previously led to a race
condition, triggering the following occasional error if job autoruns
were enabled:

```ts
DrizzleQueryError: Failed query: select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5
params: true,false,2025-07-10T21:25:03.002Z,autorunSecond,100
    at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:74:11)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
    ... 6 lines matching cause stack trace ...
    at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
  query: `select "payload_jobs"."id", "payload_jobs"."input", "payload_jobs"."completed_at", "payload_jobs"."total_tried", "payload_jobs"."has_error", "payload_jobs"."error", "payload_jobs"."workflow_slug", "payload_jobs"."task_slug", "payload_jobs"."queue", "payload_jobs"."wait_until", "payload_jobs"."processing", "payload_jobs"."updated_at", "payload_jobs"."created_at", "payload_jobs_log"."data" as "log" from "payload_jobs" "payload_jobs" left join lateral (select coalesce(json_agg(json_build_array("payload_jobs_log"."_order", "payload_jobs_log"."id", "payload_jobs_log"."executed_at", "payload_jobs_log"."completed_at", "payload_jobs_log"."task_slug", "payload_jobs_log"."task_i_d", "payload_jobs_log"."input", "payload_jobs_log"."output", "payload_jobs_log"."state", "payload_jobs_log"."error") order by "payload_jobs_log"."_order" asc), '[]'::json) as "data" from (select * from "payload_jobs_log" "payload_jobs_log" where "payload_jobs_log"."_parent_id" = "payload_jobs"."id" order by "payload_jobs_log"."_order" asc) "payload_jobs_log") "payload_jobs_log" on true where ("payload_jobs"."completed_at" is null and ("payload_jobs"."has_error" is null or "payload_jobs"."has_error" <> $1) and "payload_jobs"."processing" = $2 and ("payload_jobs"."wait_until" is null or "payload_jobs"."wait_until" < $3) and "payload_jobs"."queue" = $4) order by "payload_jobs"."created_at" asc limit $5`,
  params: [ true, false, '2025-07-10T21:25:03.002Z', 'autorunSecond', 100 ],
  cause: error: relation "payload_jobs" does not exist
      at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/pg@8.16.3/node_modules/pg/lib/client.js:545:17
      at processTicksAndRejections (node:internal/process/task_queues:105:5)
      at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:161:13
      at NodePgPreparedQuery.queryWithCache (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/pg-core/session.ts:72:12)
      at /Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/drizzle-orm@0.44.2_@libsql+client@0.14.0_bufferutil@4.0.8_utf-8-validate@6.0.5__@opentelemetr_asjmtflojkxlnxrshoh4fj5f6u/node_modules/src/node-postgres/session.ts:154:19
      at find (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/find/findMany.ts:162:19)
      at Object.updateMany (/Users/alessio/Documents/GitHub/payload2/packages/drizzle/src/updateJobs.ts:26:16)
      at updateJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/utilities/updateJob.ts:102:37)
      at runJobs (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/operations/runJobs/index.ts:181:25)
      at Object.run (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/queues/localAPI.ts:137:12)
      at N.fn (/Users/alessio/Documents/GitHub/payload2/packages/payload/src/index.ts:866:13)
      at N._trigger (/Users/alessio/Documents/GitHub/payload2/node_modules/.pnpm/croner@9.0.0/node_modules/croner/dist/croner.cjs:1:16806) {
    length: 112,
    severity: 'ERROR',
    code: '42P01',
    detail: undefined,
    hint: undefined,
    position: '406',
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: undefined,
    table: undefined,
    column: undefined,
    dataType: undefined,
    constraint: undefined,
    file: 'parse_relation.c',
    line: '1449',
    routine: 'parserOpenTable'
  }
}
```

This PR makes running crons opt-in using a new `cron` flag. By default,
no cron jobs will be created.
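
Opting back in might look like this (the exact flag placement is an assumption):

```ts
import { getPayload } from 'payload'
import config from './payload.config'

// Cron jobs only start when explicitly requested
const payload = await getPayload({ config, cron: true })
```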
2025-07-15 13:24:50 -04:00
Sasha
c3af32e133 fix postgres build 2 2025-05-26 19:23:41 +03:00
Sasha
fd850e734b fix postgres build 2025-05-26 18:50:25 +03:00
Sasha
5de2f52aa0 fix vercel postgres build 2025-05-26 18:49:24 +03:00
Sasha
733594b9c2 fix package jason 2025-05-26 17:54:27 +03:00
Sasha
9bd5f6f5f8 try testing 2025-05-26 17:53:47 +03:00
Sasha
da4270f299 sqlOnly migrations 2025-05-26 17:16:15 +03:00
Sasha
52f9dcae82 finish d1 package and fix errors 2025-05-26 17:11:12 +03:00
Sasha
0829cfb712 fix imports 2025-05-25 00:58:32 +03:00
Sasha
f51c972ac1 add exports from drizzle 2025-05-24 16:25:27 +03:00
Sasha
6652608c10 move sqlite logic to the drizzle package 2025-05-24 16:17:28 +03:00
671 changed files with 28997 additions and 5623 deletions


@@ -1,14 +1,15 @@
 #!/bin/bash
-severity=${1:-"critical"}
-audit_json=$(pnpm audit --prod --json)
+severity=${1:-"high"}
 output_file="audit_output.json"
+echo "Auditing for ${severity} vulnerabilities..."
+audit_json=$(pnpm audit --prod --json)
 echo "${audit_json}" | jq --arg severity "${severity}" '
 .advisories | to_entries |
-map(select(.value.patched_versions != "<0.0.0" and .value.severity == $severity) |
+map(select(.value.patched_versions != "<0.0.0" and (.value.severity == $severity or ($severity == "high" and .value.severity == "critical"))) |
 {
   package: .value.module_name,
   vulnerable: .value.vulnerable_versions,


@@ -9,7 +9,7 @@ on:
   audit-level:
     description: The level of audit to run (low, moderate, high, critical)
     required: false
-    default: critical
+    default: high
   debug:
     description: Enable debug logging
     required: false


@@ -153,6 +153,7 @@ jobs:
       matrix:
         database:
           - mongodb
+          - firestore
           - postgres
           - postgres-custom-schema
           - postgres-uuid
@@ -283,6 +284,8 @@ jobs:
           - fields__collections__Text
           - fields__collections__UI
           - fields__collections__Upload
+          - group-by
+          - folders
           - hooks
           - lexical__collections__Lexical__e2e__main
           - lexical__collections__Lexical__e2e__blocks
@@ -301,6 +304,7 @@ jobs:
           - plugin-nested-docs
           - plugin-seo
           - sort
+          - trash
           - versions
           - uploads
         env:
@@ -417,6 +421,8 @@ jobs:
           - fields__collections__Text
           - fields__collections__UI
           - fields__collections__Upload
+          - group-by
+          - folders
           - hooks
           - lexical__collections__Lexical__e2e__main
           - lexical__collections__Lexical__e2e__blocks
@@ -435,6 +441,7 @@ jobs:
           - plugin-nested-docs
           - plugin-seo
           - sort
+          - trash
           - versions
           - uploads
         env:
@@ -718,6 +725,8 @@ jobs:
           DO_NOT_TRACK: 1 # Disable Turbopack telemetry
       - name: Analyze esbuild bundle size
+        # Temporarily disable this for community PRs until this can be implemented in a separate workflow
+        if: github.event.pull_request.head.repo.fork == false
         uses: exoego/esbuild-bundle-analyzer@v1
         with:
           metafiles: 'packages/payload/meta_index.json,packages/payload/meta_shared.json,packages/ui/meta_client.json,packages/ui/meta_shared.json,packages/next/meta_index.json,packages/richtext-lexical/meta_client.json'

.vscode/launch.json

@@ -139,6 +139,13 @@
       "request": "launch",
       "type": "node-terminal"
     },
+    {
+      "command": "pnpm tsx --no-deprecation test/dev.ts trash",
+      "cwd": "${workspaceFolder}",
+      "name": "Run Dev Trash",
+      "request": "launch",
+      "type": "node-terminal"
+    },
     {
       "command": "pnpm tsx --no-deprecation test/dev.ts uploads",
       "cwd": "${workspaceFolder}",


@@ -77,13 +77,9 @@ If you wish to use your own MongoDB database for the `test` directory instead of
 ### Using Postgres
-If you have postgres installed on your system, you can also run the test suites using postgres. By default, mongodb is used.
-To do that, simply set the `PAYLOAD_DATABASE` environment variable to `postgres`.
+Our test suites supports automatic PostgreSQL + PostGIS setup using Docker. No local PostgreSQL installation required. By default, mongodb is used.
+To use postgres, simply set the `PAYLOAD_DATABASE` environment variable to `postgres`.
 ```bash
 PAYLOAD_DATABASE=postgres pnpm dev {suite}
 ```
 ### Running the e2e and int tests


@@ -77,7 +77,7 @@ All auto-generated files will contain the following comments at the top of each
 ## Admin Options
-All options for the Admin Panel are defined in your [Payload Config](../configuration/overview) under the `admin` property:
+All root-level options for the Admin Panel are defined in your [Payload Config](../configuration/overview) under the `admin` property:
 ```ts
 import { buildConfig } from 'payload'


@@ -1,62 +0,0 @@
---
title: Project Configuration
label: Configuration
order: 20
desc: Quickly configure and deploy your Payload Cloud project in a few simple steps.
keywords: configuration, config, settings, project, cloud, payload cloud, deploy, deployment
---
## Select your plan
Once you have created a project, you will need to select your plan. This will determine the resources that are allocated to your project and the features that are available to you.
<Banner type="success">
Note: All Payload Cloud teams that deploy a project require a card on file.
This helps us prevent fraud and abuse on our platform. If you select a plan
with a free trial, you will not be charged until your trial period is over.
We'll remind you 7 days before your trial ends and you can cancel anytime.
</Banner>
## Project Details
| Option | Description |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Region** | Select the region closest to your audience. This will ensure the fastest communication between your data and your client. |
| **Project Name** | A name for your project. You can change this at any time. |
| **Project Slug** | Choose a unique slug to identify your project. This needs to be unique for your team and you can change it any time. |
| **Team** | Select the team you want to create the project under. If this is your first project, a personal team will be created for you automatically. You can modify your team settings and invite new members at any time from the Team Settings page. |
## Build Settings
If you are deploying a new project from a template, the following settings will be automatically configured for you. If you are using your own repository, you need to make sure your build settings are accurate for your project to deploy correctly.
| Option | Description |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Root Directory** | The folder where your `package.json` file lives. |
| **Install Command** | The command used to install your modules, for example: `yarn install` or `npm install` |
| **Build Command** | The command used to build your application, for example: `yarn build` or `npm run build` |
| **Serve Command** | The command used to serve your application, for example: `yarn serve` or `npm run serve` |
| **Branch to Deploy** | Select the branch of your repository that you want to deploy from. This is the branch that will be used to build your project when you commit new changes. |
| **Default Domain** | Set a default domain for your project. This must be unique and you will not be able to change it. You can always add a custom domain later in your project settings. |
## Environment Variables
Any of the features in Payload Cloud that require environment variables will automatically be provided to your application. If your app requires any custom environment variables, you can set them here.
<Banner type="warning">
Note: For security reasons, any variables you wish to provide to the [Admin
Panel](../admin/overview) must be prefixed with `NEXT_PUBLIC_`.  Learn more
[here](../configuration/environment-vars).
</Banner>
## Payment
Payment methods can be set per project and can be updated any time. You can use your team's default payment method, or add a new one. Modify your payment methods in your Project settings / Team settings.
<Banner type="success">
**Note:** All Payload Cloud teams that deploy a project require a card on
file. This helps us prevent fraud and abuse on our platform. If you select a
plan with a free trial, you will not be charged until your trial period is
over. We'll remind you 7 days before your trial ends and you can cancel
anytime.
</Banner>


@@ -1,53 +0,0 @@
---
title: Getting Started
label: Getting Started
order: 10
desc: Get started with Payload Cloud, a deployment solution specifically designed for Node + MongoDB applications.
keywords: cloud, hosted, database, storage, email, deployment, serverless, node, mongodb, s3, aws, cloudflare, atlas, resend, payload, cms
---
A deployment solution specifically designed for Node.js + MongoDB applications, offering seamless deployment of your entire stack in one place. You can get started in minutes with a one-click template or bring your own codebase with you.
Payload Cloud offers various plans tailored to meet your specific needs, including a MongoDB Atlas database, S3 file storage, and email delivery powered by [Resend](https://resend.com). To see a full breakdown of features and plans, see our [Cloud Pricing page](https://payloadcms.com/cloud-pricing).
To get started, you first need to create an account. Head over to [the login screen](https://payloadcms.com/login) and **Register for Free**.
<Banner type="success">
To create your first project, you can either select [a
template](#starting-from-a-template) or [import an existing
project](#importing-from-an-existing-codebase) from GitHub.
</Banner>
## Starting from a Template
Templates come preconfigured and provide a one-click solution to quickly deploy a new application.
![Screen for creating a new project from a template](https://payloadcms.com/images/docs/cloud/create-from-template.jpg)
_Creating a new project from a template._
After creating an account, select your desired template from the Projects page. At this point, you need to authorize the Payload Cloud application with your GitHub account. Click **Continue with GitHub** and follow the prompts to authorize the app.
Next, select your `GitHub Scope`. If you belong to multiple organizations, they will show up here. If you do not see the organization you are looking for, you may need to adjust your GitHub app permissions.
After selecting your scope, create a unique `repository name` and select whether you want your repository to be public or private on GitHub.
<Banner type="warning">
**Note:** Public repositories can be accessed by anyone online, while private
repositories grant access only to you and anyone you explicitly authorize.
</Banner>
Once you are ready, click **Create Project**. This will clone the selected template to a new repository in your GitHub account, and take you to the configuration page to set up your project for deployment.
## Importing from an Existing Codebase
Payload Cloud works for any Node.js + MongoDB app. From the New Project page, select **import an existing Git codebase**. Choose the organization and select the repository you want to import. From here, you will be taken to the configuration page to set up your project for deployment.
![Screen for creating a new project from an existing repository](https://payloadcms.com/images/docs/cloud/create-from-existing.jpg)
_Creating a new project from an existing repository._
<Banner type="warning">
**Note:** In order to make use of the features of Payload Cloud in your own
codebase, you will need to add the [Cloud
Plugin](https://github.com/payloadcms/payload/tree/main/packages/payload-cloud)
to your Payload app.
</Banner>


@@ -1,137 +0,0 @@
---
title: Cloud Projects
label: Projects
order: 40
desc: Manage your Payload Cloud projects.
keywords: cloud, payload cloud, projects, project, overview, database, file storage, build settings, environment variables, custom domains, email, developing locally
---
## Overview
<Banner>
The overview tab shows your most recent deployment, along with build and
deployment logs. From here, you can see your live URL, deployment details like
timestamps and commit hash, as well as the status of your deployment. You can
also trigger a redeployment manually, which will rebuild your project using
the current configuration.
</Banner>
![Payload Cloud Overview Page](https://payloadcms.com/images/docs/cloud/overview-page.jpg)
_A screenshot of the Overview page for a Cloud project._
## Database
Your Payload Cloud project comes with a MongoDB serverless Atlas DB instance or a Dedicated Atlas cluster, depending on your plan. To interact with your cloud database, you will be provided with a MongoDB connection string. This can be found under the **Database** tab of your project.
`mongodb+srv://your_connection_string`
## File Storage
Payload Cloud gives you S3 file storage backed by Cloudflare as a CDN, and the Cloud Plugin extends Payload so that all of your media will be stored in S3 rather than locally.
AWS Cognito is used for authentication to your S3 bucket. The [Payload Cloud Plugin](https://github.com/payloadcms/payload/tree/main/packages/payload-cloud) will automatically pick up these values. These values are only needed if you'd like to access your files directly, outside of Payload Cloud.
### Accessing Files Outside of Payload Cloud
If you'd like to access your files outside of Payload Cloud, you'll need to retrieve some values from your project's settings and put them into your environment variables. In Payload Cloud, navigate to the File Storage tab and copy the values using the copy button, then put them in your `.env` file. Copy the Cognito Password value separately and add it to your `.env` file as well.
When you are done, you should have the following values in your `.env` file:
```env
PAYLOAD_CLOUD=true
PAYLOAD_CLOUD_ENVIRONMENT=prod
PAYLOAD_CLOUD_COGNITO_USER_POOL_CLIENT_ID=
PAYLOAD_CLOUD_COGNITO_USER_POOL_ID=
PAYLOAD_CLOUD_COGNITO_IDENTITY_POOL_ID=
PAYLOAD_CLOUD_PROJECT_ID=
PAYLOAD_CLOUD_BUCKET=
PAYLOAD_CLOUD_BUCKET_REGION=
PAYLOAD_CLOUD_COGNITO_PASSWORD=
```
The plugin will pick up these values and use them to access your files.
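As a rough sketch of what direct access could look like with the AWS SDK v3 (this is an assumption-heavy example, not an official client: the user pool region and the use of the project ID as the Cognito username are guesses about how the Cloud Plugin authenticates):

```ts
import {
  CognitoIdentityProviderClient,
  InitiateAuthCommand,
} from '@aws-sdk/client-cognito-identity-provider'
import { fromCognitoIdentityPool } from '@aws-sdk/credential-providers'
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3'

const region = process.env.PAYLOAD_CLOUD_BUCKET_REGION!

// 1. Log in to the Cognito user pool (assumption: the project ID is the username)
const idp = new CognitoIdentityProviderClient({ region })
const auth = await idp.send(
  new InitiateAuthCommand({
    AuthFlow: 'USER_PASSWORD_AUTH',
    ClientId: process.env.PAYLOAD_CLOUD_COGNITO_USER_POOL_CLIENT_ID!,
    AuthParameters: {
      USERNAME: process.env.PAYLOAD_CLOUD_PROJECT_ID!,
      PASSWORD: process.env.PAYLOAD_CLOUD_COGNITO_PASSWORD!,
    },
  }),
)

// 2. Exchange the ID token for temporary AWS credentials via the identity pool
const s3 = new S3Client({
  region,
  credentials: fromCognitoIdentityPool({
    clientConfig: { region },
    identityPoolId: process.env.PAYLOAD_CLOUD_COGNITO_IDENTITY_POOL_ID!,
    logins: {
      [`cognito-idp.${region}.amazonaws.com/${process.env.PAYLOAD_CLOUD_COGNITO_USER_POOL_ID}`]:
        auth.AuthenticationResult!.IdToken!,
    },
  }),
})

// 3. Read a file from your project's bucket (hypothetical key)
const file = await s3.send(
  new GetObjectCommand({
    Bucket: process.env.PAYLOAD_CLOUD_BUCKET!,
    Key: 'media/example.jpg',
  }),
)
```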
## Build Settings
You can update settings from your project's Settings tab. Changes to your build settings will trigger a redeployment of your project.
## Environment Variables
From the Environment Variables page of the Settings tab, you can add, update and delete variables for use in your project. Like build settings, these changes will trigger a redeployment of your project.
<Banner>
Note: For security reasons, any variables you wish to provide to the [Admin
Panel](../admin/overview) must be prefixed with `NEXT_PUBLIC_`. [More
details](../configuration/environment-vars).
</Banner>
## Custom Domains
With Payload Cloud, you can add custom domain names to your project. To do so, first go to the Domains page of the Settings tab of your project. Here you can see your default domain. To add a new domain, type in the domain name you wish to use.
<Banner>
Note: do not include the protocol (http:// or https://) or any paths (/page).
Only include the domain name and extension, and optionally a subdomain, e.g.
`your-domain.com` or `backend.your-domain.com`.
</Banner>
Once you click save, a DNS record will be generated for your domain name to point to your live project. Add this record to your DNS provider's records, and once the records are resolving properly (this can take anywhere from 1 to 48 hours in some cases), your domain will point to your live project.
You will also need to configure your Payload project to use your specified domain. In your `payload.config.ts` file, specify your `serverURL` with your domain:
```ts
export default buildConfig({
  serverURL: 'https://example.com',
  // the rest of your config
})
```
## Email
Powered by [Resend](https://resend.com), Payload Cloud comes with integrated email support out of the box. No configuration is needed, and you can use `payload.sendEmail()` to send email right from your Payload app. To learn more about sending email with Payload, check out the [Email Configuration](../email/overview) overview.
If you are on the Pro or Enterprise plan, you can add your own custom email domain name. From the Email page of your project's Settings, add the domain you wish to use for email delivery. This will generate a set of DNS records. Add these records to your DNS provider and click verify to check that your records are resolving properly. Once verified, your emails will be sent from your custom domain name.
## Developing Locally
To make changes to your project, you will need to clone the repository defined in your project settings to your local machine. In order to run your project locally, you will need to configure your local environment first. Refer to your repository's `README.md` file to see the steps needed for your specific template.
From there, you are ready to make updates to your project. When you are ready to make your changes live, commit your changes to the branch you specified in your Project settings, and your application will automatically trigger a redeploy and build from your latest commit.
## Cloud Plugin
Projects generated from a template will come pre-configured with the official Cloud Plugin, but if you are using your own repository you will need to add it to your project. To do so, install the package and add the plugin to your Payload Config:
`pnpm add @payloadcms/payload-cloud`
```js
import { payloadCloudPlugin } from '@payloadcms/payload-cloud'
import { buildConfig } from 'payload'

export default buildConfig({
  plugins: [payloadCloudPlugin()],
  // rest of config
})
```
<Banner type="warning">
**Note:** If your Payload Config already has an email with transport, this
will take precedence over Payload Cloud's email service.
</Banner>
<Banner type="info">
Good to know: the Payload Cloud Plugin was previously named
`@payloadcms/plugin-cloud`. If you are using this plugin, you should update to
the new package name.
</Banner>
#### **Optional configuration**
If you wish to opt out of any Payload Cloud features, the plugin also accepts options to do so.
```js
payloadCloudPlugin({
  storage: false, // Disable file storage
  email: false, // Disable email delivery
})
```


@@ -1,35 +0,0 @@
---
title: Cloud Teams
label: Teams
order: 30
desc: Manage your Payload Cloud team and billing settings.
keywords: team, teams, billing, subscription, payment, plan, plans, cloud, payload cloud
---
<Banner>
Within Payload Cloud, the team management feature offers you the ability to
manage your organization, team members, billing, and subscription settings.
</Banner>
![Payload Cloud Team Settings](https://payloadcms.com/images/docs/cloud/team-settings.jpg)
_A screenshot of the Team Settings page._
## Members
Each team has members that can interact with your projects. You can invite multiple people to your team, and each individual can belong to more than one team. You can assign them either `owner` or `user` permissions. Owners are able to make admin-only changes, such as deleting projects and editing billing information.
## Adding Members
To add a new member to your team, visit your Team Settings page and click “Invite Teammate”. You can then add their email address and assign their role. Press “Save” to send the invitation, which emails the invited team member a link where they can create a new account.
## Billing
Users can update billing settings and subscriptions for any teams where they are designated as an `owner`. To make updates to the team's payment methods, visit the Billing page under the Team Settings tab. You can add new cards, delete cards, and set a payment method as a default. The default payment method will be used in the event that another payment method fails.
## Subscriptions
From the Subscriptions page, a team owner can see all current plans for their team. From here, you can see the price of each plan, if there is an active trial, and when you will be billed next.
## Invoices
The Invoices page will show you the invoices for your account, as well as the status of their payment.


@@ -60,32 +60,33 @@ export const Posts: CollectionConfig = {
The following options are available:
| Option | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `admin` | The configuration options for the Admin Panel. [More details](#admin-options). |
| `access` | Provide Access Control functions to define exactly who should be able to do what with Documents in this Collection. [More details](../access-control/collections). |
| `auth` | Specify options if you would like this Collection to feature authentication. [More details](../authentication/overview). |
| `custom` | Extension point for adding custom data (e.g. for plugins) |
| `disableDuplicate` | When true, do not show the "Duplicate" button while editing documents within this Collection and prevent `duplicate` from all APIs. |
| `defaultSort` | Pass a top-level field to sort by default in the Collection List View. Prefix the name of the field with a minus symbol ("-") to sort in descending order. Multiple fields can be specified by using a string array. |
| `dbName` | Custom table or Collection name depending on the Database Adapter. Auto-generated from slug if not defined. |
| `endpoints` | Add custom routes to the REST API. Set to `false` to disable routes. [More details](../rest-api/overview#custom-endpoints). |
| `fields` \* | Array of field types that will determine the structure and functionality of the data stored within this Collection. [More details](../fields/overview). |
| `graphQL` | Manage GraphQL-related properties for this collection. [More](#graphql) |
| `hooks` | Entry point for Hooks. [More details](../hooks/overview#collection-hooks). |
| `orderable` | If true, enables custom ordering for the collection, and documents can be reordered via drag and drop. Uses [fractional indexing](https://observablehq.com/@dgreensp/implementing-fractional-indexing) for efficient reordering. |
| `labels` | Singular and plural labels for use in identifying this Collection throughout Payload. Auto-generated from slug if not defined. |
| `enableQueryPresets` | Enable query presets for this Collection. [More details](../query-presets/overview). |
| `lockDocuments` | Enables or disables document locking. By default, document locking is enabled. Set to an object to configure, or set to `false` to disable locking. [More details](../admin/locked-documents). |
| `slug` \* | Unique, URL-friendly string that will act as an identifier for this Collection. |
| `timestamps` | Set to false to disable documents' automatically generated `createdAt` and `updatedAt` timestamps. |
| `typescript` | An object with property `interface` as the text used in schema generation. Auto-generated from slug if not defined. |
| `upload` | Specify options if you would like this Collection to support file uploads. For more, consult the [Uploads](../upload/overview) documentation. |
| `versions` | Set to true to enable default options, or configure with object properties. [More details](../versions/overview#collection-config). |
| `defaultPopulate` | Specify which fields to select when this Collection is populated from another document. [More Details](../queries/select#defaultpopulate-collection-config-property). |
| `indexes` | Define compound indexes for this collection. This can be used to either speed up querying/sorting by 2 or more fields at the same time or to ensure uniqueness between several fields. [More details](../database/indexes#compound-indexes). |
| `forceSelect`        | Specify which fields should always be selected, regardless of the `select` query. This can be useful to ensure a field is present for access control / hooks.                                                                      |
| `disableBulkEdit` | Disable the bulk edit operation for the collection in the admin panel and the REST API |
| Option | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `admin` | The configuration options for the Admin Panel. [More details](#admin-options). |
| `access` | Provide Access Control functions to define exactly who should be able to do what with Documents in this Collection. [More details](../access-control/collections). |
| `auth` | Specify options if you would like this Collection to feature authentication. [More details](../authentication/overview). |
| `custom` | Extension point for adding custom data (e.g. for plugins) |
| `disableDuplicate` | When true, do not show the "Duplicate" button while editing documents within this Collection and prevent `duplicate` from all APIs. |
| `defaultSort` | Pass a top-level field to sort by default in the Collection List View. Prefix the name of the field with a minus symbol ("-") to sort in descending order. Multiple fields can be specified by using a string array. |
| `dbName` | Custom table or Collection name depending on the Database Adapter. Auto-generated from slug if not defined. |
| `endpoints` | Add custom routes to the REST API. Set to `false` to disable routes. [More details](../rest-api/overview#custom-endpoints). |
| `fields` \* | Array of field types that will determine the structure and functionality of the data stored within this Collection. [More details](../fields/overview). |
| `graphQL` | Manage GraphQL-related properties for this collection. [More](#graphql) |
| `hooks` | Entry point for Hooks. [More details](../hooks/overview#collection-hooks). |
| `orderable` | If true, enables custom ordering for the collection, and documents can be reordered via drag and drop. Uses [fractional indexing](https://observablehq.com/@dgreensp/implementing-fractional-indexing) for efficient reordering. |
| `labels` | Singular and plural labels for use in identifying this Collection throughout Payload. Auto-generated from slug if not defined. |
| `enableQueryPresets` | Enable query presets for this Collection. [More details](../query-presets/overview). |
| `lockDocuments` | Enables or disables document locking. By default, document locking is enabled. Set to an object to configure, or set to `false` to disable locking. [More details](../admin/locked-documents). |
| `slug` \* | Unique, URL-friendly string that will act as an identifier for this Collection. |
| `timestamps` | Set to false to disable documents' automatically generated `createdAt` and `updatedAt` timestamps. |
| `trash` | A boolean to enable soft deletes for this collection. Defaults to `false`. [More details](../trash/overview). |
| `typescript` | An object with property `interface` as the text used in schema generation. Auto-generated from slug if not defined. |
| `upload` | Specify options if you would like this Collection to support file uploads. For more, consult the [Uploads](../upload/overview) documentation. |
| `versions` | Set to true to enable default options, or configure with object properties. [More details](../versions/overview#collection-config). |
| `defaultPopulate` | Specify which fields to select when this Collection is populated from another document. [More Details](../queries/select#defaultpopulate-collection-config-property). |
| `indexes` | Define compound indexes for this collection. This can be used to either speed up querying/sorting by 2 or more fields at the same time or to ensure uniqueness between several fields. |
| `forceSelect`        | Specify which fields should always be selected, regardless of the `select` query. This can be useful to ensure a field is present for access control / hooks.                                                                      |
| `disableBulkEdit` | Disable the bulk edit operation for the collection in the admin panel and the REST API |
_\* An asterisk denotes that a property is required._
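As a quick illustration, a collection combining a few of these documented options might look like this (a sketch; the field list is minimal):

```ts
import type { CollectionConfig } from 'payload'

export const Posts: CollectionConfig = {
  slug: 'posts',
  defaultSort: '-createdAt', // newest first in the List View
  trash: true, // enable soft deletes
  disableDuplicate: true, // hide the "Duplicate" button and block `duplicate` in all APIs
  fields: [{ name: 'title', type: 'text', required: true }],
}
```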
@@ -130,6 +131,7 @@ The following options are available:
| `description` | Text to display below the Collection label in the List View to give editors more information. Alternatively, you can use the `admin.components.Description` to render a React component. [More details](#custom-components). |
| `defaultColumns` | Array of field names that correspond to which columns to show by default in this Collection's List View. |
| `disableCopyToLocale` | Disables the "Copy to Locale" button while editing documents within this Collection. Only applicable when localization is enabled. |
| `groupBy` | Beta. Enable grouping by a field in the list view. |
| `hideAPIURL` | Hides the "API URL" meta field while editing documents within this Collection. |
| `enableRichTextLink` | The [Rich Text](../fields/rich-text) field features a `Link` element which allows for users to automatically reference related documents within their rich text. Set to `true` by default. |
| `enableRichTextRelationship` | The [Rich Text](../fields/rich-text) field features a `Relationship` element which allows for users to automatically reference related documents within their rich text. Set to `true` by default. |


@@ -30,18 +30,22 @@ export default buildConfig({
## Options
| Option | Description |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `autoPluralization` | Tell Mongoose to auto-pluralize any collection names if it encounters any singular words used as collection `slug`s. |
| `connectOptions` | Customize MongoDB connection options. Payload will connect to your MongoDB database using default options which you can override and extend to include all the [options](https://mongoosejs.com/docs/connections.html#options) available to mongoose. |
| `collectionsSchemaOptions` | Customize Mongoose schema options for collections. |
| `disableIndexHints` | Set to true to disable hinting to MongoDB to use 'id' as index. This is currently done when counting documents for pagination, as it increases the speed of the count function used in that query. Disabling this optimization might fix some problems with AWS DocumentDB. Defaults to false |
| `migrationDir` | Customize the directory that migrations are stored. |
| `transactionOptions` | An object with configuration properties used in [transactions](https://www.mongodb.com/docs/manual/core/transactions/) or `false` which will disable the use of transactions. |
| `collation` | Enable language-specific string comparison with customizable options. Available on MongoDB 3.4+. Defaults locale to "en". Example: `{ strength: 3 }`. For a full list of collation options and their definitions, see the [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/collation/). |
| `allowAdditionalKeys` | By default, Payload strips all additional keys from MongoDB data that don't exist in the Payload schema. If you have some data that you want to include to the result but it doesn't exist in Payload, you can set this to `true`. Be careful as Payload access control _won't_ work for this data. |
| `allowIDOnCreate` | Set to `true` to use the `id` passed in data on the create API operations without using a custom ID field. |
| `disableFallbackSort` | Set to `true` to disable the adapter adding a fallback sort when sorting by non-unique fields, this can affect performance in some cases but it ensures a consistent order of results. |
| Option | Description |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `autoPluralization` | Tell Mongoose to auto-pluralize any collection names if it encounters any singular words used as collection `slug`s. |
| `connectOptions` | Customize MongoDB connection options. Payload will connect to your MongoDB database using default options which you can override and extend to include all the [options](https://mongoosejs.com/docs/connections.html#options) available to mongoose. |
| `collectionsSchemaOptions` | Customize Mongoose schema options for collections. |
| `disableIndexHints` | Set to true to disable hinting to MongoDB to use 'id' as index. This is currently done when counting documents for pagination, as it increases the speed of the count function used in that query. Disabling this optimization might fix some problems with AWS DocumentDB. Defaults to false |
| `migrationDir` | Customize the directory that migrations are stored. |
| `transactionOptions` | An object with configuration properties used in [transactions](https://www.mongodb.com/docs/manual/core/transactions/) or `false` which will disable the use of transactions. |
| `collation` | Enable language-specific string comparison with customizable options. Available on MongoDB 3.4+. Defaults locale to "en". Example: `{ strength: 3 }`. For a full list of collation options and their definitions, see the [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/collation/). |
| `allowAdditionalKeys` | By default, Payload strips all additional keys from MongoDB data that don't exist in the Payload schema. If you have some data that you want to include to the result but it doesn't exist in Payload, you can set this to `true`. Be careful as Payload access control _won't_ work for this data. |
| `allowIDOnCreate` | Set to `true` to use the `id` passed in data on the create API operations without using a custom ID field. |
| `disableFallbackSort` | Set to `true` to disable the adapter adding a fallback sort when sorting by non-unique fields, this can affect performance in some cases but it ensures a consistent order of results. |
| `useAlternativeDropDatabase` | Set to `true` to use an alternative `dropDatabase` implementation that calls `collection.deleteMany({})` on every collection instead of sending a raw `dropDatabase` command. Payload only uses `dropDatabase` for testing purposes. Defaults to `false`. |
| `useBigIntForNumberIDs` | Set to `true` to use `BigInt` for custom ID fields of type `'number'`. Useful for databases that don't support `double` or `int32` IDs. Defaults to `false`. |
| `useJoinAggregations` | Set to `false` to disable join aggregations (which use correlated subqueries) and instead populate join fields via multiple `find` queries. Defaults to `true`. |
| `usePipelineInSortLookup` | Set to `false` to disable the use of `pipeline` in the `$lookup` aggregation in sorting. Defaults to `true`. |
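As a quick sketch, an adapter config combining a few of these options might look like this (the option values are illustrative, not recommendations):

```ts
import { mongooseAdapter } from '@payloadcms/db-mongodb'
import { buildConfig } from 'payload'

export default buildConfig({
  db: mongooseAdapter({
    url: process.env.DATABASE_URI,
    autoPluralization: false, // keep collection names exactly as the slug
    transactionOptions: false, // disable transactions entirely
    collation: { strength: 3 }, // case- and accent-sensitive string comparison
  }),
})
```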
## Access to Mongoose models
@@ -56,9 +60,21 @@ You can access Mongoose models as follows:
## Using other MongoDB implementations
Limitations with [DocumentDB](https://aws.amazon.com/documentdb/) and [Azure Cosmos DB](https://azure.microsoft.com/en-us/products/cosmos-db):
You can import the `compatabilityOptions` object to get the recommended settings for other MongoDB implementations. Since these databases aren't officially supported by Payload, you may still encounter issues even with these settings (please create an issue or PR if you believe these options should be updated):
- For Azure Cosmos DB you must pass `transactionOptions: false` to the adapter options. Azure Cosmos DB does not support transactions that update two and more documents in different collections, which is a common case when using Payload (via hooks).
- For Azure Cosmos DB the root config property `indexSortableFields` must be set to `true`.
- The [Join Field](../fields/join) is not supported in DocumentDB and Azure Cosmos DB, as we internally use MongoDB aggregations to query data for that field, which are limited there. This can be changed in the future.
- For DocumentDB pass `disableIndexHints: true` to disable hinting to the DB to use `id` as index which can cause problems with DocumentDB.
```ts
import { mongooseAdapter, compatabilityOptions } from '@payloadcms/db-mongodb'
import { buildConfig } from 'payload'

export default buildConfig({
  db: mongooseAdapter({
    url: process.env.DATABASE_URI,
    // For example, if you're using firestore:
    ...compatabilityOptions.firestore,
  }),
})
```
We export compatibility options for [DocumentDB](https://aws.amazon.com/documentdb/), [Azure Cosmos DB](https://azure.microsoft.com/en-us/products/cosmos-db) and [Firestore](https://cloud.google.com/firestore/mongodb-compatibility/docs/overview). Known limitations:
- Azure Cosmos DB does not support transactions that update two or more documents in different collections, which is a common case when using Payload (via hooks).
- For Azure Cosmos DB, the root config property `indexSortableFields` must be set to `true`.


@@ -157,6 +157,7 @@ The following field names are forbidden and cannot be used:
- `salt`
- `hash`
- `file`
- `status` - with Postgres Adapter and when drafts are enabled
### Field-level Hooks


@@ -51,7 +51,7 @@ export default buildConfig({
// add as many cron jobs as you want
],
shouldAutoRun: async (payload) => {
// Tell Payload if it should run jobs or not.
// Tell Payload if it should run jobs or not. This function is optional and will return true by default.
// This function will be invoked each time Payload goes to pick up and run jobs.
// If this function ever returns false, the cron schedule will be stopped.
return true


@@ -0,0 +1,155 @@
---
title: Job Schedules
label: Schedules
order: 60
desc: Payload allows you to schedule jobs to run periodically
keywords: jobs queue, application framework, typescript, node, react, nextjs, scheduling, cron, schedule
---
Payload's `schedule` property lets you enqueue Jobs regularly according to a cron schedule - daily, weekly, hourly, or any custom interval. This is ideal for tasks or workflows that must repeat automatically and without manual intervention.
Scheduling Jobs differs significantly from running them:
- **Queueing**: Scheduling only creates (enqueues) the Job according to your cron expression. It does not immediately execute any business logic.
- **Running**: Execution happens separately through your Jobs runner - such as autorun, or manual invocation using `payload.jobs.run()` or the `payload-jobs/run` endpoint.
Use the `schedule` property specifically when you have recurring tasks or workflows. To enqueue a single Job to run once in the future, use the `waitUntil` property instead.
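For example, a one-off Job an hour from now could be enqueued like this (a sketch using the standard `payload.jobs.queue` Local API; `sendReminder` and its input are hypothetical):

```ts
// a sketch: enqueue a single Job to run once in the future
await payload.jobs.queue({
  task: 'sendReminder', // hypothetical task slug
  input: { userId: '123' }, // hypothetical input
  waitUntil: new Date(Date.now() + 60 * 60 * 1000), // one hour from now
})
```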
## Example use cases
**Regular emails or notifications**
Send nightly digests, weekly newsletters, or hourly updates.
**Batch processing during off-hours**
Process analytics data or rebuild static sites during low-traffic times.
**Periodic data synchronization**
Regularly push or pull updates to or from external APIs.
## Handling schedules
Something needs to actually trigger the scheduling of Jobs - that is, to execute the scheduling lifecycle described below. By default, the `jobs.autoRun` configuration, as well as the `/api/payload-jobs/run` endpoint, will also handle scheduling for the queues specified in the `autoRun` configuration.
You can disable this behavior by setting `disableScheduling: true` in your `autoRun` configuration, or by passing `disableScheduling=true` to the `/api/payload-jobs/run` endpoint. This is useful if you want to handle scheduling manually, for example, by using a cron job or a serverless function that calls the `/api/payload-jobs/handle-schedules` endpoint or the `payload.jobs.handleSchedules()` local API method.
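A minimal sketch of handling schedules yourself with the Local API (assuming a `nightly` queue and the standard `@payload-config` import alias from a Payload + Next.js setup):

```ts
import { getPayload } from 'payload'
import config from '@payload-config'

const payload = await getPayload({ config })

// enqueue any scheduled Jobs that are due...
await payload.jobs.handleSchedules()
// ...then run whatever is waiting in the queue
await payload.jobs.run({ queue: 'nightly' })
```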
## Defining schedules on Tasks or Workflows
Schedules are defined using the `schedule` property:
```ts
export type ScheduleConfig = {
  cron: string // required, supports seconds precision
  queue: string // required, the queue to push Jobs onto
  hooks?: {
    // Optional hooks to customize scheduling behavior
    beforeSchedule?: BeforeScheduleFn
    afterSchedule?: AfterScheduleFn
  }
}
```
### Example schedule
The following example demonstrates scheduling a Job to enqueue every day at midnight:
```ts
import type { TaskConfig } from 'payload'

export const SendDigestEmail: TaskConfig<'SendDigestEmail'> = {
  slug: 'SendDigestEmail',
  schedule: [
    {
      cron: '0 0 * * *', // Every day at midnight
      queue: 'nightly',
    },
  ],
  handler: async () => {
    await sendDigestToAllUsers()
  },
}
```
This configuration only queues the Job - it does not execute it immediately. To actually run the queued Job, configure `autoRun` in your Payload config (note that `autoRun` should **not** be used on serverless platforms):
```ts
export default buildConfig({
  jobs: {
    autoRun: [
      {
        cron: '* * * * *', // Runs every minute
        queue: 'nightly',
      },
    ],
    tasks: [SendDigestEmail],
  },
})
```
That way, Payload's scheduler will automatically enqueue the job into the `nightly` queue every day at midnight. The autorun configuration will check the `nightly` queue every minute and execute any Jobs that are due to run.
## Scheduling lifecycle
Here's how the scheduling process operates in detail:
1. **Cron evaluation**: Payload (or your external trigger in `manual` mode) identifies which schedules are due to run. To do that, it reads the `payload-jobs-stats` global, which contains information about the last time each scheduled task or workflow was run.
2. **BeforeSchedule hook**:
   - The default `beforeSchedule` hook checks whether an active or runnable job of the same type, queued by the scheduling system, already exists. If one does, it skips scheduling a new one.
   - You can provide your own `beforeSchedule` hook to customize this behavior. For example, you might want to allow multiple overlapping Jobs or dynamically set the Job input data.
3. **Enqueue Job**: Payload queues up a new job. This job will have `waitUntil` set to the next scheduled time based on the cron expression.
4. **AfterSchedule hook**:
   - The default `afterSchedule` hook updates the `payload-jobs-stats` global metadata with the last scheduled time for the Job.
   - You can provide your own `afterSchedule` hook for custom logging, metrics, or other post-scheduling actions.
## Customizing concurrency and input (Advanced)
You may want more control over concurrency or dynamically set Job inputs at scheduling time. For instance, allowing multiple overlapping Jobs to be scheduled, even if a previously scheduled job has not completed yet, or preparing dynamic data to pass to your Job handler:
```ts
import { countRunnableOrActiveJobsForQueue } from 'payload'

schedule: [
  {
    cron: '* * * * *', // every minute
    queue: 'reports',
    hooks: {
      beforeSchedule: async ({ queueable, req }) => {
        const runnableOrActiveJobsForQueue =
          await countRunnableOrActiveJobsForQueue({
            queue: queueable.scheduleConfig.queue,
            req,
            taskSlug: queueable.taskConfig?.slug,
            workflowSlug: queueable.workflowConfig?.slug,
            onlyScheduled: true,
          })
        // Allow up to 3 simultaneous scheduled jobs and set dynamic input
        return {
          shouldSchedule: runnableOrActiveJobsForQueue < 3,
          input: { text: 'Hi there' },
        }
      },
    },
  },
]
```
This allows fine-grained control over how many Jobs can run simultaneously and provides dynamically computed input values each time a Job is scheduled.
## Scheduling in serverless environments
On serverless platforms, scheduling must be triggered externally, since Payload does not automatically run cron schedules in ephemeral environments. There are a few ways to trigger scheduling manually:
- **Invoke via Payload's API:** `payload.jobs.handleSchedules()`
- **Use the REST API endpoint:** `/api/payload-jobs/handle-schedules`
- **Use the run endpoint, which also handles scheduling by default:** `GET /api/payload-jobs/run`
For example, on Vercel, you can set up a Vercel Cron to regularly trigger scheduling:
- **Vercel Cron Job:** Configure Vercel Cron to periodically call `GET /api/payload-jobs/handle-schedules`. If you would like to auto-run your scheduled jobs as well, you can use the `GET /api/payload-jobs/run` endpoint.
Once Jobs are queued, their execution depends entirely on your configured runner setup (e.g., autorun, or manual invocation).
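For instance, a minimal Next.js route handler that a Vercel Cron could call might look like this (a sketch; the route path and the `CRON_SECRET` check are assumptions, but you should protect the endpoint somehow):

```ts
// app/api/trigger-schedules/route.ts (hypothetical path)
import { getPayload } from 'payload'
import config from '@payload-config'

export async function GET(request: Request) {
  // assumption: protect the endpoint with a shared secret
  if (
    request.headers.get('authorization') !== `Bearer ${process.env.CRON_SECRET}`
  ) {
    return new Response('Unauthorized', { status: 401 })
  }
  const payload = await getPayload({ config })
  await payload.jobs.handleSchedules()
  return Response.json({ ok: true })
}
```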


@@ -51,6 +51,7 @@ export default async function Page() {
collection: 'pages',
id: '123',
draft: true,
trash: true, // add this if trash is enabled on your collection and you want to preview trashed documents
})
return (


@@ -194,6 +194,27 @@ const result = await payload.count({
})
```
### FindDistinct#collection-find-distinct
```js
// Result will be an object with:
// {
//   values: ['value-1', 'value-2'], // array of distinct values
//   field: 'title', // the field
//   totalDocs: 10, // count of distinct values satisfying the query
//   perPage: 10, // count of distinct values per page (based on the provided limit)
// }
const result = await payload.findDistinct({
  collection: 'posts', // required
  locale: 'en',
  where: {}, // pass a `where` query here
  user: dummyUser,
  overrideAccess: false,
  field: 'title',
  sort: 'title',
})
```
### Update by ID#collection-update-by-id
```js


@@ -58,7 +58,7 @@ To learn more, see the [Custom Components Performance](../admin/custom-component
### Block references
Use [Block References](../fields/blocks#block-references) to share the same block across multiple fields without bloating the config. This will reduce the number of fields to traverse when processing permissions, etc. and can can significantly reduce the amount of data sent from the server to the client in the Admin Panel.
Use [Block References](../fields/blocks#block-references) to share the same block across multiple fields without bloating the config. This will reduce the number of fields to traverse when processing permissions, etc. and can significantly reduce the amount of data sent from the server to the client in the Admin Panel.
For example, if you have a block that is used in multiple fields, you can define it once and reference it in each field.
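A minimal sketch of that pattern, assuming Payload's block references API, where blocks registered once at the config level are referenced by slug:

```ts
import { buildConfig } from 'payload'
import type { Block } from 'payload'

const Hero: Block = {
  slug: 'hero',
  fields: [{ name: 'heading', type: 'text' }],
}

export default buildConfig({
  blocks: [Hero], // defined once at the config level
  collections: [
    {
      slug: 'pages',
      fields: [
        // each field references the shared block by slug
        { name: 'layout', type: 'blocks', blockReferences: ['hero'], blocks: [] },
        { name: 'aside', type: 'blocks', blockReferences: ['hero'], blocks: [] },
      ],
    },
  ],
})
```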


@@ -1,7 +1,7 @@
---
title: Form Builder Plugin
label: Form Builder
order: 40
order: 30
desc: Easily build and manage forms from the Admin Panel. Send dynamic, personalized emails and even accept and process payments.
keywords: plugins, plugin, form, forms, form builder
---


@@ -0,0 +1,155 @@
---
title: Import Export Plugin
label: Import Export
order: 40
desc: Add Import and export functionality to create CSV and JSON data exports
keywords: plugins, plugin, import, export, csv, JSON, data, ETL, download
---
![https://www.npmjs.com/package/@payloadcms/plugin-import-export](https://img.shields.io/npm/v/@payloadcms/plugin-import-export)
<Banner type="warning">
**Note**: This plugin is in **beta**, as some aspects of it may change in
minor releases. It is under development and currently only supports exporting
collection data.
</Banner>
This plugin adds features that give admin users the ability to download export data directly, or to create it as an upload collection so it can be imported back into a project.
## Core Features
- Export data as CSV or JSON format via the admin UI
- Download the export directly through the browser
- Create a file upload of the export data
- Use the jobs queue for large exports
- (Coming soon) Import collection data
## Installation
Install the plugin using any JavaScript package manager like [pnpm](https://pnpm.io), [npm](https://npmjs.com), or [Yarn](https://yarnpkg.com):
```bash
pnpm add @payloadcms/plugin-import-export
```
## Basic Usage
In the `plugins` array of your [Payload Config](https://payloadcms.com/docs/configuration/overview), call the plugin with [options](#options):
```ts
import { buildConfig } from 'payload'
import { importExportPlugin } from '@payloadcms/plugin-import-export'

const config = buildConfig({
  collections: [Pages, Media],
  plugins: [
    importExportPlugin({
      collections: ['users', 'pages'],
      // see below for a list of available options
    }),
  ],
})

export default config
```
## Options
| Property | Type | Description |
| -------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `collections` | string[] | Collections to include Import/Export controls in. Defaults to all collections. |
| `debug` | boolean | If true, enables debug logging. |
| `disableDownload` | boolean | If true, disables the download button in the export preview UI. |
| `disableJobsQueue` | boolean | If true, forces the export to run synchronously. |
| `disableSave` | boolean | If true, disables the save button in the export preview UI. |
| `format` | string | Forces a specific export format (`csv` or `json`), hides the format dropdown, and prevents the user from choosing the export format. |
| `overrideExportCollection` | function | Function to override the default export collection; takes the default export collection and allows you to modify and return it. |
## Field Options
In addition to the above plugin configuration options, you can granularly set the following field level options using the `custom['plugin-import-export']` properties in any of your collections.
| Property | Type | Description |
| ---------- | -------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `disabled` | boolean | When `true` the field is completely excluded from the import-export plugin. |
| `toCSV` | function | Custom function used to modify the outgoing csv data by manipulating the data, siblingData or by returning the desired value. |
### Customizing the output of CSV data
To manipulate the data that a field exports, you can add a custom `toCSV` function. This allows you to modify the outgoing CSV data by manipulating the `row` or sibling data, or by returning the desired value.
The `toCSV` function receives a single argument: an object with the following properties:
| Property | Type | Description |
| ------------ | ------- | ----------------------------------------------------------------- |
| `columnName` | string | The CSV column name given to the field. |
| `doc` | object | The top level document |
| `row` | object | The object data that can be manipulated to assign data to the CSV |
| `siblingDoc` | object | The document data at the level where it belongs |
| `value` | unknown | The data for the field. |
Example function:
```ts
import type { CollectionConfig } from 'payload'

const pages: CollectionConfig = {
  slug: 'pages',
  fields: [
    {
      name: 'author',
      type: 'relationship',
      relationTo: 'users',
      custom: {
        'plugin-import-export': {
          toCSV: ({ value, columnName, row }) => {
            // add both `author_id` and `author_email` to the csv export
            if (
              value &&
              typeof value === 'object' &&
              'id' in value &&
              'email' in value
            ) {
              row[`${columnName}_id`] = (value as { id: number | string }).id
              row[`${columnName}_email`] = (value as { email: string }).email
            }
          },
        },
      },
    },
  ],
}
```
## Exporting Data
There are four ways the plugin allows documents to be exported; the first two are available in the admin UI from the list view of a collection:
1. Direct download - a `POST` to `/api/exports/download` streams the response as a file download
2. File storage - Goes to the `exports` collection as an uploads enabled collection
3. Local API - a create call to the export collection: `payload.create({ collection: 'exports', ...parameters })`
4. Jobs Queue - `payload.jobs.queue({ task: 'createCollectionExport', input: parameters })`
By default, a user can use the Export drawer either to persist an export by choosing `Save`, or to stream a downloadable file directly, without persisting it, by using the `Download` button. Either option can be disabled to provide the export experience you desire for your use case.
The UI for creating exports provides options so that users can be selective about which documents to include and also which columns or fields to include.
It is necessary to add access control to the uploads collection configuration using the `overrideExportCollection` function if you have enabled this plugin on collections with data that some authenticated users should not have access to.
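A sketch of what that could look like (the `roles` check is a placeholder for your own access logic, not a field the plugin provides):

```ts
importExportPlugin({
  collections: ['pages'],
  overrideExportCollection: (collection) => ({
    ...collection,
    access: {
      ...collection.access,
      // hypothetical rule: only admins may read stored exports
      read: ({ req }) => Boolean(req.user?.roles?.includes('admin')),
    },
  }),
})
```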
<Banner type="warning">
**Note**: Users who have read access to the upload collection may be able to
download data that is normally not readable due to [access
control](../access-control/overview).
</Banner>
The following parameters are used by the export function to handle requests:
| Property | Type | Description |
| ---------------- | -------- | ----------------------------------------------------------------------------------------------------------------- |
| `format` | text | Either `csv` or `json` to determine the shape of data exported |
| `limit` | number | The max number of documents to return |
| `sort` | select | The field to use for ordering documents |
| `locale` | string | The locale code to query documents or `all` |
| `draft` | string | Either `yes` or `no` to return documents with their newest drafts for drafts enabled collections |
| `fields` | string[] | Which collection fields are used to create the export, defaults to all |
| `collectionSlug` | string | The slug to query against |
| `where` | object | The WhereObject used to query documents to export. This is set by making selections or filters from the list view |
| `filename` | text | What to call the export being created |
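For example, queuing an export with these parameters might look like this (a sketch; the field names and `where` clause are illustrative):

```ts
// a sketch: queue a CSV export of published pages via the jobs queue
await payload.jobs.queue({
  task: 'createCollectionExport',
  input: {
    collectionSlug: 'pages',
    format: 'csv',
    limit: 1000,
    sort: 'title',
    locale: 'all',
    draft: 'no',
    fields: ['title', 'slug'],
    where: { _status: { equals: 'published' } },
    filename: 'pages-export',
  },
})
```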


@@ -1,7 +1,7 @@
---
title: Multi-Tenant Plugin
label: Multi-Tenant
order: 40
order: 50
desc: Scaffolds multi-tenancy for your Payload application
keywords: plugins, multi-tenant, multi-tenancy, plugin, payload, cms, seo, indexing, search, search engine
---
@@ -54,7 +54,8 @@ The plugin accepts an object with the following properties:
```ts
type MultiTenantPluginConfig<ConfigTypes = unknown> = {
/**
* After a tenant is deleted, the plugin will attempt to clean up related documents
* After a tenant is deleted, the plugin will attempt
* to clean up related documents
* - removing documents with the tenant ID
* - removing the tenant from users
*
@@ -67,19 +68,22 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
collections: {
[key in CollectionSlug]?: {
/**
* Set to `true` if you want the collection to behave as a global
* Set to `true` if you want the collection to
* behave as a global
*
* @default false
*/
isGlobal?: boolean
/**
* Set to `false` if you want to manually apply the baseListFilter
* Set to `false` if you want to manually apply
* the baseListFilter
*
* @default true
*/
useBaseListFilter?: boolean
/**
* Set to `false` if you want to handle collection access manually without the multi-tenant constraints applied
* Set to `false` if you want to handle collection access
* manually without the multi-tenant constraints applied
*
* @default true
*/
@@ -88,7 +92,8 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
}
/**
* Enables debug mode
* - Makes the tenant field visible in the admin UI within applicable collections
* - Makes the tenant field visible in the
* admin UI within applicable collections
*
* @default false
*/
@@ -100,22 +105,27 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
*/
enabled?: boolean
/**
* Field configuration for the field added to all tenant enabled collections
* Field configuration for the field added
* to all tenant enabled collections
*/
tenantField?: {
access?: RelationshipField['access']
/**
* The name of the field added to all tenant enabled collections
* The name of the field added to all tenant
* enabled collections
*
* @default 'tenant'
*/
name?: string
}
/**
* Field configuration for the field added to the users collection
* Field configuration for the field added
* to the users collection
*
* If `includeDefaultField` is `false`, you must include the field on your users collection manually
* This is useful if you want to customize the field or place the field in a specific location
* If `includeDefaultField` is `false`, you must
* include the field on your users collection manually
* This is useful if you want to customize the field
* or place the field in a specific location
*/
tenantsArrayField?:
| {
@@ -136,7 +146,8 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
*/
arrayTenantFieldName?: string
/**
* When `includeDefaultField` is `true`, the field will be added to the users collection automatically
* When `includeDefaultField` is `true`, the field will
* be added to the users collection automatically
*/
includeDefaultField?: true
/**
@@ -153,7 +164,8 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
arrayFieldName?: string
arrayTenantFieldName?: string
/**
* When `includeDefaultField` is `false`, you must include the field on your users collection manually
* When `includeDefaultField` is `false`, you must
* include the field on your users collection manually
*/
includeDefaultField?: false
rowFields?: never
@@ -162,7 +174,8 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
/**
* Customize tenant selector label
*
* Either a string or an object where the keys are i18n codes and the values are the string labels
* Either a string or an object where the keys are i18n
* codes and the values are the string labels
*/
tenantSelectorLabel?:
| Partial<{
@@ -176,7 +189,8 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
*/
tenantsSlug?: string
/**
* Function that determines if a user has access to _all_ tenants
* Function that determines if a user has access
* to _all_ tenants
*
* Useful for super-admin type users
*/
@@ -184,15 +198,18 @@ type MultiTenantPluginConfig<ConfigTypes = unknown> = {
user: ConfigTypes extends { user: unknown } ? ConfigTypes['user'] : User,
) => boolean
/**
* Opt out of adding access constraints to the tenants collection
* Opt out of adding access constraints to
* the tenants collection
*/
useTenantsCollectionAccess?: boolean
/**
* Opt out including the baseListFilter to filter tenants by selected tenant
* Opt out including the baseListFilter to filter
* tenants by selected tenant
*/
useTenantsListFilter?: boolean
/**
* Opt out including the baseListFilter to filter users by selected tenant
* Opt out including the baseListFilter to filter
* users by selected tenant
*/
useUsersTenantFilter?: boolean
}
@@ -212,15 +229,15 @@ const config = buildConfig({
{
slug: 'tenants',
admin: {
useAsTitle: 'name'
}
useAsTitle: 'name',
},
fields: [
// remember, you own these fields
// these are merely suggestions/examples
{
name: 'name',
type: 'text',
required: true,
name: 'name',
type: 'text',
required: true,
},
{
name: 'slug',
@@ -231,7 +248,7 @@ const config = buildConfig({
name: 'domain',
type: 'text',
required: true,
}
},
],
},
],
@@ -241,7 +258,7 @@ const config = buildConfig({
pages: {},
navigation: {
isGlobal: true,
}
},
},
}),
],
@@ -327,14 +344,16 @@ type ContextType = {
/**
* Prevents a refresh when the tenant is changed
*
* If not switching tenants while viewing a "global", set to true
* If not switching tenants while viewing a "global",
* set to true
*/
setPreventRefreshOnChange: React.Dispatch<React.SetStateAction<boolean>>
/**
* Sets the selected tenant ID
*
* @param args.id - The ID of the tenant to select
* @param args.refresh - Whether to refresh the page after changing the tenant
* @param args.refresh - Whether to refresh the page
* after changing the tenant
*/
setTenant: (args: {
id: number | string | undefined


@@ -1,7 +1,7 @@
---
title: Nested Docs Plugin
label: Nested Docs
order: 40
order: 60
desc: Nested documents in a parent, child, and sibling relationship.
keywords: plugins, nested, documents, parent, child, sibling, relationship
---


@@ -55,6 +55,7 @@ Payload maintains a set of Official Plugins that solve for some of the common us
- [Sentry](./sentry)
- [SEO](./seo)
- [Stripe](./stripe)
- [Import/Export](./import-export)
You can also [build your own plugin](./build-your-own) to easily extend Payload's functionality in some other way. Once your plugin is ready, consider [sharing it with the community](#community-plugins).


@@ -1,7 +1,7 @@
---
title: Redirects Plugin
label: Redirects
order: 40
order: 70
desc: Automatically create redirects for your Payload application
keywords: plugins, redirects, redirect, plugin, payload, cms, seo, indexing, search, search engine
---


@@ -1,7 +1,7 @@
---
title: Search Plugin
label: Search
order: 40
order: 80
desc: Generates records of your documents that are extremely fast to search on.
keywords: plugins, search, search plugin, search engine, search index, search results, search bar, search box, search field, search form, search input
---


@@ -1,7 +1,7 @@
---
title: Sentry Plugin
label: Sentry
order: 40
order: 90
desc: Integrate Sentry error tracking into your Payload application
keywords: plugins, sentry, error, tracking, monitoring, logging, bug, reporting, performance
---


@@ -2,7 +2,7 @@
description: Manage SEO metadata from your Payload admin
keywords: plugins, seo, meta, search, engine, ranking, google
label: SEO
order: 30
order: 100
title: SEO Plugin
---


@@ -1,7 +1,7 @@
---
title: Stripe Plugin
label: Stripe
order: 40
order: 110
desc: Easily accept payments with Stripe
keywords: plugins, stripe, payments, ecommerce
---


@@ -24,16 +24,6 @@ Payload can be deployed _anywhere that Next.js can run_ - including Vercel, Netl
But it's important to remember that most Payload projects will also need a database, file storage, an email provider, and a CDN. Make sure you have all of the requirements that your project needs, no matter what deployment platform you choose.
Often, the easiest and fastest way to deploy Payload is to use [Payload Cloud](https://payloadcms.com/new) — where you get everything you need out of the box, including:
1. A MongoDB Atlas database
1. S3 file storage
1. Resend email service
1. Cloudflare CDN
1. Blue / green deployments
1. Logs
1. And more
## Basics
Payload runs fully in Next.js, so the [Next.js build process](https://nextjs.org/docs/app/building-your-application/deploying) is used for building Payload. If you've used `create-payload-app` to create your project, executing the `build`


@@ -474,11 +474,15 @@ const MyNodeComponent = React.lazy(() =>
)
/**
* This node is a DecoratorNode. DecoratorNodes allow you to render React components in the editor.
* This node is a DecoratorNode. DecoratorNodes allow
* you to render React components in the editor.
*
* They need both createDom and decorate functions. createDom => outside of the html. decorate => React Component inside of the html.
* They need both createDom and decorate functions.
* createDom => outside of the html.
* decorate => React Component inside of the html.
*
* If we used DecoratorBlockNode instead, we would only need a decorate method
* If we used DecoratorBlockNode instead,
* we would only need a decorate method
*/
export class MyNode extends DecoratorNode<React.ReactElement> {
static clone(node: MyNode): MyNode {
@@ -490,9 +494,11 @@ export class MyNode extends DecoratorNode<React.ReactElement> {
}
/**
* Defines what happens if you copy a div element from another page and paste it into the lexical editor
* Defines what happens if you copy a div element
* from another page and paste it into the lexical editor
*
* This also determines the behavior of lexical's internal HTML -> Lexical converter
* This also determines the behavior of lexical's
* internal HTML -> Lexical converter
*/
static importDOM(): DOMConversionMap | null {
return {
@@ -504,14 +510,18 @@ export class MyNode extends DecoratorNode<React.ReactElement> {
}
/**
* The data for this node is stored serialized as JSON. This is the "load function" of that node: it takes the saved data and converts it into a node.
* The data for this node is stored serialized as JSON.
* This is the "load function" of that node: it takes
* the saved data and converts it into a node.
*/
static importJSON(serializedNode: SerializedMyNode): MyNode {
return $createMyNode()
}
/**
* Determines how the hr element is rendered in the lexical editor. This is only the "initial" / "outer" HTML element.
* Determines how the hr element is rendered in the
* lexical editor. This is only the "initial" / "outer"
* HTML element.
*/
createDOM(config: EditorConfig): HTMLElement {
const element = document.createElement('div')
@@ -519,22 +529,28 @@ export class MyNode extends DecoratorNode<React.ReactElement> {
}
/**
* Allows you to render a React component within whatever createDOM returns.
* Allows you to render a React component within
* whatever createDOM returns.
*/
decorate(): React.ReactElement {
return <MyNodeComponent nodeKey={this.__key} />
}
/**
* Opposite of importDOM, this function defines what happens when you copy a div element from the lexical editor and paste it into another page.
* Opposite of importDOM, this function defines what
* happens when you copy a div element from the lexical
* editor and paste it into another page.
*
* This also determines the behavior of lexical's internal Lexical -> HTML converter
* This also determines the behavior of lexical's
* internal Lexical -> HTML converter
*/
exportDOM(): DOMExportOutput {
return { element: document.createElement('div') }
}
/**
* Opposite of importJSON. This determines what data is saved in the database / in the lexical editor state.
* Opposite of importJSON. This determines what
* data is saved in the database / in the lexical
* editor state.
*/
exportJSON(): SerializedLexicalNode {
return {
@@ -556,18 +572,23 @@ export class MyNode extends DecoratorNode<React.ReactElement> {
}
}
// This is used in the importDOM method. Totally optional if you do not want your node to be created automatically when copy & pasting certain dom elements
// into your editor.
// This is used in the importDOM method. Totally optional
// if you do not want your node to be created automatically
// when copy & pasting certain dom elements into your editor.
function $yourConversionMethod(): DOMConversionOutput {
return { node: $createMyNode() }
}
// This is a utility method to create a new MyNode. Utility methods prefixed with $ make it explicit that this should only be used within lexical
// This is a utility method to create a new MyNode.
// Utility methods prefixed with $ make it explicit
// that this should only be used within lexical
export function $createMyNode(): MyNode {
return $applyNodeReplacement(new MyNode())
}
// This is just a utility method you can use to check if a node is a MyNode. This also ensures correct typing.
// This is just a utility method you can use
// to check if a node is a MyNode. This also
// ensures correct typing.
export function $isMyNode(
node: LexicalNode | null | undefined,
): node is MyNode {
@@ -626,10 +647,12 @@ export const INSERT_MYNODE_COMMAND: LexicalCommand<void> = createCommand(
)
/**
* Plugin which registers a lexical command to insert a new MyNode into the editor
* Plugin which registers a lexical command to
* insert a new MyNode into the editor
*/
export const MyNodePlugin: PluginComponent = () => {
// The useLexicalComposerContext hook can be used to access the lexical editor instance
// The useLexicalComposerContext hook can be used
// to access the lexical editor instance
const [editor] = useLexicalComposerContext()
useEffect(() => {

View File

@@ -124,12 +124,15 @@ HeadingFeature({
```ts
type IndentFeatureProps = {
/**
* The nodes that should not be indented. "type" property of the nodes you don't want to be indented.
* These can be: "paragraph", "heading", "listitem", "quote" or other indentable nodes if they exist.
* The nodes that should not be indented. "type"
* property of the nodes you don't want to be indented.
* These can be: "paragraph", "heading", "listitem",
* "quote" or other indentable nodes if they exist.
*/
disabledNodes?: string[]
/**
* If true, pressing Tab in the middle of a block such as a paragraph or heading will not insert a tabNode.
* If true, pressing Tab in the middle of a block such
* as a paragraph or heading will not insert a tabNode.
* Instead, Tab will only be used for block-level indentation.
* @default false
*/
@@ -180,7 +183,8 @@ type LinkFeatureServerProps = {
*/
disableAutoLinks?: 'creationOnly' | true
/**
* A function or array defining additional fields for the link feature.
* A function or array defining additional
* fields for the link feature.
* These will be displayed in the link editor drawer.
*/
fields?:
@@ -235,7 +239,9 @@ LinkFeature({
```ts
type RelationshipFeatureProps = {
/**
* Sets a maximum population depth for this relationship, regardless of the remaining depth when the respective field is reached.
* Sets a maximum population depth for this relationship,
* regardless of the remaining depth when the respective
* field is reached.
*/
maxDepth?: number
} & ExclusiveRelationshipFeatureProps
@@ -274,7 +280,10 @@ type UploadFeatureProps = {
}
}
/**
* Sets a maximum population depth for this upload (not the fields for this upload), regardless of the remaining depth when the respective field is reached.
* Sets a maximum population depth for this upload
* (not the fields for this upload), regardless of
* the remaining depth when the respective field is
* reached.
*/
maxDepth?: number
}

docs/trash/overview.mdx Normal file
View File

@@ -0,0 +1,200 @@
---
title: Trash
label: Overview
order: 10
desc: Enable soft deletes for your collections to mark documents as deleted without permanently removing them.
keywords: trash, soft delete, deletedAt, recovery, restore
---
Trash (also known as soft delete) allows documents to be marked as deleted without being permanently removed. When enabled on a collection, deleted documents will receive a `deletedAt` timestamp, making it possible to restore them later, view them in a dedicated Trash view, or permanently delete them.
Soft delete is a safer way to manage content lifecycle, giving editors a chance to review and recover documents that may have been deleted by mistake.
<Banner type="warning">
**Note:** The Trash feature is currently in beta and may be subject to change
in minor version updates.
</Banner>
## Collection Configuration
To enable soft deleting for a collection, set the `trash` property to `true`:
```ts
import type { CollectionConfig } from 'payload'
export const Posts: CollectionConfig = {
slug: 'posts',
trash: true,
fields: [
{
name: 'title',
type: 'text',
},
// other fields...
],
}
```
When enabled, Payload automatically injects a `deletedAt` field into the collection's schema. This timestamp is set when a document is soft-deleted, and cleared when the document is restored.
## Admin Panel Behavior
Once `trash` is enabled, the Admin Panel provides a dedicated Trash view for each collection:
- A new route is added at `/collections/:collectionSlug/trash`
- The `Trash` view shows all documents that have a `deletedAt` timestamp
From the Trash view, you can:
- Use bulk actions to manage trashed documents:
- **Restore** to clear the `deletedAt` timestamp and return documents to their original state
- **Delete** to permanently remove selected documents
- **Empty Trash** to select and permanently delete all trashed documents at once
- Enter each document's **edit view**, just like in the main list view. While in the edit view of a trashed document:
- All fields are in a **read-only** state
- Standard document actions (e.g., Save, Publish, Restore Version) are hidden and disabled.
- The available actions are **Restore** and **Permanently Delete**.
- Access to the **API**, **Versions**, and **Preview** views is preserved.
When deleting a document from the main collection List View, Payload will soft-delete the document by default. A checkbox in the delete confirmation modal allows users to skip the trash and permanently delete instead.
## API Support
Soft deletes are fully supported across all Payload APIs: **Local**, **REST**, and **GraphQL**.
The following operations respect and support the `trash` functionality:
- `find`
- `findByID`
- `update`
- `updateByID`
- `delete`
- `deleteByID`
- `findVersions`
- `findVersionByID`
### Understanding `trash` Behavior
Passing `trash: true` to these operations will **include soft-deleted documents** in the query results.
To return _only_ soft-deleted documents, you must combine `trash: true` with a `where` clause that checks if `deletedAt` exists.
### Examples
#### Local API
Return all documents including trashed:
```ts
const result = await payload.find({
collection: 'posts',
trash: true,
})
```
Return only trashed documents:
```ts
const result = await payload.find({
collection: 'posts',
trash: true,
where: {
deletedAt: {
exists: true,
},
},
})
```
Return only non-trashed documents:
```ts
const result = await payload.find({
collection: 'posts',
trash: false,
})
```
#### REST
Return all documents including trashed:
```http
GET /api/posts?trash=true
```
Return only trashed documents:
```http
GET /api/posts?trash=true&where[deletedAt][exists]=true
```
Return only non-trashed documents:
```http
GET /api/posts?trash=false
```
#### GraphQL
Return all documents including trashed:
```graphql
query {
Posts(trash: true) {
docs {
id
deletedAt
}
}
}
```
Return only trashed documents:
```graphql
query {
Posts(
trash: true
where: { deletedAt: { exists: true } }
) {
docs {
id
deletedAt
}
}
}
```
Return only non-trashed documents:
```graphql
query {
Posts(trash: false) {
docs {
id
deletedAt
}
}
}
```
## Access Control
All trash-related actions (delete, permanent delete) respect the `delete` access control defined in your collection config.
This means:
- If a user is denied delete access, they cannot soft delete or permanently delete documents
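For example, a minimal sketch of restricting deletes to admins (the role check is an assumption, standing in for whatever access logic your project uses):
```ts
import type { CollectionConfig } from 'payload'
export const Posts: CollectionConfig = {
  slug: 'posts',
  trash: true,
  access: {
    // Denying `delete` blocks both soft deletes and permanent deletes
    delete: ({ req: { user } }) => Boolean(user?.roles?.includes('admin')),
  },
  fields: [],
}
```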
## Versions and Trash
When a document is soft-deleted:
- It can no longer have a version **restored** until it is first restored from trash
- Attempting to restore a version while the document is in trash will result in an error
- This ensures consistency between the current document state and its version history
However, versions are still fully **visible and accessible** from the **edit view** of a trashed document. You can view the full version history, but must restore the document itself before restoring any individual version.
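Restoring through the API mirrors the Admin Panel's **Restore** action: clear the `deletedAt` timestamp. A hedged Local API sketch (assuming, per the operations list above, that `trash: true` is required for `update` and `delete` to reach a trashed document; `postID` is a placeholder):
```ts
// Restore a trashed post by clearing its deletedAt timestamp
await payload.update({
  collection: 'posts',
  id: postID,
  trash: true,
  data: {
    deletedAt: null,
  },
})
// Permanently delete a trashed post
await payload.delete({
  collection: 'posts',
  id: postID,
  trash: true,
})
```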

View File

@@ -292,7 +292,8 @@ Reference any of the existing storage adapters for guidance on how this should b
```ts
export interface GeneratedAdapter {
/**
* Additional fields to be injected into the base collection and image sizes
* Additional fields to be injected into the base
* collection and image sizes
*/
fields?: Field[]
/**

View File

@@ -6,6 +6,8 @@ import { anyone } from './access/anyone'
import { checkRole } from './access/checkRole'
import { loginAfterCreate } from './hooks/loginAfterCreate'
import { protectRoles } from './hooks/protectRoles'
import { admins } from './access/admins'
import { adminsAndUser } from './access/adminsAndUser'
export const Users: CollectionConfig = {
slug: 'users',
@@ -32,6 +34,34 @@ export const Users: CollectionConfig = {
afterChange: [loginAfterCreate],
},
fields: [
{
name: 'email',
type: 'email',
required: true,
unique: true,
access: {
read: adminsAndUser,
update: adminsAndUser,
},
},
{
name: 'password',
type: 'password',
required: true,
admin: {
description: 'Leave blank to keep the current password.',
},
},
{
name: 'resetPasswordToken',
type: 'text',
hidden: true,
},
{
name: 'resetPasswordExpiration',
type: 'date',
hidden: true,
},
{
name: 'firstName',
type: 'text',
@@ -45,6 +75,11 @@ export const Users: CollectionConfig = {
type: 'select',
hasMany: true,
saveToJWT: true,
access: {
read: admins,
update: admins,
create: admins,
},
hooks: {
beforeChange: [protectRoles],
},

View File

@@ -1,6 +1,6 @@
{
"name": "payload-monorepo",
"version": "3.47.0",
"version": "3.49.0",
"private": true,
"type": "module",
"workspaces": [
@@ -19,6 +19,7 @@
"build:core": "turbo build --filter \"!@payloadcms/plugin-*\" --filter \"!@payloadcms/storage-*\" --filter \"!blank\" --filter \"!website\"",
"build:core:force": "pnpm clean:build && pnpm build:core --no-cache --force",
"build:create-payload-app": "turbo build --filter create-payload-app",
"build:db-d1-sqlite": "turbo build --filter \"@payloadcms/db-d1-sqlite\"",
"build:db-mongodb": "turbo build --filter \"@payloadcms/db-mongodb\"",
"build:db-postgres": "turbo build --filter \"@payloadcms/db-postgres\"",
"build:db-sqlite": "turbo build --filter \"@payloadcms/db-sqlite\"",
@@ -76,8 +77,6 @@
"dev:prod:memorydb": "cross-env NODE_OPTIONS=--no-deprecation tsx ./test/dev.ts --prod --start-memory-db",
"dev:vercel-postgres": "cross-env PAYLOAD_DATABASE=vercel-postgres pnpm runts ./test/dev.ts",
"devsafe": "node ./scripts/delete-recursively.js '**/.next' && pnpm dev",
"docker:postgres": "docker compose -f test/docker-compose.yml up -d postgres",
"docker:postgres:stop": "docker compose -f test/docker-compose.yml down postgres",
"docker:restart": "pnpm docker:stop --remove-orphans && pnpm docker:start",
"docker:start": "docker compose -f test/docker-compose.yml up -d",
"docker:stop": "docker compose -f test/docker-compose.yml down",
@@ -114,6 +113,7 @@
"test:e2e:prod:ci": "pnpm prepare-run-test-against-prod:ci && pnpm runts ./test/runE2E.ts --prod",
"test:e2e:prod:ci:noturbo": "pnpm prepare-run-test-against-prod:ci && pnpm runts ./test/runE2E.ts --prod --no-turbo",
"test:int": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:int:firestore": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 PAYLOAD_DATABASE=firestore DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:int:postgres": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 PAYLOAD_DATABASE=postgres DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:int:sqlite": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 PAYLOAD_DATABASE=sqlite DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:types": "tstyche",
@@ -133,12 +133,12 @@
"devDependencies": {
"@jest/globals": "29.7.0",
"@libsql/client": "0.14.0",
"@next/bundle-analyzer": "15.3.2",
"@next/bundle-analyzer": "15.4.4",
"@payloadcms/db-postgres": "workspace:*",
"@payloadcms/eslint-config": "workspace:*",
"@payloadcms/eslint-plugin": "workspace:*",
"@payloadcms/live-preview-react": "workspace:*",
"@playwright/test": "1.50.0",
"@playwright/test": "1.54.1",
"@sentry/nextjs": "^8.33.1",
"@sentry/node": "^8.33.1",
"@swc-node/register": "1.10.10",
@@ -148,8 +148,8 @@
"@types/jest": "29.5.12",
"@types/minimist": "1.2.5",
"@types/node": "22.15.30",
"@types/react": "19.1.0",
"@types/react-dom": "19.1.2",
"@types/react": "19.1.8",
"@types/react-dom": "19.1.6",
"@types/shelljs": "0.8.15",
"chalk": "^4.1.2",
"comment-json": "^4.2.3",
@@ -169,12 +169,12 @@
"lint-staged": "15.2.7",
"minimist": "1.2.8",
"mongodb-memory-server": "10.1.4",
"next": "15.3.2",
"next": "15.4.4",
"open": "^10.1.0",
"p-limit": "^5.0.0",
"pg": "8.16.3",
"playwright": "1.50.0",
"playwright-core": "1.50.0",
"playwright": "1.54.1",
"playwright-core": "1.54.1",
"prettier": "3.5.3",
"react": "19.1.0",
"react-dom": "19.1.0",

View File

@@ -1,6 +1,6 @@
{
"name": "@payloadcms/admin-bar",
"version": "3.47.0",
"version": "3.49.0",
"description": "An admin bar for React apps using Payload",
"homepage": "https://payloadcms.com",
"repository": {
@@ -42,8 +42,8 @@
},
"devDependencies": {
"@payloadcms/eslint-config": "workspace:*",
"@types/react": "19.1.0",
"@types/react-dom": "19.1.2",
"@types/react": "19.1.8",
"@types/react-dom": "19.1.6",
"payload": "workspace:*"
},
"peerDependencies": {

View File

@@ -1,6 +1,6 @@
{
"name": "create-payload-app",
"version": "3.47.0",
"version": "3.49.0",
"homepage": "https://payloadcms.com",
"repository": {
"type": "git",

packages/db-d1-sqlite/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
/migrations

View File

@@ -0,0 +1,10 @@
.tmp
**/.git
**/.hg
**/.pnp.*
**/.svn
**/.yarn/**
**/build
**/dist/**
**/node_modules
**/temp

View File

@@ -0,0 +1,15 @@
{
"$schema": "https://json.schemastore.org/swcrc",
"sourceMaps": true,
"jsc": {
"target": "esnext",
"parser": {
"syntax": "typescript",
"tsx": true,
"dts": true
}
},
"module": {
"type": "es6"
}
}

View File

@@ -0,0 +1,22 @@
MIT License
Copyright (c) 2018-2025 Payload CMS, Inc. <info@payloadcms.com>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@@ -0,0 +1,30 @@
# Payload D1 SQLite Adapter
Official Cloudflare D1 SQLite adapter for [Payload](https://payloadcms.com).
- [Main Repository](https://github.com/payloadcms/payload)
- [Payload Docs](https://payloadcms.com/docs)
## Installation
```bash
npm install @payloadcms/db-d1-sqlite
```
## Usage
```ts
import { buildConfig } from 'payload'
import { sqliteD1Adapter } from '@payloadcms/db-d1-sqlite'
export default buildConfig({
  db: sqliteD1Adapter({
    // `env.DB` is the D1 DatabaseBinding exposed by your Cloudflare Workers environment
    binding: env.DB,
  }),
  // ...rest of config
})
```
More detailed usage can be found in the [Payload Docs](https://payloadcms.com/docs/configuration/overview).

View File

@@ -0,0 +1,38 @@
import * as esbuild from 'esbuild'
import fs from 'fs'
import path from 'path'
import { fileURLToPath } from 'url'
const filename = fileURLToPath(import.meta.url)
const dirname = path.dirname(filename)
import { commonjs } from '@hyrious/esbuild-plugin-commonjs'
async function build() {
const resultServer = await esbuild.build({
entryPoints: ['src/index.ts'],
bundle: true,
platform: 'node',
format: 'esm',
outfile: 'dist/index.js',
splitting: false,
external: [
'*.scss',
'*.css',
'drizzle-kit',
'libsql',
'pg',
'@payloadcms/translations',
'@payloadcms/drizzle',
'payload',
'payload/*',
],
minify: true,
metafile: true,
tsconfig: path.resolve(dirname, './tsconfig.json'),
plugins: [commonjs()],
sourcemap: true,
})
console.log('db-d1-sqlite bundled successfully')
fs.writeFileSync('meta_server.json', JSON.stringify(resultServer.metafile))
}
await build()

View File

@@ -0,0 +1,145 @@
{
"name": "@payloadcms/db-d1-sqlite",
"version": "3.49.0",
"description": "The officially supported D1 SQLite database adapter for Payload",
"homepage": "https://payloadcms.com",
"repository": {
"type": "git",
"url": "https://github.com/payloadcms/payload.git",
"directory": "packages/db-d1-sqlite"
},
"license": "MIT",
"author": "Payload <dev@payloadcms.com> (https://payloadcms.com)",
"maintainers": [
{
"name": "Payload",
"email": "info@payloadcms.com",
"url": "https://payloadcms.com"
}
],
"type": "module",
"exports": {
".": {
"import": "./src/index.ts",
"require": "./src/index.ts",
"types": "./src/index.ts"
},
"./types": {
"import": "./src/exports/types-deprecated.ts",
"require": "./src/exports/types-deprecated.ts",
"types": "./src/exports/types-deprecated.ts"
},
"./migration-utils": {
"import": "./src/exports/migration-utils.ts",
"require": "./src/exports/migration-utils.ts",
"types": "./src/exports/migration-utils.ts"
},
"./drizzle": {
"import": "./src/drizzle-proxy/index.ts",
"types": "./src/drizzle-proxy/index.ts",
"default": "./src/drizzle-proxy/index.ts"
},
"./drizzle/sqlite-core": {
"import": "./src/drizzle-proxy/sqlite-core.ts",
"types": "./src/drizzle-proxy/sqlite-core.ts",
"default": "./src/drizzle-proxy/sqlite-core.ts"
},
"./drizzle/d1": {
"import": "./src/drizzle-proxy/d1.ts",
"types": "./src/drizzle-proxy/d1.ts",
"default": "./src/drizzle-proxy/d1.ts"
},
"./drizzle/relations": {
"import": "./src/drizzle-proxy/relations.ts",
"types": "./src/drizzle-proxy/relations.ts",
"default": "./src/drizzle-proxy/relations.ts"
},
"./drizzle/miniflare": {
"import": "./src/drizzle-proxy/miniflare.ts",
"types": "./src/drizzle-proxy/miniflare.ts",
"default": "./src/drizzle-proxy/miniflare.ts"
}
},
"main": "./src/index.ts",
"types": "./src/index.ts",
"files": [
"dist",
"mock.js"
],
"scripts": {
"build": "pnpm build:swc && pnpm build:types",
"build:swc": "swc ./src -d ./dist --config-file .swcrc --strip-leading-paths",
"build:types": "tsc --emitDeclarationOnly --outDir dist",
"clean": "rimraf -g {dist,*.tsbuildinfo}",
"lint": "eslint .",
"lint:fix": "eslint . --fix",
"prepack": "pnpm clean && pnpm turbo build",
"prepublishOnly": "pnpm clean && pnpm turbo build"
},
"dependencies": {
"@miniflare/d1": "2.14.4",
"@payloadcms/drizzle": "workspace:*",
"console-table-printer": "2.12.1",
"drizzle-kit": "0.28.0",
"drizzle-orm": "0.36.1",
"prompts": "2.4.2",
"to-snake-case": "1.0.0",
"uuid": "9.0.0"
},
"devDependencies": {
"@payloadcms/eslint-config": "workspace:*",
"@types/pg": "8.10.2",
"@types/to-snake-case": "1.0.0",
"@types/uuid": "10.0.0",
"payload": "workspace:*"
},
"peerDependencies": {
"payload": "workspace:*"
},
"publishConfig": {
"exports": {
".": {
"import": "./dist/index.js",
"require": "./dist/index.js",
"types": "./dist/index.d.ts"
},
"./types": {
"import": "./dist/exports/types-deprecated.js",
"require": "./dist/exports/types-deprecated.js",
"types": "./dist/exports/types-deprecated.d.ts"
},
"./migration-utils": {
"import": "./dist/exports/migration-utils.js",
"require": "./dist/exports/migration-utils.js",
"types": "./dist/exports/migration-utils.d.ts"
},
"./drizzle": {
"import": "./dist/drizzle-proxy/index.js",
"types": "./dist/drizzle-proxy/index.d.ts",
"default": "./dist/drizzle-proxy/index.js"
},
"./drizzle/sqlite-core": {
"import": "./dist/drizzle-proxy/sqlite-core.js",
"types": "./dist/drizzle-proxy/sqlite-core.d.ts",
"default": "./dist/drizzle-proxy/sqlite-core.js"
},
"./drizzle/d1": {
"import": "./dist/drizzle-proxy/d1.js",
"types": "./dist/drizzle-proxy/d1.d.ts",
"default": "./dist/drizzle-proxy/d1.js"
},
"./drizzle/relations": {
"import": "./dist/drizzle-proxy/relations.js",
"types": "./dist/drizzle-proxy/relations.d.ts",
"default": "./dist/drizzle-proxy/relations.js"
},
"./drizzle/miniflare": {
"import": "./dist/drizzle-proxy/miniflare.js",
"types": "./dist/drizzle-proxy/miniflare.d.ts",
"default": "./dist/drizzle-proxy/miniflare.js"
}
},
"main": "./dist/index.js",
"types": "./dist/index.d.ts"
}
}

View File

@@ -0,0 +1,62 @@
import type { DrizzleAdapter } from '@payloadcms/drizzle/types'
import type { Connect, Migration } from 'payload'
import { D1Database } from '@miniflare/d1'
import { pushDevSchema } from '@payloadcms/drizzle'
import { drizzle } from 'drizzle-orm/d1'
import type { SQLiteD1Adapter } from './types.js'
export const connect: Connect = async function connect(
this: SQLiteD1Adapter,
options = {
hotReload: false,
},
) {
const { hotReload } = options
this.schema = {
...this.tables,
...this.relations,
}
try {
const logger = this.logger || false
this.drizzle = drizzle(new D1Database(this.binding), { logger })
this.client = this.drizzle.$client as any
if (!hotReload) {
if (process.env.PAYLOAD_DROP_DATABASE === 'true') {
this.payload.logger.info(`---- DROPPING TABLES ----`)
await this.dropDatabase({ adapter: this })
this.payload.logger.info('---- DROPPED TABLES ----')
}
}
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
this.payload.logger.error({ err, msg: `Error: cannot connect to SQLite: ${message}` })
if (typeof this.rejectInitializing === 'function') {
this.rejectInitializing()
}
console.error(err)
process.exit(1)
}
// Only push schema if not in production
if (
process.env.NODE_ENV !== 'production' &&
process.env.PAYLOAD_MIGRATING !== 'true' &&
this.push !== false
) {
await pushDevSchema(this as unknown as DrizzleAdapter)
}
if (typeof this.resolveInitializing === 'function') {
this.resolveInitializing()
}
if (process.env.NODE_ENV === 'production' && this.prodMigrations) {
await this.migrate({ migrations: this.prodMigrations as Migration[] })
}
}
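The production branch above runs `prodMigrations` on connect. A hedged sketch of supplying them (the `./migrations` index and the `env.DB` binding are assumptions, matching Payload's generated migrations folder and a Workers environment):
```ts
import { sqliteD1Adapter } from '@payloadcms/db-d1-sqlite'
// Hypothetical: the index generated by Payload's migrate:create
// command re-exports the migrations array
import { migrations } from './migrations'
export const db = sqliteD1Adapter({
  binding: env.DB,
  prodMigrations: migrations,
})
```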

View File

@@ -0,0 +1 @@
export * from 'drizzle-orm/d1'

View File

@@ -0,0 +1 @@
export * from 'drizzle-orm'

View File

@@ -0,0 +1 @@
export * from '@miniflare/d1'

View File

@@ -0,0 +1 @@
export * from 'drizzle-orm/relations'

View File

@@ -0,0 +1 @@
export * from 'drizzle-orm/sqlite-core'
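These one-line modules back the `./drizzle/*` subpath exports declared in the package.json above, letting downstream code import Drizzle through the adapter rather than depending on `drizzle-orm` directly. A small sketch (the table definition is made up):
```ts
import { sql } from '@payloadcms/db-d1-sqlite/drizzle'
import { integer, sqliteTable, text } from '@payloadcms/db-d1-sqlite/drizzle/sqlite-core'
// Hypothetical table defined via the proxied sqlite-core exports
export const posts = sqliteTable('posts', {
  id: integer('id').primaryKey(),
  title: text('title'),
})
```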

View File

@@ -0,0 +1,67 @@
import type { Execute } from '@payloadcms/drizzle'
import type { SQLiteRaw } from 'drizzle-orm/sqlite-core/query-builders/raw'
import { sql } from 'drizzle-orm'
interface D1Meta {
changed_db: boolean
changes: number
duration: number
last_row_id: number
rows_read: number
rows_written: number
/**
* True if-and-only-if the database instance that executed the query was the primary.
*/
served_by_primary?: boolean
/**
* The region of the database instance that executed the query.
*/
served_by_region?: string
size_after: number
timings?: {
/**
* The duration of the SQL query execution by the database instance. It doesn't include any network time.
*/
sql_duration_ms: number
}
}
interface D1Response {
error?: never
meta: D1Meta & Record<string, unknown>
success: true
}
type D1Result<T = unknown> = {
results: T[]
} & D1Response
export const execute: Execute<any> = function execute({ db, drizzle, raw, sql: statement }) {
const executeFrom: any = (db ?? drizzle)!
const mapToLibSql = (query: SQLiteRaw<D1Result<unknown>>): any => {
const execute = query.execute
query.execute = async () => {
const result: D1Result = await execute()
const resultLibSQL = {
columns: undefined,
columnTypes: undefined,
lastInsertRowid: BigInt(result.meta.last_row_id),
rows: result.results as any[],
rowsAffected: result.meta.rows_written,
}
return Object.assign(result, resultLibSQL)
}
return query
}
if (raw) {
const result = mapToLibSql(executeFrom.run(sql.raw(raw)))
return result
} else {
const result = mapToLibSql(executeFrom.run(statement))
return result
}
}

View File

@@ -0,0 +1,79 @@
import type {
Args as _Args,
CountDistinct as _CountDistinct,
DeleteWhere as _DeleteWhere,
DropDatabase as _DropDatabase,
Execute as _Execute,
GeneratedDatabaseSchema as _GeneratedDatabaseSchema,
GenericColumns as _GenericColumns,
GenericRelation as _GenericRelation,
GenericTable as _GenericTable,
IDType as _IDType,
Insert as _Insert,
MigrateDownArgs as _MigrateDownArgs,
MigrateUpArgs as _MigrateUpArgs,
SQLiteD1Adapter as _SQLiteAdapter,
SQLiteSchemaHook as _SQLiteSchemaHook,
} from '../types.js'
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type SQLiteAdapter = _SQLiteAdapter
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type Args = _Args
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type CountDistinct = _CountDistinct
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type DeleteWhere = _DeleteWhere
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type DropDatabase = _DropDatabase
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type Execute<T> = _Execute<T>
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type GeneratedDatabaseSchema = _GeneratedDatabaseSchema
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type GenericColumns = _GenericColumns
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type GenericRelation = _GenericRelation
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type GenericTable = _GenericTable
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type IDType = _IDType
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type Insert = _Insert
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type MigrateDownArgs = _MigrateDownArgs
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type MigrateUpArgs = _MigrateUpArgs
/**
 * @deprecated - import from `@payloadcms/db-d1-sqlite` instead
 */
export type SQLiteSchemaHook = _SQLiteSchemaHook

View File

@@ -0,0 +1,224 @@
import type { Operators } from '@payloadcms/drizzle'
import type { DatabaseAdapterObj, Payload } from 'payload'
import {
beginTransaction,
buildCreateMigration,
commitTransaction,
count,
countGlobalVersions,
countVersions,
create,
createGlobal,
createGlobalVersion,
createSchemaGenerator,
createVersion,
deleteMany,
deleteOne,
deleteVersions,
destroy,
find,
findGlobal,
findGlobalVersions,
findMigrationDir,
findOne,
findVersions,
migrate,
migrateDown,
migrateFresh,
migrateRefresh,
migrateReset,
migrateStatus,
operatorMap,
queryDrafts,
rollbackTransaction,
updateGlobal,
updateGlobalVersion,
updateJobs,
updateMany,
updateOne,
updateVersion,
} from '@payloadcms/drizzle'
import {
columnToCodeConverter,
convertPathToJSONTraversal,
countDistinct,
createJSONQuery,
defaultDrizzleSnapshot,
deleteWhere,
dropDatabase,
init,
insert,
requireDrizzleKit,
} from '@payloadcms/drizzle/sqlite'
import { like, notLike } from 'drizzle-orm'
import { createDatabaseAdapter, defaultBeginTransaction } from 'payload'
import { fileURLToPath } from 'url'
import type { Args, SQLiteD1Adapter } from './types.js'
import { connect } from './connect.js'
import { execute } from './execute.js'
const filename = fileURLToPath(import.meta.url)
export function sqliteD1Adapter(args: Args): DatabaseAdapterObj<SQLiteD1Adapter> {
const sqliteIDType = args.idType || 'number'
const payloadIDType = sqliteIDType === 'uuid' ? 'text' : 'number'
const allowIDOnCreate = args.allowIDOnCreate ?? false
function adapter({ payload }: { payload: Payload }) {
const migrationDir = findMigrationDir(args.migrationDir)
let resolveInitializing: () => void = () => {}
let rejectInitializing: () => void = () => {}
const initializing = new Promise<void>((res, rej) => {
resolveInitializing = res
rejectInitializing = rej
})
// SQLite's `like` operator is already case-insensitive, so we override the DrizzleAdapter operators to avoid `ilike`
const operators = {
...operatorMap,
contains: like,
like,
not_like: notLike,
} as unknown as Operators
return createDatabaseAdapter<SQLiteD1Adapter>({
name: 'sqlite',
afterSchemaInit: args.afterSchemaInit ?? [],
allowIDOnCreate,
autoIncrement: args.autoIncrement ?? false,
beforeSchemaInit: args.beforeSchemaInit ?? [],
binding: args.binding,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
client: undefined,
defaultDrizzleSnapshot,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
drizzle: undefined,
features: {
json: true,
},
fieldConstraints: {},
generateSchema: createSchemaGenerator({
columnToCodeConverter,
corePackageSuffix: 'sqlite-core',
defaultOutputFile: args.generateSchemaOutputFile,
tableImport: 'sqliteTable',
}),
idType: sqliteIDType,
initializing,
localesSuffix: args.localesSuffix || '_locales',
logger: args.logger,
operators,
prodMigrations: args.prodMigrations,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
push: args.push,
rawRelations: {},
rawTables: {},
relations: {},
relationshipsSuffix: args.relationshipsSuffix || '_rels',
schema: {},
schemaName: args.schemaName,
sessions: {},
tableNameMap: new Map<string, string>(),
tables: {},
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
execute,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
transactionOptions: args.transactionOptions || undefined,
updateJobs,
updateMany,
versionsSuffix: args.versionsSuffix || '_v',
// DatabaseAdapter
beginTransaction: args.transactionOptions ? beginTransaction : defaultBeginTransaction(),
commitTransaction,
connect,
convertPathToJSONTraversal,
count,
countDistinct,
countGlobalVersions,
countVersions,
create,
createGlobal,
createGlobalVersion,
createJSONQuery,
createMigration: buildCreateMigration({
executeMethod: 'run',
filename,
sanitizeStatements({ sqlExecute, statements }) {
return statements
.map((statement) => `${sqlExecute}${statement?.replaceAll('`', '\\`')}\`)`)
.join('\n')
},
sqlOnly: true,
}),
createVersion,
defaultIDType: payloadIDType,
deleteMany,
deleteOne,
deleteVersions,
deleteWhere,
destroy,
dropDatabase,
find,
findGlobal,
findGlobalVersions,
findOne,
findVersions,
indexes: new Set<string>(),
init,
insert,
migrate,
migrateDown,
migrateFresh,
migrateRefresh,
migrateReset,
migrateStatus,
migrationDir,
packageName: '@payloadcms/db-d1-sqlite',
payload,
queryDrafts,
rejectInitializing,
requireDrizzleKit,
resolveInitializing,
rollbackTransaction,
updateGlobal,
updateGlobalVersion,
updateOne,
updateVersion,
upsert: updateOne,
})
}
return {
name: 'd1-sqlite',
allowIDOnCreate,
defaultIDType: payloadIDType,
init: adapter,
}
}
/**
* @todo deprecate /types subpath export in 4.0
*/
export type {
Args as SQLiteAdapterArgs,
CountDistinct,
DeleteWhere,
DropDatabase,
Execute,
GeneratedDatabaseSchema,
GenericColumns,
GenericRelation,
GenericTable,
IDType,
Insert,
MigrateDownArgs,
MigrateUpArgs,
SQLiteD1Adapter as SQLiteAdapter,
SQLiteSchemaHook,
} from './types.js'
export { sql } from 'drizzle-orm'

View File

@@ -0,0 +1,202 @@
import type { Client, ResultSet } from '@libsql/client'
import type { D1Database, DatabaseBinding } from '@miniflare/d1'
import type { extendDrizzleTable } from '@payloadcms/drizzle'
import type { BaseSQLiteAdapter, BaseSQLiteArgs } from '@payloadcms/drizzle/sqlite'
import type { BuildQueryJoinAliases, DrizzleAdapter } from '@payloadcms/drizzle/types'
import type { DrizzleConfig, Relation, Relations, SQL } from 'drizzle-orm'
import type { DrizzleD1Database } from 'drizzle-orm/d1'
import type { LibSQLDatabase } from 'drizzle-orm/libsql'
import type {
AnySQLiteColumn,
SQLiteInsertOnConflictDoUpdateConfig,
SQLiteTableWithColumns,
SQLiteTransactionConfig,
} from 'drizzle-orm/sqlite-core'
import type { SQLiteRaw } from 'drizzle-orm/sqlite-core/query-builders/raw'
import type { Payload, PayloadRequest } from 'payload'
type SQLiteSchema = {
relations: Record<string, GenericRelation>
tables: Record<string, SQLiteTableWithColumns<any>>
}
type SQLiteSchemaHookArgs = {
extendTable: typeof extendDrizzleTable
schema: SQLiteSchema
}
export type SQLiteSchemaHook = (args: SQLiteSchemaHookArgs) => Promise<SQLiteSchema> | SQLiteSchema
export type Args = {
binding: DatabaseBinding
} & BaseSQLiteArgs
export type GenericColumns = {
[x: string]: AnySQLiteColumn
}
export type GenericTable = SQLiteTableWithColumns<{
columns: GenericColumns
dialect: string
name: string
schema: string
}>
export type GenericRelation = Relations<string, Record<string, Relation<string>>>
export type CountDistinct = (args: {
db: LibSQLDatabase
joins: BuildQueryJoinAliases
tableName: string
where: SQL
}) => Promise<number>
export type DeleteWhere = (args: {
db: LibSQLDatabase
tableName: string
where: SQL
}) => Promise<void>
export type DropDatabase = (args: { adapter: SQLiteD1Adapter }) => Promise<void>
export type Execute<T> = (args: {
db?: LibSQLDatabase
drizzle?: LibSQLDatabase
raw?: string
sql?: SQL<unknown>
}) => SQLiteRaw<Promise<T>> | SQLiteRaw<ResultSet>
export type Insert = (args: {
db: LibSQLDatabase
onConflictDoUpdate?: SQLiteInsertOnConflictDoUpdateConfig<any>
tableName: string
values: Record<string, unknown> | Record<string, unknown>[]
}) => Promise<Record<string, unknown>[]>
// Explicitly omit drizzle property for complete override in SQLiteAdapter, required in ts 5.5
type SQLiteDrizzleAdapter = Omit<
DrizzleAdapter,
| 'countDistinct'
| 'deleteWhere'
| 'drizzle'
| 'dropDatabase'
| 'execute'
| 'idType'
| 'insert'
| 'operators'
| 'relations'
>
export interface GeneratedDatabaseSchema {
schemaUntyped: Record<string, unknown>
}
type ResolveSchemaType<T> = 'schema' extends keyof T
? T['schema']
: GeneratedDatabaseSchema['schemaUntyped']
type Drizzle = { $client: D1Database } & DrizzleD1Database<Record<string, any>>
export type SQLiteD1Adapter = {
binding: Args['binding']
client: D1Database
drizzle: Drizzle
} & BaseSQLiteAdapter &
SQLiteDrizzleAdapter
export type IDType = 'integer' | 'numeric' | 'text'
export type MigrateUpArgs = {
/**
* The SQLite Drizzle instance that you can use to execute SQL directly within the current transaction.
* @example
* ```ts
* import { type MigrateUpArgs, sql } from '@payloadcms/db-d1-sqlite'
*
* export async function up({ db, payload, req }: MigrateUpArgs): Promise<void> {
* const { rows: posts } = await db.run(sql`SELECT * FROM posts`)
* }
* ```
*/
db: Drizzle
/**
* The Payload instance that you can use to execute Local API methods
* To use the current transaction you must pass `req` to arguments
* @example
* ```ts
* import { type MigrateUpArgs } from '@payloadcms/db-d1-sqlite'
*
* export async function up({ db, payload, req }: MigrateUpArgs): Promise<void> {
* const posts = await payload.find({ collection: 'posts', req })
* }
* ```
*/
payload: Payload
/**
* The `PayloadRequest` object that contains the current transaction
*/
req: PayloadRequest
}
export type MigrateDownArgs = {
/**
* The SQLite Drizzle instance that you can use to execute SQL directly within the current transaction.
* @example
* ```ts
* import { type MigrateDownArgs, sql } from '@payloadcms/db-d1-sqlite'
*
* export async function down({ db, payload, req }: MigrateDownArgs): Promise<void> {
* const { rows: posts } = await db.run(sql`SELECT * FROM posts`)
* }
* ```
*/
db: Drizzle
/**
* The Payload instance that you can use to execute Local API methods
* To use the current transaction you must pass `req` to arguments
* @example
* ```ts
* import { type MigrateDownArgs } from '@payloadcms/db-d1-sqlite'
*
* export async function down({ db, payload, req }: MigrateDownArgs): Promise<void> {
* const posts = await payload.find({ collection: 'posts', req })
* }
* ```
*/
payload: Payload
/**
* The `PayloadRequest` object that contains the current transaction
*/
req: PayloadRequest
}
declare module 'payload' {
export interface DatabaseAdapter
extends Omit<Args, 'idType' | 'logger' | 'migrationDir' | 'pool'>,
DrizzleAdapter {
beginTransaction: (options?: SQLiteTransactionConfig) => Promise<null | number | string>
drizzle: Drizzle
/**
* An object keyed on each table, with a key value pair where the constraint name is the key, followed by the dot-notation field name
* Used for returning properly formed errors from unique fields
*/
fieldConstraints: Record<string, Record<string, string>>
idType: Args['idType']
initializing: Promise<void>
localesSuffix?: string
logger: DrizzleConfig['logger']
prodMigrations?: {
down: (args: MigrateDownArgs) => Promise<void>
name: string
up: (args: MigrateUpArgs) => Promise<void>
}[]
push: boolean
rejectInitializing: () => void
relationshipsSuffix?: string
resolveInitializing: () => void
schema: Record<string, GenericRelation | GenericTable>
tableNameMap: Map<string, string>
transactionOptions: SQLiteTransactionConfig
versionsSuffix?: string
}
}

View File

@@ -0,0 +1,14 @@
{
"extends": "../../tsconfig.base.json",
"references": [
{
"path": "../payload"
},
{
"path": "../translations"
},
{
"path": "../drizzle"
}
]
}

View File

@@ -1,6 +1,6 @@
{
"name": "@payloadcms/db-mongodb",
"version": "3.47.0",
"version": "3.49.0",
"description": "The officially supported MongoDB database adapter for Payload",
"homepage": "https://payloadcms.com",
"repository": {

View File

@@ -36,6 +36,25 @@ export const connect: Connect = async function connect(
try {
this.connection = (await mongoose.connect(urlToConnect, connectionOptions)).connection
if (this.useAlternativeDropDatabase) {
if (this.connection.db) {
// Firestore doesn't support dropDatabase, so we monkey patch
// dropDatabase to delete all documents from all collections instead
this.connection.db.dropDatabase = async function (): Promise<boolean> {
const existingCollections = await this.listCollections().toArray()
await Promise.all(
existingCollections.map(async (collectionInfo) => {
const collection = this.collection(collectionInfo.name)
await collection.deleteMany({})
}),
)
return true
}
this.connection.dropDatabase = async function () {
await this.db?.dropDatabase()
}
}
}
// If we are running a replica set with MongoDB Memory Server,
// wait until the replica set elects a primary before proceeding

View File

@@ -12,6 +12,7 @@ import { buildJoinAggregation } from './utilities/buildJoinAggregation.js'
import { buildProjectionFromSelect } from './utilities/buildProjectionFromSelect.js'
import { getCollection } from './utilities/getEntity.js'
import { getSession } from './utilities/getSession.js'
import { resolveJoins } from './utilities/resolveJoins.js'
import { transform } from './utilities/transform.js'
export const find: Find = async function find(
@@ -155,6 +156,16 @@ export const find: Find = async function find(
result = await Model.paginate(query, paginationOptions)
}
if (!this.useJoinAggregations) {
await resolveJoins({
adapter: this,
collectionSlug,
docs: result.docs as Record<string, unknown>[],
joins,
locale,
})
}
transform({
adapter: this,
data: result.docs,

View File

@@ -0,0 +1,141 @@
import type { PipelineStage } from 'mongoose'
import { type FindDistinct, getFieldByPath } from 'payload'
import type { MongooseAdapter } from './index.js'
import { buildQuery } from './queries/buildQuery.js'
import { buildSortParam } from './queries/buildSortParam.js'
import { getCollection } from './utilities/getEntity.js'
import { getSession } from './utilities/getSession.js'
export const findDistinct: FindDistinct = async function (this: MongooseAdapter, args) {
const { collectionConfig, Model } = getCollection({
adapter: this,
collectionSlug: args.collection,
})
const session = await getSession(this, args.req)
const { where = {} } = args
const sortAggregation: PipelineStage[] = []
const sort = buildSortParam({
adapter: this,
config: this.payload.config,
fields: collectionConfig.flattenedFields,
locale: args.locale,
sort: args.sort ?? args.field,
sortAggregation,
timestamps: true,
})
const query = await buildQuery({
adapter: this,
collectionSlug: args.collection,
fields: collectionConfig.flattenedFields,
locale: args.locale,
where,
})
const fieldPathResult = getFieldByPath({
fields: collectionConfig.flattenedFields,
path: args.field,
})
let fieldPath = args.field
if (fieldPathResult?.pathHasLocalized && args.locale) {
fieldPath = fieldPathResult.localizedPath.replace('<locale>', args.locale)
}
const page = args.page || 1
const sortProperty = Object.keys(sort)[0]! // assert because buildSortParam always returns at least 1 key.
const sortDirection = sort[sortProperty] === 'asc' ? 1 : -1
const pipeline: PipelineStage[] = [
{
$match: query,
},
...(sortAggregation.length > 0 ? sortAggregation : []),
{
$group: {
_id: {
_field: `$${fieldPath}`,
...(sortProperty === fieldPath
? {}
: {
_sort: `$${sortProperty}`,
}),
},
},
},
{
$sort: {
[sortProperty === fieldPath ? '_id._field' : '_id._sort']: sortDirection,
},
},
]
const getValues = async () => {
return Model.aggregate(pipeline, { session }).then((res) =>
res.map((each) => ({
[args.field]: JSON.parse(JSON.stringify(each._id._field)),
})),
)
}
if (args.limit) {
pipeline.push({
$skip: (page - 1) * args.limit,
})
pipeline.push({ $limit: args.limit })
const totalDocs = await Model.aggregate(
[
{
$match: query,
},
{
$group: {
_id: `$${fieldPath}`,
},
},
{ $count: 'count' },
],
{
session,
},
).then((res) => res[0]?.count ?? 0)
const totalPages = Math.ceil(totalDocs / args.limit)
const hasPrevPage = page > 1
const hasNextPage = totalPages > page
const pagingCounter = (page - 1) * args.limit + 1
return {
hasNextPage,
hasPrevPage,
limit: args.limit,
nextPage: hasNextPage ? page + 1 : null,
page,
pagingCounter,
prevPage: hasPrevPage ? page - 1 : null,
totalDocs,
totalPages,
values: await getValues(),
}
}
const values = await getValues()
return {
hasNextPage: false,
hasPrevPage: false,
limit: 0,
page: 1,
pagingCounter: 1,
totalDocs: values.length,
totalPages: 1,
values,
}
}
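With `findDistinct` wired into the adapter (see the index diff below), a hedged usage sketch via `payload.db` (collection and field names are placeholders; the shape of `values` follows the `getValues` mapping above):
```ts
const { totalDocs, values } = await payload.db.findDistinct({
  collection: 'posts',
  field: 'category',
  limit: 10,
  page: 1,
})
// values => [{ category: 'news' }, { category: 'tutorials' }, ...]
```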

View File

@@ -10,6 +10,7 @@ import { buildJoinAggregation } from './utilities/buildJoinAggregation.js'
import { buildProjectionFromSelect } from './utilities/buildProjectionFromSelect.js'
import { getCollection } from './utilities/getEntity.js'
import { getSession } from './utilities/getSession.js'
import { resolveJoins } from './utilities/resolveJoins.js'
import { transform } from './utilities/transform.js'
export const findOne: FindOne = async function findOne(
@@ -67,6 +68,16 @@ export const findOne: FindOne = async function findOne(
doc = await Model.findOne(query, {}, options)
}
if (doc && !this.useJoinAggregations) {
await resolveJoins({
adapter: this,
collectionSlug,
docs: [doc] as Record<string, unknown>[],
joins,
locale,
})
}
if (!doc) {
return null
}

View File

@@ -42,6 +42,7 @@ import { deleteOne } from './deleteOne.js'
import { deleteVersions } from './deleteVersions.js'
import { destroy } from './destroy.js'
import { find } from './find.js'
import { findDistinct } from './findDistinct.js'
import { findGlobal } from './findGlobal.js'
import { findGlobalVersions } from './findGlobalVersions.js'
import { findOne } from './findOne.js'
@@ -143,6 +144,29 @@ export interface Args {
/** The URL to connect to MongoDB or false to start payload and prevent connecting */
url: false | string
/**
* Set to `true` to use an alternative `dropDatabase` implementation that calls `collection.deleteMany({})` on every collection instead of sending a raw `dropDatabase` command.
* Payload only uses `dropDatabase` for testing purposes.
* @default false
*/
useAlternativeDropDatabase?: boolean
/**
* Set to `true` to use `BigInt` for custom ID fields of type `'number'`.
* Useful for databases that don't support `double` or `int32` IDs.
* @default false
*/
useBigIntForNumberIDs?: boolean
/**
* Set to `false` to disable join aggregations (which use correlated subqueries) and instead populate join fields via multiple `find` queries.
* @default true
*/
useJoinAggregations?: boolean
/**
* Set to `false` to disable the use of `pipeline` in the `$lookup` aggregation in sorting.
* @default true
*/
usePipelineInSortLookup?: boolean
}
export type MongooseAdapter = {
@@ -159,6 +183,10 @@ export type MongooseAdapter = {
up: (args: MigrateUpArgs) => Promise<void>
}[]
sessions: Record<number | string, ClientSession>
useAlternativeDropDatabase: boolean
useBigIntForNumberIDs: boolean
useJoinAggregations: boolean
usePipelineInSortLookup: boolean
versions: {
[slug: string]: CollectionModel
}
@@ -194,6 +222,10 @@ declare module 'payload' {
updateVersion: <T extends TypeWithID = TypeWithID>(
args: { options?: QueryOptions } & UpdateVersionArgs<T>,
) => Promise<TypeWithVersion<T>>
useAlternativeDropDatabase: boolean
useBigIntForNumberIDs: boolean
useJoinAggregations: boolean
usePipelineInSortLookup: boolean
versions: {
[slug: string]: CollectionModel
}
@@ -214,6 +246,10 @@ export function mongooseAdapter({
prodMigrations,
transactionOptions = {},
url,
useAlternativeDropDatabase = false,
useBigIntForNumberIDs = false,
useJoinAggregations = true,
usePipelineInSortLookup = true,
}: Args): DatabaseAdapterObj {
function adapter({ payload }: { payload: Payload }) {
const migrationDir = findMigrationDir(migrationDirArg)
@@ -262,6 +298,7 @@ export function mongooseAdapter({
destroy,
disableFallbackSort,
find,
findDistinct,
findGlobal,
findGlobalVersions,
findOne,
@@ -279,6 +316,10 @@ export function mongooseAdapter({
updateOne,
updateVersion,
upsert,
useAlternativeDropDatabase,
useBigIntForNumberIDs,
useJoinAggregations,
usePipelineInSortLookup,
})
}
@@ -290,6 +331,8 @@ export function mongooseAdapter({
}
}
export { compatabilityOptions } from './utilities/compatabilityOptions.js'
/**
* Attempt to find migrations directory.
*

View File

@@ -143,7 +143,12 @@ export const buildSchema = (args: {
const idField = schemaFields.find((field) => fieldAffectsData(field) && field.name === 'id')
if (idField) {
fields = {
_id: idField.type === 'number' ? Number : String,
_id:
idField.type === 'number'
? payload.db.useBigIntForNumberIDs
? mongoose.Schema.Types.BigInt
: Number
: String,
}
schemaFields = schemaFields.filter(
(field) => !(fieldAffectsData(field) && field.name === 'id'),
@@ -900,7 +905,11 @@ const getRelationshipValueType = (field: RelationshipField | UploadField, payloa
}
if (customIDType === 'number') {
return mongoose.Schema.Types.Number
if (payload.db.useBigIntForNumberIDs) {
return mongoose.Schema.Types.BigInt
} else {
return mongoose.Schema.Types.Number
}
}
return mongoose.Schema.Types.String

View File

@@ -99,31 +99,57 @@ const relationshipSort = ({
sortFieldPath = foreignFieldPath.localizedPath.replace('<locale>', locale)
}
if (
!sortAggregation.some((each) => {
return '$lookup' in each && each.$lookup.as === `__${path}`
})
) {
const as = `__${relationshipPath.replace(/\./g, '__')}`
// If we have not already sorted on this relationship yet, we need to add a lookup stage
if (!sortAggregation.some((each) => '$lookup' in each && each.$lookup.as === as)) {
let localField = versions ? `version.${relationshipPath}` : relationshipPath
if (adapter.usePipelineInSortLookup) {
const flattenedField = `__${localField.replace(/\./g, '__')}_lookup`
sortAggregation.push({
$addFields: {
[flattenedField]: `$${localField}`,
},
})
localField = flattenedField
}
sortAggregation.push({
$lookup: {
as: `__${path}`,
as,
foreignField: '_id',
from: foreignCollection.Model.collection.name,
localField: versions ? `version.${relationshipPath}` : relationshipPath,
pipeline: [
{
$project: {
[sortFieldPath]: true,
localField,
...(!adapter.usePipelineInSortLookup && {
pipeline: [
{
$project: {
[sortFieldPath]: true,
},
},
},
],
],
}),
},
})
sort[`__${path}.${sortFieldPath}`] = sortDirection
return true
if (adapter.usePipelineInSortLookup) {
sortAggregation.push({
$unset: localField,
})
}
}
if (!adapter.usePipelineInSortLookup) {
const lookup = sortAggregation.find(
(each) => '$lookup' in each && each.$lookup.as === as,
) as PipelineStage.Lookup
const pipeline = lookup.$lookup.pipeline![0] as PipelineStage.Project
pipeline.$project[sortFieldPath] = true
}
sort[`${as}.${sortFieldPath}`] = sortDirection
return true
}
}

View File

@@ -12,6 +12,7 @@ import { buildJoinAggregation } from './utilities/buildJoinAggregation.js'
import { buildProjectionFromSelect } from './utilities/buildProjectionFromSelect.js'
import { getCollection } from './utilities/getEntity.js'
import { getSession } from './utilities/getSession.js'
import { resolveJoins } from './utilities/resolveJoins.js'
import { transform } from './utilities/transform.js'
export const queryDrafts: QueryDrafts = async function queryDrafts(
@@ -158,6 +159,17 @@ export const queryDrafts: QueryDrafts = async function queryDrafts(
result = await Model.paginate(versionQuery, paginationOptions)
}
if (!this.useJoinAggregations) {
await resolveJoins({
adapter: this,
collectionSlug,
docs: result.docs as Record<string, unknown>[],
joins,
locale,
versions: true,
})
}
transform({
adapter: this,
data: result.docs,

View File

@@ -1,4 +1,4 @@
import type { MongooseUpdateQueryOptions } from 'mongoose'
import type { MongooseUpdateQueryOptions, UpdateQuery } from 'mongoose'
import type { UpdateOne } from 'payload'
import type { MongooseAdapter } from './index.js'
@@ -50,15 +50,20 @@ export const updateOne: UpdateOne = async function updateOne(
let result
transform({ adapter: this, data, fields, operation: 'write' })
const $inc: Record<string, number> = {}
let updateData: UpdateQuery<any> = data
transform({ $inc, adapter: this, data, fields, operation: 'write' })
if (Object.keys($inc).length) {
updateData = { $inc, $set: updateData }
}
try {
if (returning === false) {
await Model.updateOne(query, data, options)
await Model.updateOne(query, updateData, options)
transform({ adapter: this, data, fields, operation: 'read' })
return null
} else {
result = await Model.findOneAndUpdate(query, data, options)
result = await Model.findOneAndUpdate(query, updateData, options)
}
} catch (error) {
handleError({ collection: collectionSlug, error, req })

View File

@@ -76,7 +76,11 @@ export const aggregatePaginate = async ({
countPromise = Model.estimatedDocumentCount(query)
} else {
const hint = adapter.disableIndexHints !== true ? { _id: 1 } : undefined
countPromise = Model.countDocuments(query, { collation, hint, session })
countPromise = Model.countDocuments(query, {
collation,
session,
...(hint ? { hint } : {}),
})
}
}

View File

@@ -44,6 +44,9 @@ export const buildJoinAggregation = async ({
projection,
versions,
}: BuildJoinAggregationArgs): Promise<PipelineStage[] | undefined> => {
if (!adapter.useJoinAggregations) {
return
}
if (
(Object.keys(collectionConfig.joins).length === 0 &&
collectionConfig.polymorphicJoins.length == 0) ||

View File

@@ -0,0 +1,25 @@
import type { Args } from '../index.js'
/**
* Each key is a MongoDB-compatible database and the value
* is the recommended `mongooseAdapter` settings for compatibility.
*/
export const compatabilityOptions = {
cosmosdb: {
transactionOptions: false,
useJoinAggregations: false,
usePipelineInSortLookup: false,
},
documentdb: {
disableIndexHints: true,
},
firestore: {
disableIndexHints: true,
ensureIndexes: false,
transactionOptions: false,
useAlternativeDropDatabase: true,
useBigIntForNumberIDs: true,
useJoinAggregations: false,
usePipelineInSortLookup: false,
},
} satisfies Record<string, Partial<Args>>
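A minimal sketch of consuming these presets (the named export comes from the adapter index shown earlier; the connection string is a placeholder):
```ts
import { compatabilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'
// Spread the recommended Firestore flags into the adapter config;
// individual flags can still be overridden after the spread
export const db = mongooseAdapter({
  url: process.env.DATABASE_URI || false,
  ...compatabilityOptions.firestore,
})
```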

View File

@@ -2,6 +2,15 @@ import type { PayloadRequest } from 'payload'
import { ValidationError } from 'payload'
function extractFieldFromMessage(message: string) {
// eslint-disable-next-line regexp/no-super-linear-backtracking
const match = message.match(/index:\s*(.*?)_/)
if (match && match[1]) {
return match[1] // e.g., returns "email" from "index: email_1"
}
return null
}
export const handleError = ({
collection,
error,
@@ -18,20 +27,22 @@ export const handleError = ({
}
// Handle uniqueness error from MongoDB
if (
'code' in error &&
error.code === 11000 &&
'keyValue' in error &&
error.keyValue &&
typeof error.keyValue === 'object'
) {
if ('code' in error && error.code === 11000) {
let path: null | string = null
if ('keyValue' in error && error.keyValue && typeof error.keyValue === 'object') {
path = Object.keys(error.keyValue)[0] ?? ''
} else if ('message' in error && typeof error.message === 'string') {
path = extractFieldFromMessage(error.message)
}
throw new ValidationError(
{
collection,
errors: [
{
message: req?.t ? req.t('error:valueMustBeUnique') : 'Value must be unique',
path: Object.keys(error.keyValue)[0] ?? '',
path: path ?? '',
},
],
global,

View File

@@ -0,0 +1,647 @@
import type { JoinQuery, SanitizedJoin, Where } from 'payload'
import {
appendVersionToQueryKey,
buildVersionCollectionFields,
combineQueries,
getQueryDraftsSort,
} from 'payload'
import { fieldShouldBeLocalized } from 'payload/shared'
import type { MongooseAdapter } from '../index.js'
import { buildQuery } from '../queries/buildQuery.js'
import { buildSortParam } from '../queries/buildSortParam.js'
import { transform } from './transform.js'
export type ResolveJoinsArgs = {
/** The MongoDB adapter instance */
adapter: MongooseAdapter
/** The slug of the collection being queried */
collectionSlug: string
/** Array of documents to resolve joins for */
docs: Record<string, unknown>[]
/** Join query specifications (which joins to resolve and how) */
joins?: JoinQuery
/** Optional locale for localized queries */
locale?: string
/** Optional projection for the join query */
projection?: Record<string, true>
/** Whether to resolve versions instead of published documents */
versions?: boolean
}
/**
* Resolves join relationships for a collection of documents.
* This function fetches related documents based on join configurations and
* attaches them to the original documents with pagination support.
*/
export async function resolveJoins({
adapter,
collectionSlug,
docs,
joins,
locale,
projection,
versions = false,
}: ResolveJoinsArgs): Promise<void> {
// Early return if no joins are specified or no documents to process
if (!joins || docs.length === 0) {
return
}
// Get the collection configuration from the adapter
const collectionConfig = adapter.payload.collections[collectionSlug]?.config
if (!collectionConfig) {
return
}
// Build a map of join paths to their configurations for quick lookup
// This flattens the nested join structure into a single map keyed by join path
const joinMap: Record<string, { targetCollection: string } & SanitizedJoin> = {}
// Add regular joins
for (const [target, joinList] of Object.entries(collectionConfig.joins)) {
for (const join of joinList) {
joinMap[join.joinPath] = { ...join, targetCollection: target }
}
}
// Add polymorphic joins
for (const join of collectionConfig.polymorphicJoins || []) {
// For polymorphic joins, we use the collections array as the target
joinMap[join.joinPath] = { ...join, targetCollection: join.field.collection as string }
}
// Process each requested join concurrently
const joinPromises = Object.entries(joins).map(async ([joinPath, joinQuery]) => {
if (!joinQuery) {
return null
}
// If a projection is provided, and the join path is not in the projection, skip it
if (projection && !projection[joinPath]) {
return null
}
// Get the join definition from our map
const joinDef = joinMap[joinPath]
if (!joinDef) {
return null
}
// Normalize collections to always be an array for unified processing
const allCollections = Array.isArray(joinDef.field.collection)
? joinDef.field.collection
: [joinDef.field.collection]
// Use the provided locale or fall back to the default locale for localized fields
const localizationConfig = adapter.payload.config.localization
const effectiveLocale =
locale ||
(typeof localizationConfig === 'object' &&
localizationConfig &&
localizationConfig.defaultLocale)
// Extract relationTo filter from the where clause to determine which collections to query
const relationToFilter = extractRelationToFilter(joinQuery.where || {})
// Determine which collections to query based on relationTo filter
const collections = relationToFilter
? allCollections.filter((col) => relationToFilter.includes(col))
: allCollections
// Check if this is a polymorphic collection join (where field.collection is an array)
const isPolymorphicJoin = Array.isArray(joinDef.field.collection)
// Apply pagination settings
const limit = joinQuery.limit ?? joinDef.field.defaultLimit ?? 10
const page = joinQuery.page ?? 1
const skip = (page - 1) * limit
// Process collections concurrently
const collectionPromises = collections.map(async (joinCollectionSlug) => {
const targetConfig = adapter.payload.collections[joinCollectionSlug]?.config
if (!targetConfig) {
return null
}
const useDrafts = versions && Boolean(targetConfig.versions?.drafts)
let JoinModel
if (useDrafts) {
JoinModel = adapter.versions[targetConfig.slug]
} else {
JoinModel = adapter.collections[targetConfig.slug]
}
if (!JoinModel) {
return null
}
// Extract all parent document IDs to use in the join query
const parentIDs = docs.map((d) => (versions ? (d.parent ?? d._id ?? d.id) : (d._id ?? d.id)))
// Build the base query, pruning fields that don't exist on this collection
// when the join is polymorphic
let whereQuery: null | Record<string, unknown> = isPolymorphicJoin
? filterWhereForCollection(
joinQuery.where || {},
targetConfig.flattenedFields,
true, // exclude relationTo for individual collections
)
: joinQuery.where || {}
// Skip this collection if the WHERE clause cannot be satisfied for polymorphic collection joins
if (whereQuery === null) {
return null
}
whereQuery = useDrafts
? await JoinModel.buildQuery({
locale,
payload: adapter.payload,
where: combineQueries(appendVersionToQueryKey(whereQuery as Where), {
latest: {
equals: true,
},
}),
})
: await buildQuery({
adapter,
collectionSlug: joinCollectionSlug,
fields: targetConfig.flattenedFields,
locale,
where: whereQuery as Where,
})
// Handle localized paths and version prefixes
let dbFieldName = joinDef.field.on
if (effectiveLocale && typeof localizationConfig === 'object' && localizationConfig) {
const pathSegments = joinDef.field.on.split('.')
const transformedSegments: string[] = []
const fields = useDrafts
? buildVersionCollectionFields(adapter.payload.config, targetConfig, true)
: targetConfig.flattenedFields
for (let i = 0; i < pathSegments.length; i++) {
const segment = pathSegments[i]!
transformedSegments.push(segment)
// Check if this segment corresponds to a localized field
const fieldAtSegment = fields.find((f) => f.name === segment)
if (fieldAtSegment && fieldAtSegment.localized) {
transformedSegments.push(effectiveLocale)
}
}
dbFieldName = transformedSegments.join('.')
}
// Add version prefix for draft queries
if (useDrafts) {
dbFieldName = `version.${dbFieldName}`
}
// Check if the target field is a polymorphic relationship
const isPolymorphic = joinDef.targetField
? Array.isArray(joinDef.targetField.relationTo)
: false
if (isPolymorphic) {
// For polymorphic relationships, we need to match both relationTo and value
whereQuery[`${dbFieldName}.relationTo`] = collectionSlug
whereQuery[`${dbFieldName}.value`] = { $in: parentIDs }
} else {
// For regular relationships and polymorphic collection joins
whereQuery[dbFieldName] = { $in: parentIDs }
}
// Build the sort parameters for the query
const fields = useDrafts
? buildVersionCollectionFields(adapter.payload.config, targetConfig, true)
: targetConfig.flattenedFields
const sort = buildSortParam({
adapter,
config: adapter.payload.config,
fields,
locale,
sort: useDrafts
? getQueryDraftsSort({
collectionConfig: targetConfig,
sort: joinQuery.sort || joinDef.field.defaultSort || targetConfig.defaultSort,
})
: joinQuery.sort || joinDef.field.defaultSort || targetConfig.defaultSort,
timestamps: true,
})
// Named joinProjection to avoid shadowing the outer `projection` argument
const joinProjection = buildJoinProjection(dbFieldName, useDrafts, sort)
const [results, dbCount] = await Promise.all([
JoinModel.find(whereQuery, joinProjection, {
sort,
...(isPolymorphicJoin ? {} : { limit, skip }),
}).lean(),
isPolymorphicJoin ? Promise.resolve(0) : JoinModel.countDocuments(whereQuery),
])
const count = isPolymorphicJoin ? results.length : dbCount
transform({
adapter,
data: results,
fields: useDrafts
? buildVersionCollectionFields(adapter.payload.config, targetConfig, false)
: targetConfig.fields,
operation: 'read',
})
// Return results with collection info for grouping
return {
collectionSlug: joinCollectionSlug,
count,
dbFieldName,
results,
sort,
useDrafts,
}
})
const collectionResults = await Promise.all(collectionPromises)
// Group the results by parent ID
const grouped: Record<
string,
{
docs: Record<string, unknown>[]
sort: Record<string, string>
}
> = {}
let totalCount = 0
for (const collectionResult of collectionResults) {
if (!collectionResult) {
continue
}
const { collectionSlug, count, dbFieldName, results, sort, useDrafts } = collectionResult
totalCount += count
for (const result of results) {
if (useDrafts) {
result.id = result.parent
}
const parentValues = getByPathWithArrays(result, dbFieldName) as (
| { relationTo: string; value: number | string }
| number
| string
)[]
if (parentValues.length === 0) {
continue
}
for (let parentValue of parentValues) {
if (!parentValue) {
continue
}
if (typeof parentValue === 'object') {
parentValue = parentValue.value
}
const joinData = {
relationTo: collectionSlug,
value: result.id,
}
const parentKey = parentValue as string
if (!grouped[parentKey]) {
grouped[parentKey] = {
docs: [],
sort,
}
}
// Keep the full result for in-memory sorting; the polymorphic-format reference
// is attached as __joinData and extracted after sorting below
grouped[parentKey].docs.push({
...result,
__joinData: joinData,
})
}
}
}
for (const results of Object.values(grouped)) {
results.docs.sort((a, b) => {
for (const [fieldName, sortOrder] of Object.entries(results.sort)) {
const direction = sortOrder === 'asc' ? 1 : -1
const aValue = a[fieldName] as Date | number | string
const bValue = b[fieldName] as Date | number | string
if (aValue < bValue) {
return -1 * direction
}
if (aValue > bValue) {
return 1 * direction
}
}
return 0
})
results.docs = results.docs.map(
(doc) => (isPolymorphicJoin ? doc.__joinData : doc.id) as Record<string, unknown>,
)
}
// Determine if the join field should be localized
const localeSuffix =
fieldShouldBeLocalized({
field: joinDef.field,
parentIsLocalized: joinDef.parentIsLocalized,
}) &&
adapter.payload.config.localization &&
effectiveLocale
? `.${effectiveLocale}`
: ''
// Adjust the join path with locale suffix if needed
const localizedJoinPath = `${joinPath}${localeSuffix}`
return {
grouped,
isPolymorphicJoin,
joinQuery,
limit,
localizedJoinPath,
page,
skip,
totalCount,
}
})
// Wait for all join operations to complete
const joinResults = await Promise.all(joinPromises)
// Process the results and attach them to documents
for (const joinResult of joinResults) {
if (!joinResult) {
continue
}
const { grouped, isPolymorphicJoin, joinQuery, limit, localizedJoinPath, skip, totalCount } =
joinResult
// Attach the joined data to each parent document
for (const doc of docs) {
const id = (versions ? (doc.parent ?? doc._id ?? doc.id) : (doc._id ?? doc.id)) as string
const all = grouped[id]?.docs || []
// Calculate the slice for pagination
// When limit is 0, it means unlimited - return all results
const slice = isPolymorphicJoin
? limit === 0
? all
: all.slice(skip, skip + limit)
: // For non-polymorphic joins, we assume that page and limit were applied at the database level
all
// Create the join result object with pagination metadata
const value: Record<string, unknown> = {
docs: slice,
hasNextPage: limit === 0 ? false : totalCount > skip + slice.length,
}
// Include total count if requested
if (joinQuery.count) {
value.totalDocs = totalCount
}
// Navigate to the correct nested location in the document and set the join data
// This handles nested join paths like "user.posts" by creating intermediate objects
const segments = localizedJoinPath.split('.')
let ref: Record<string, unknown>
if (versions) {
if (!doc.version) {
doc.version = {}
}
ref = doc.version as Record<string, unknown>
} else {
ref = doc
}
for (let i = 0; i < segments.length - 1; i++) {
const seg = segments[i]!
if (!ref[seg]) {
ref[seg] = {}
}
ref = ref[seg] as Record<string, unknown>
}
// Set the final join data at the target path
ref[segments[segments.length - 1]!] = value
}
}
}
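/*
 * Usage sketch (illustrative only; the slug, join path, and query values below
 * are hypothetical). A find operation would call resolveJoins after fetching
 * the parent documents; each doc is mutated in place with a paginated
 * { docs, hasNextPage } object at the join path:
 *
 *   await resolveJoins({
 *     adapter,
 *     collectionSlug: 'categories',
 *     docs,
 *     joins: { posts: { limit: 10, page: 1, sort: '-createdAt' } },
 *     locale: 'en',
 *   })
 *   // docs[0].posts => { docs: [id1, id2, ...], hasNextPage: false }
 */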
/**
* Extracts relationTo filter values from a WHERE clause
* @param where - The WHERE clause to search
* @returns Array of collection slugs if relationTo filter found, null otherwise
*/
function extractRelationToFilter(where: Record<string, unknown>): null | string[] {
if (!where || typeof where !== 'object') {
return null
}
// Check for direct relationTo conditions
if (where.relationTo && typeof where.relationTo === 'object') {
const relationTo = where.relationTo as Record<string, unknown>
if (relationTo.in && Array.isArray(relationTo.in)) {
return relationTo.in as string[]
}
if (relationTo.equals) {
return [relationTo.equals as string]
}
}
// Check for relationTo in logical operators
if (where.and && Array.isArray(where.and)) {
for (const condition of where.and) {
const result = extractRelationToFilter(condition)
if (result) {
return result
}
}
}
if (where.or && Array.isArray(where.or)) {
for (const condition of where.or) {
const result = extractRelationToFilter(condition)
if (result) {
return result
}
}
}
return null
}
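/*
 * Example (assumed inputs, not taken from the test suite): a where clause like
 *
 *   { and: [{ relationTo: { in: ['posts', 'pages'] } }, { title: { like: 'foo' } }] }
 *
 * yields ['posts', 'pages'], so only those collections are queried, while a
 * clause with no relationTo condition, e.g. { title: { like: 'foo' } },
 * yields null and every collection in the join is queried.
 */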
/**
* Filters a WHERE clause to only include fields that exist in the target collection
* This is needed for polymorphic joins where different collections have different fields
* @param where - The original WHERE clause
* @param availableFields - The fields available in the target collection
* @param excludeRelationTo - Whether to exclude relationTo field (for individual collections)
* @returns A filtered WHERE clause, or null if the query cannot match this collection
*/
function filterWhereForCollection(
where: Record<string, unknown>,
availableFields: Array<{ name: string }>,
excludeRelationTo: boolean = false,
): null | Record<string, unknown> {
if (!where || typeof where !== 'object') {
return where
}
const fieldNames = new Set(availableFields.map((f) => f.name))
// Add special fields that are available in polymorphic relationships
if (!excludeRelationTo) {
fieldNames.add('relationTo')
}
const filtered: Record<string, unknown> = {}
for (const [key, value] of Object.entries(where)) {
if (key === 'and') {
// Handle AND operator - all conditions must be satisfiable
if (Array.isArray(value)) {
const filteredConditions: Record<string, unknown>[] = []
for (const condition of value) {
const filteredCondition = filterWhereForCollection(
condition,
availableFields,
excludeRelationTo,
)
// If any condition in AND cannot be satisfied, the whole AND fails
if (filteredCondition === null) {
return null
}
if (Object.keys(filteredCondition).length > 0) {
filteredConditions.push(filteredCondition)
}
}
if (filteredConditions.length > 0) {
filtered[key] = filteredConditions
}
}
} else if (key === 'or') {
// Handle OR operator - at least one condition must be satisfiable
if (Array.isArray(value)) {
const filteredConditions = value
.map((condition) =>
filterWhereForCollection(condition, availableFields, excludeRelationTo),
)
.filter((condition) => condition !== null && Object.keys(condition).length > 0)
if (filteredConditions.length > 0) {
filtered[key] = filteredConditions
}
// If no OR conditions can be satisfied, we still continue (OR is more permissive)
}
} else if (key === 'relationTo' && excludeRelationTo) {
// Skip relationTo field for non-polymorphic collections
continue
} else if (fieldNames.has(key)) {
// Include the condition if the field exists in this collection
filtered[key] = value
} else {
// Field doesn't exist in this collection - this makes the query unsatisfiable
return null
}
}
return filtered
}
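/*
 * Example (hypothetical fields): given availableFields = [{ name: 'title' }],
 *
 *   filterWhereForCollection(
 *     { or: [{ title: { equals: 'a' } }, { price: { equals: 1 } }] },
 *     [{ name: 'title' }],
 *   )
 *
 * returns { or: [{ title: { equals: 'a' } }] }, dropping the unsatisfiable
 * `price` branch, whereas a top-level { price: { equals: 1 } } returns null
 * to signal that this collection can never match the query.
 */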
type SanitizedJoin = SanitizedJoins[string][number]
/**
* Builds projection for join queries
*/
function buildJoinProjection(
baseFieldName: string,
useDrafts: boolean,
sort: Record<string, string>,
): Record<string, 1> {
const projection: Record<string, 1> = {
_id: 1,
[baseFieldName]: 1,
}
if (useDrafts) {
projection.parent = 1
}
for (const fieldName of Object.keys(sort)) {
projection[fieldName] = 1
}
return projection
}
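/*
 * For instance, buildJoinProjection('version.category', true, { createdAt: 'desc' })
 * yields { _id: 1, 'version.category': 1, parent: 1, createdAt: 1 }: just enough
 * to group results by parent ID and sort them in memory without fetching full
 * documents.
 */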
/**
 * Safely traverses nested object properties using dot notation.
 * Fans out across array elements when a path segment resolves to an array.
 * @param doc - The document to traverse
 * @param path - Dot-separated path (e.g., "array.category")
 * @returns Array of values found at the specified path; a single value is
 * wrapped in an array, and a missing segment yields an empty array
 */
function getByPathWithArrays(doc: unknown, path: string): unknown[] {
const segments = path.split('.')
let current = doc
for (let i = 0; i < segments.length; i++) {
const segment = segments[i]!
if (current === undefined || current === null) {
return []
}
// Get the value at the current segment
const value = (current as Record<string, unknown>)[segment]
if (value === undefined || value === null) {
return []
}
// If this is the last segment, return the value(s)
if (i === segments.length - 1) {
return Array.isArray(value) ? value : [value]
}
// If the value is an array and we have more segments to traverse
if (Array.isArray(value)) {
const remainingPath = segments.slice(i + 1).join('.')
const results: unknown[] = []
// Search through each array element
for (const item of value) {
if (item && typeof item === 'object') {
const subResults = getByPathWithArrays(item, remainingPath)
results.push(...subResults)
}
}
return results
}
// Continue traversing
current = value
}
return []
}
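/*
 * Example (assumed document shape): for
 *
 *   { items: [{ category: 'news' }, { category: 'tech' }, {}] }
 *
 * getByPathWithArrays(doc, 'items.category') fans out across the array and
 * returns ['news', 'tech']; a missing segment anywhere along the path
 * short-circuits to [].
 */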

View File

@@ -208,6 +208,7 @@ const sanitizeDate = ({
}
type Args = {
$inc?: Record<string, number>
/** instance of the adapter */
adapter: MongooseAdapter
/** data to transform, can be an array of documents or a single document */
@@ -396,6 +397,7 @@ const stripFields = ({
}
export const transform = ({
$inc,
adapter,
data,
fields,
@@ -404,9 +406,13 @@ export const transform = ({
parentIsLocalized = false,
validateRelationships = true,
}: Args) => {
if (!data) {
return null
}
if (Array.isArray(data)) {
for (const item of data) {
transform({ adapter, data: item, fields, globalSlug, operation, validateRelationships })
transform({ $inc, adapter, data: item, fields, globalSlug, operation, validateRelationships })
}
return
}
@@ -424,6 +430,11 @@ export const transform = ({
data.id = data.id.toHexString()
}
// Handle BigInt conversion for custom ID fields of type 'number'
if (adapter.useBigIntForNumberIDs && typeof data.id === 'bigint') {
data.id = Number(data.id)
}
if (!adapter.allowAdditionalKeys) {
stripFields({
config,
@@ -438,13 +449,27 @@ export const transform = ({
data.globalType = globalSlug
}
const sanitize: TraverseFieldsCallback = ({ field, ref: incomingRef }) => {
const sanitize: TraverseFieldsCallback = ({ field, parentPath, ref: incomingRef }) => {
if (!incomingRef || typeof incomingRef !== 'object') {
return
}
const ref = incomingRef as Record<string, unknown>
if (
$inc &&
field.type === 'number' &&
operation === 'write' &&
field.name in ref &&
ref[field.name]
) {
const value = ref[field.name]
if (value && typeof value === 'object' && '$inc' in value && typeof value.$inc === 'number') {
$inc[`${parentPath}${field.name}`] = value.$inc
delete ref[field.name]
}
}
if (field.type === 'date' && operation === 'read' && field.name in ref && ref[field.name]) {
if (config.localization && fieldShouldBeLocalized({ field, parentIsLocalized })) {
const fieldRef = ref[field.name] as Record<string, unknown>

View File

@@ -1,6 +1,6 @@
{
"name": "@payloadcms/db-postgres",
"version": "3.47.0",
"version": "3.49.0",
"description": "The officially supported Postgres database adapter for Payload",
"homepage": "https://payloadcms.com",
"repository": {

View File

@@ -17,6 +17,7 @@ import {
deleteVersions,
destroy,
find,
findDistinct,
findGlobal,
findGlobalVersions,
findMigrationDir,
@@ -120,6 +121,7 @@ export function postgresAdapter(args: Args): DatabaseAdapterObj<PostgresAdapter>
json: true,
},
fieldConstraints: {},
findDistinct,
generateSchema: createSchemaGenerator({
columnToCodeConverter,
corePackageSuffix: 'pg-core',
@@ -178,8 +180,6 @@ export function postgresAdapter(args: Args): DatabaseAdapterObj<PostgresAdapter>
find,
findGlobal,
findGlobalVersions,
updateJobs,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
findOne,
findVersions,
indexes: new Set<string>(),
@@ -197,6 +197,7 @@ export function postgresAdapter(args: Args): DatabaseAdapterObj<PostgresAdapter>
queryDrafts,
rawRelations: {},
rawTables: {},
updateJobs,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
rejectInitializing,
requireDrizzleKit,

View File

@@ -1,6 +1,6 @@
{
"name": "@payloadcms/db-sqlite",
"version": "3.47.0",
"version": "3.49.0",
"description": "The officially supported SQLite database adapter for Payload",
"homepage": "https://payloadcms.com",
"repository": {

View File

@@ -18,6 +18,7 @@ import {
deleteVersions,
destroy,
find,
findDistinct,
findGlobal,
findGlobalVersions,
findMigrationDir,
@@ -40,24 +41,26 @@ import {
updateVersion,
upsert,
} from '@payloadcms/drizzle'
import {
columnToCodeConverter,
convertPathToJSONTraversal,
countDistinct,
createJSONQuery,
defaultDrizzleSnapshot,
deleteWhere,
dropDatabase,
execute,
init,
insert,
requireDrizzleKit,
} from '@payloadcms/drizzle/sqlite'
import { like, notLike } from 'drizzle-orm'
import { createDatabaseAdapter, defaultBeginTransaction } from 'payload'
import { fileURLToPath } from 'url'
import type { Args, SQLiteAdapter } from './types.js'
import { columnToCodeConverter } from './columnToCodeConverter.js'
import { connect } from './connect.js'
import { countDistinct } from './countDistinct.js'
import { convertPathToJSONTraversal } from './createJSONQuery/convertPathToJSONTraversal.js'
import { createJSONQuery } from './createJSONQuery/index.js'
import { defaultDrizzleSnapshot } from './defaultSnapshot.js'
import { deleteWhere } from './deleteWhere.js'
import { dropDatabase } from './dropDatabase.js'
import { execute } from './execute.js'
import { init } from './init.js'
import { insert } from './insert.js'
import { requireDrizzleKit } from './requireDrizzleKit.js'
const filename = fileURLToPath(import.meta.url)
@@ -68,8 +71,8 @@ export function sqliteAdapter(args: Args): DatabaseAdapterObj<SQLiteAdapter> {
function adapter({ payload }: { payload: Payload }) {
const migrationDir = findMigrationDir(args.migrationDir)
let resolveInitializing
let rejectInitializing
let resolveInitializing: () => void = () => {}
let rejectInitializing: () => void = () => {}
const initializing = new Promise<void>((res, rej) => {
resolveInitializing = res
@@ -101,6 +104,7 @@ export function sqliteAdapter(args: Args): DatabaseAdapterObj<SQLiteAdapter> {
json: true,
},
fieldConstraints: {},
findDistinct,
generateSchema: createSchemaGenerator({
columnToCodeConverter,
corePackageSuffix: 'sqlite-core',
@@ -129,7 +133,6 @@ export function sqliteAdapter(args: Args): DatabaseAdapterObj<SQLiteAdapter> {
updateJobs,
updateMany,
versionsSuffix: args.versionsSuffix || '_v',
// DatabaseAdapter
beginTransaction: args.transactionOptions ? beginTransaction : defaultBeginTransaction(),
commitTransaction,
@@ -164,7 +167,6 @@ export function sqliteAdapter(args: Args): DatabaseAdapterObj<SQLiteAdapter> {
find,
findGlobal,
findGlobalVersions,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
findOne,
findVersions,
indexes: new Set<string>(),
@@ -180,10 +182,8 @@ export function sqliteAdapter(args: Args): DatabaseAdapterObj<SQLiteAdapter> {
packageName: '@payloadcms/db-sqlite',
payload,
queryDrafts,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
rejectInitializing,
requireDrizzleKit,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
resolveInitializing,
rollbackTransaction,
updateGlobal,

View File

@@ -1,10 +1,12 @@
import type { Client, Config, ResultSet } from '@libsql/client'
import type { extendDrizzleTable, Operators } from '@payloadcms/drizzle'
import type { BaseSQLiteAdapter, BaseSQLiteArgs } from '@payloadcms/drizzle/sqlite'
import type { BuildQueryJoinAliases, DrizzleAdapter } from '@payloadcms/drizzle/types'
import type { DrizzleConfig, Relation, Relations, SQL } from 'drizzle-orm'
import type { LibSQLDatabase } from 'drizzle-orm/libsql'
import type {
AnySQLiteColumn,
SQLiteColumn,
SQLiteInsertOnConflictDoUpdateConfig,
SQLiteTableWithColumns,
SQLiteTransactionConfig,
@@ -55,23 +57,7 @@ export type Args = {
*/
blocksAsJSON?: boolean
client: Config
/** Generated schema from payload generate:db-schema file path */
generateSchemaOutputFile?: string
idType?: 'number' | 'uuid'
localesSuffix?: string
logger?: DrizzleConfig['logger']
migrationDir?: string
prodMigrations?: {
down: (args: MigrateDownArgs) => Promise<void>
name: string
up: (args: MigrateUpArgs) => Promise<void>
}[]
push?: boolean
relationshipsSuffix?: string
schemaName?: string
transactionOptions?: false | SQLiteTransactionConfig
versionsSuffix?: string
}
} & BaseSQLiteArgs
export type GenericColumns = {
[x: string]: AnySQLiteColumn
@@ -87,6 +73,7 @@ export type GenericTable = SQLiteTableWithColumns<{
export type GenericRelation = Relations<string, Record<string, Relation<string>>>
export type CountDistinct = (args: {
column?: SQLiteColumn<any>
db: LibSQLDatabase
joins: BuildQueryJoinAliases
tableName: string
@@ -140,45 +127,11 @@ type ResolveSchemaType<T> = 'schema' extends keyof T
type Drizzle = { $client: Client } & LibSQLDatabase<ResolveSchemaType<GeneratedDatabaseSchema>>
export type SQLiteAdapter = {
afterSchemaInit: SQLiteSchemaHook[]
autoIncrement: boolean
beforeSchemaInit: SQLiteSchemaHook[]
client: Client
clientConfig: Args['client']
countDistinct: CountDistinct
defaultDrizzleSnapshot: any
deleteWhere: DeleteWhere
drizzle: Drizzle
dropDatabase: DropDatabase
execute: Execute<unknown>
/**
* An object keyed on each table, with a key value pair where the constraint name is the key, followed by the dot-notation field name
* Used for returning properly formed errors from unique fields
*/
fieldConstraints: Record<string, Record<string, string>>
idType: Args['idType']
initializing: Promise<void>
insert: Insert
localesSuffix?: string
logger: DrizzleConfig['logger']
operators: Operators
prodMigrations?: {
down: (args: MigrateDownArgs) => Promise<void>
name: string
up: (args: MigrateUpArgs) => Promise<void>
}[]
push: boolean
rejectInitializing: () => void
relations: Record<string, GenericRelation>
relationshipsSuffix?: string
resolveInitializing: () => void
schema: Record<string, GenericRelation | GenericTable>
schemaName?: Args['schemaName']
tableNameMap: Map<string, string>
tables: Record<string, GenericTable>
transactionOptions: SQLiteTransactionConfig
versionsSuffix?: string
} & SQLiteDrizzleAdapter
} & BaseSQLiteAdapter &
SQLiteDrizzleAdapter
export type IDType = 'integer' | 'numeric' | 'text'

View File

@@ -1,6 +1,6 @@
{
"name": "@payloadcms/db-vercel-postgres",
"version": "3.47.0",
"version": "3.49.0",
"description": "Vercel Postgres adapter for Payload",
"homepage": "https://payloadcms.com",
"repository": {

View File

@@ -18,6 +18,7 @@ import {
deleteVersions,
destroy,
find,
findDistinct,
findGlobal,
findGlobalVersions,
findMigrationDir,
@@ -174,10 +175,9 @@ export function vercelPostgresAdapter(args: Args = {}): DatabaseAdapterObj<Verce
dropDatabase,
execute,
find,
findDistinct,
findGlobal,
findGlobalVersions,
readReplicaOptions: args.readReplicas,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
findOne,
findVersions,
init,
@@ -192,6 +192,7 @@ export function vercelPostgresAdapter(args: Args = {}): DatabaseAdapterObj<Verce
packageName: '@payloadcms/db-vercel-postgres',
payload,
queryDrafts,
readReplicaOptions: args.readReplicas,
// @ts-expect-error - vestiges of when tsconfig was not strict. Feel free to improve
rejectInitializing,
requireDrizzleKit,

View File

@@ -1,6 +1,6 @@
{
"name": "@payloadcms/drizzle",
"version": "3.47.0",
"version": "3.49.0",
"description": "A library of shared functions used by different payload database adapters",
"homepage": "https://payloadcms.com",
"repository": {
@@ -30,6 +30,11 @@
"types": "./src/exports/postgres.ts",
"default": "./src/exports/postgres.ts"
},
"./sqlite": {
"import": "./src/exports/sqlite.ts",
"types": "./src/exports/sqlite.ts",
"default": "./src/exports/sqlite.ts"
},
"./types": {
"import": "./src/exports/types-deprecated.ts",
"types": "./src/exports/types-deprecated.ts",
@@ -82,6 +87,11 @@
"types": "./dist/exports/postgres.d.ts",
"default": "./dist/exports/postgres.js"
},
"./sqlite": {
"import": "./dist/exports/sqlite.js",
"types": "./dist/exports/sqlite.d.ts",
"default": "./dist/exports/sqlite.js"
},
"./types": {
"import": "./dist/exports/types-deprecated.js",
"types": "./dist/exports/types-deprecated.d.ts",

View File

@@ -23,7 +23,7 @@ export async function createGlobalVersion<T extends TypeWithID>(
updatedAt,
versionData,
}: CreateGlobalVersionArgs,
) {
): Promise<TypeWithVersion<T>> {
const db = await getTransaction(this, req)
const global = this.payload.globals.config.find(({ slug }) => slug === globalSlug)

View File

@@ -24,7 +24,7 @@ export async function createVersion<T extends TypeWithID>(
updatedAt,
versionData,
}: CreateVersionArgs<T>,
) {
): Promise<TypeWithVersion<T>> {
const db = await getTransaction(this, req)
const collection = this.payload.collections[collectionSlug].config
const defaultTableName = toSnakeCase(collection.slug)

View File

@@ -6,41 +6,58 @@ import toSnakeCase from 'to-snake-case'
import type { DrizzleAdapter } from './types.js'
import { findMany } from './find/findMany.js'
import { buildQuery } from './queries/buildQuery.js'
import { getTransaction } from './utilities/getTransaction.js'
export const deleteMany: DeleteMany = async function deleteMany(
this: DrizzleAdapter,
{ collection, req, where },
{ collection, req, where: whereArg },
) {
const db = await getTransaction(this, req)
const collectionConfig = this.payload.collections[collection].config
const tableName = this.tableNameMap.get(toSnakeCase(collectionConfig.slug))
const result = await findMany({
const table = this.tables[tableName]
const { joins, where } = buildQuery({
adapter: this,
fields: collectionConfig.flattenedFields,
joins: false,
limit: 0,
locale: req?.locale,
page: 1,
pagination: false,
req,
tableName,
where,
where: whereArg,
})
const ids = []
let whereToUse = where
result.docs.forEach((data) => {
ids.push(data.id)
})
if (ids.length > 0) {
await this.deleteWhere({
db,
if (joins?.length) {
// Difficult to support joins (through where referencing other tables) in deleteMany. => 2 separate queries.
// We can look into supporting this using one single query (through a subquery) in the future, though that's difficult to do in a generic way.
const result = await findMany({
adapter: this,
fields: collectionConfig.flattenedFields,
joins: false,
limit: 0,
locale: req?.locale,
page: 1,
pagination: false,
req,
select: {
id: true,
},
tableName,
where: inArray(this.tables[tableName].id, ids),
where: whereArg,
})
whereToUse = inArray(
table.id,
result.docs.map((doc) => doc.id),
)
}
await this.deleteWhere({
db,
tableName,
where: whereToUse,
})
}
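/*
 * Condensed shape of the two paths above (illustrative, details elided): when
 * buildQuery reports no joins, the where clause is handed straight to
 * deleteWhere for a single roundtrip; otherwise matching IDs are selected first.
 *
 *   if (!joins?.length) {
 *     await this.deleteWhere({ db, tableName, where })              // 1 roundtrip
 *   } else {
 *     const { docs } = await findMany({ ...args, select: { id: true } })
 *     await this.deleteWhere({
 *       db,
 *       tableName,
 *       where: inArray(table.id, docs.map((doc) => doc.id)),        // 2 roundtrips
 *     })
 *   }
 */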

View File

@@ -0,0 +1,12 @@
export { columnToCodeConverter } from '../sqlite/columnToCodeConverter.js'
export { countDistinct } from '../sqlite/countDistinct.js'
export { convertPathToJSONTraversal } from '../sqlite/createJSONQuery/convertPathToJSONTraversal.js'
export { createJSONQuery } from '../sqlite/createJSONQuery/index.js'
export { defaultDrizzleSnapshot } from '../sqlite/defaultSnapshot.js'
export { deleteWhere } from '../sqlite/deleteWhere.js'
export { dropDatabase } from '../sqlite/dropDatabase.js'
export { execute } from '../sqlite/execute.js'
export { init } from '../sqlite/init.js'
export { insert } from '../sqlite/insert.js'
export { requireDrizzleKit } from '../sqlite/requireDrizzleKit.js'
export * from '../sqlite/types.js'

View File

@@ -44,7 +44,7 @@ export const buildFindManyArgs = ({
select,
tableName,
versions,
}: BuildFindQueryArgs): Record<string, unknown> => {
}: BuildFindQueryArgs): Result => {
const result: Result = {
extras: {},
with: {},
@@ -134,5 +134,12 @@ export const buildFindManyArgs = ({
result.with._locales = _locales
}
// Delete properties that are empty
for (const key of Object.keys(result)) {
if (!Object.keys(result[key]).length) {
delete result[key]
}
}
return result
}
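/*
 * For example (assumed intermediate state): a result of
 *   { extras: {}, columns: { id: true }, with: {} }
 * is trimmed to { columns: { id: true } } before being handed to Drizzle,
 * presumably so no-op keys are not passed along with the query.
 */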

View File

@@ -0,0 +1,108 @@
import type { FindDistinct, SanitizedCollectionConfig } from 'payload'
import toSnakeCase from 'to-snake-case'
import type { DrizzleAdapter, GenericColumn } from './types.js'
import { buildQuery } from './queries/buildQuery.js'
import { selectDistinct } from './queries/selectDistinct.js'
import { getTransaction } from './utilities/getTransaction.js'
import { DistinctSymbol } from './utilities/rawConstraint.js'
export const findDistinct: FindDistinct = async function (this: DrizzleAdapter, args) {
const db = await getTransaction(this, args.req)
const collectionConfig: SanitizedCollectionConfig =
this.payload.collections[args.collection].config
const page = args.page || 1
const offset = args.limit ? (page - 1) * args.limit : undefined
const tableName = this.tableNameMap.get(toSnakeCase(collectionConfig.slug))
const { joins, orderBy, selectFields, where } = buildQuery({
adapter: this,
fields: collectionConfig.flattenedFields,
locale: args.locale,
sort: args.sort ?? args.field,
tableName,
where: {
and: [
args.where ?? {},
{
[args.field]: {
equals: DistinctSymbol,
},
},
],
},
})
orderBy.pop()
const selectDistinctResult = await selectDistinct({
adapter: this,
db,
forceRun: true,
joins,
query: ({ query }) => {
query = query.orderBy(() => orderBy.map(({ column, order }) => order(column)))
if (args.limit) {
if (offset) {
query = query.offset(offset)
}
query = query.limit(args.limit)
}
return query
},
selectFields: {
_selected: selectFields['_selected'],
...(orderBy[0].column === selectFields['_selected'] ? {} : { _order: orderBy[0].column }),
} as Record<string, GenericColumn>,
tableName,
where,
})
const values = selectDistinctResult.map((each) => ({
[args.field]: (each as Record<string, any>)._selected,
}))
if (args.limit) {
const totalDocs = await this.countDistinct({
column: selectFields['_selected'],
db,
joins,
tableName,
where,
})
const totalPages = Math.ceil(totalDocs / args.limit)
const hasPrevPage = page > 1
const hasNextPage = totalPages > page
const pagingCounter = (page - 1) * args.limit + 1
return {
hasNextPage,
hasPrevPage,
limit: args.limit,
nextPage: hasNextPage ? page + 1 : null,
page,
pagingCounter,
prevPage: hasPrevPage ? page - 1 : null,
totalDocs,
totalPages,
values,
}
}
return {
hasNextPage: false,
hasPrevPage: false,
limit: 0,
page: 1,
pagingCounter: 1,
totalDocs: values.length,
totalPages: 1,
values,
}
}
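/*
 * Worked pagination example for the branch above (assumed numbers): with
 * totalDocs = 25 distinct values, limit = 10, page = 2:
 *
 *   offset        = (2 - 1) * 10 = 10       // rows skipped before this page
 *   totalPages    = Math.ceil(25 / 10) = 3
 *   pagingCounter = (2 - 1) * 10 + 1 = 11   // 1-based index of the first value
 *   hasPrevPage   = true (page > 1)
 *   hasNextPage   = true (totalPages > page)
 */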

View File

@@ -9,7 +9,7 @@ import { findMany } from './find/findMany.js'
export async function findOne<T extends TypeWithID>(
this: DrizzleAdapter,
{ collection, draftsEnabled, joins, locale, req, select, where }: FindOneArgs,
): Promise<T> {
): Promise<null | T> {
const collectionConfig: SanitizedCollectionConfig = this.payload.collections[collection].config
const tableName = this.tableNameMap.get(toSnakeCase(collectionConfig.slug))

View File

@@ -12,6 +12,7 @@ export { deleteVersions } from './deleteVersions.js'
export { destroy } from './destroy.js'
export { find } from './find.js'
export { chainMethods } from './find/chainMethods.js'
export { findDistinct } from './findDistinct.js'
export { findGlobal } from './findGlobal.js'
export { findGlobalVersions } from './findGlobalVersions.js'
export { findMigrationDir } from './findMigrationDir.js'

View File

@@ -6,13 +6,13 @@ import type { BasePostgresAdapter, CountDistinct } from './types.js'
export const countDistinct: CountDistinct = async function countDistinct(
this: BasePostgresAdapter,
{ db, joins, tableName, where },
{ column, db, joins, tableName, where },
) {
// When we don't have any joins - use a simple COUNT(*) query.
if (joins.length === 0) {
const countResult = await db
.select({
count: count(),
count: column ? count(sql`DISTINCT ${column}`) : count(),
})
.from(this.tables[tableName])
.where(where)
@@ -26,12 +26,12 @@ export const countDistinct: CountDistinct = async function countDistinct(
})
.from(this.tables[tableName])
.where(where)
.groupBy(this.tables[tableName].id)
.groupBy(column || this.tables[tableName].id)
.limit(1)
.$dynamic()
joins.forEach(({ condition, table }) => {
query = query.leftJoin(table as PgTableWithColumns<any>, condition)
joins.forEach(({ type, condition, table }) => {
query = query[type ?? 'leftJoin'](table as PgTableWithColumns<any>, condition)
})
// When we have any joins, we need to count each individual ID only once.
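/*
 * With a column supplied, the no-joins branch above emits roughly this SQL
 * (illustrative; identifiers are placeholders):
 *
 *   SELECT COUNT(DISTINCT "my_column") FROM "my_table" WHERE ...
 *
 * and the join branch groups by that column while honoring the join type
 * from the query builder instead of hardcoding a LEFT JOIN.
 */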

View File

@@ -20,6 +20,7 @@ import type {
UniqueConstraintBuilder,
} from 'drizzle-orm/pg-core'
import type { PgTableFn } from 'drizzle-orm/pg-core/table'
import type { SQLiteColumn } from 'drizzle-orm/sqlite-core'
import type { Payload, PayloadRequest } from 'payload'
import type { ClientConfig, QueryResult } from 'pg'
@@ -64,6 +65,7 @@ export type GenericRelation = Relations<string, Record<string, Relation<string>>
export type PostgresDB = NodePgDatabase<Record<string, unknown>>
export type CountDistinct = (args: {
column?: PgColumn<any> | SQLiteColumn<any>
db: PostgresDB | TransactionPg
joins: BuildQueryJoinAliases
tableName: string

View File

@@ -10,6 +10,7 @@ import type { DrizzleAdapter, GenericColumn } from '../types.js'
import type { BuildQueryJoinAliases } from './buildQuery.js'
import { getNameFromDrizzleTable } from '../utilities/getNameFromDrizzleTable.js'
import { DistinctSymbol } from '../utilities/rawConstraint.js'
import { buildAndOrConditions } from './buildAndOrConditions.js'
import { getTableColumnFromPath } from './getTableColumnFromPath.js'
import { sanitizeQueryValue } from './sanitizeQueryValue.js'
@@ -108,6 +109,17 @@ export function parseParams({
value: val,
})
const resolvedColumn =
rawColumn ||
(aliasTable && tableName === getNameFromDrizzleTable(table)
? aliasTable[columnName]
: table[columnName])
if (val === DistinctSymbol) {
selectFields['_selected'] = resolvedColumn
break
}
queryConstraints.forEach(({ columnName: col, table: constraintTable, value }) => {
if (typeof value === 'string' && value.indexOf('%') > -1) {
constraints.push(adapter.operators.like(constraintTable[col], value))
@@ -207,7 +219,10 @@ export function parseParams({
if (
operator === 'like' &&
(field.type === 'number' || table[columnName].columnType === 'PgUUID')
(field.type === 'number' ||
field.type === 'relationship' ||
field.type === 'upload' ||
table[columnName].columnType === 'PgUUID')
) {
operator = 'equals'
}
@@ -281,12 +296,6 @@ export function parseParams({
break
}
const resolvedColumn =
rawColumn ||
(aliasTable && tableName === getNameFromDrizzleTable(table)
? aliasTable[columnName]
: table[columnName])
if (queryOperator === 'not_equals' && queryValue !== null) {
constraints.push(
or(

View File

@@ -112,9 +112,14 @@ export const sanitizeQueryValue = ({
if (field.type === 'date' && operator !== 'exists') {
if (typeof val === 'string') {
formattedValue = new Date(val).toISOString()
if (Number.isNaN(Date.parse(formattedValue))) {
return { operator, value: undefined }
if (val === 'null' || val === '') {
formattedValue = null
} else {
const date = new Date(val)
if (Number.isNaN(date.getTime())) {
return { operator, value: undefined }
}
formattedValue = date.toISOString()
}
} else if (typeof val === 'number') {
formattedValue = new Date(val).toISOString()
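/*
 * Effect of the change above on string inputs (illustrative values):
 *
 *   'null' or ''   -> formattedValue = null               // explicit empty
 *   '2025-07-28'   -> '2025-07-28T00:00:00.000Z'          // valid date kept
 *   'not-a-date'   -> { operator, value: undefined }      // constraint dropped
 *
 * Previously, new Date(val).toISOString() ran before the validity check, so an
 * unparseable string threw a RangeError instead of being discarded.
 */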

View File

@@ -14,6 +14,7 @@ import type { BuildQueryJoinAliases } from './buildQuery.js'
type Args = {
adapter: DrizzleAdapter
db: DrizzleAdapter['drizzle'] | DrizzleTransaction
forceRun?: boolean
joins: BuildQueryJoinAliases
query?: (args: { query: SQLiteSelect }) => SQLiteSelect
selectFields: Record<string, GenericColumn>
@@ -27,13 +28,14 @@ type Args = {
export const selectDistinct = ({
adapter,
db,
forceRun,
joins,
query: queryModifier = ({ query }) => query,
selectFields,
tableName,
where,
}: Args): QueryPromise<{ id: number | string }[] & Record<string, GenericColumn>> => {
if (Object.keys(joins).length > 0) {
if (forceRun || Object.keys(joins).length > 0) {
let query: SQLiteSelect
const table = adapter.tables[tableName]
@@ -54,8 +56,8 @@ export const selectDistinct = ({
query = query.where(where)
}
joins.forEach(({ condition, table }) => {
query = query.leftJoin(table, condition)
joins.forEach(({ type, condition, table }) => {
query = query[type ?? 'leftJoin'](table, condition)
})
return queryModifier({

View File

@@ -1,4 +1,4 @@
import type { ColumnToCodeConverter } from '@payloadcms/drizzle/types'
import type { ColumnToCodeConverter } from '../types.js'
export const columnToCodeConverter: ColumnToCodeConverter = ({
adapter,

View File

@@ -2,17 +2,17 @@ import type { SQLiteSelect } from 'drizzle-orm/sqlite-core'
import { count, sql } from 'drizzle-orm'
import type { CountDistinct, SQLiteAdapter } from './types.js'
import type { BaseSQLiteAdapter, CountDistinct } from './types.js'
export const countDistinct: CountDistinct = async function countDistinct(
this: SQLiteAdapter,
{ db, joins, tableName, where },
this: BaseSQLiteAdapter,
{ column, db, joins, tableName, where },
) {
// When we don't have any joins - use a simple COUNT(*) query.
if (joins.length === 0) {
const countResult = await db
.select({
count: count(),
count: column ? count(sql`DISTINCT ${column}`) : count(),
})
.from(this.tables[tableName])
.where(where)
@@ -25,12 +25,12 @@ export const countDistinct: CountDistinct = async function countDistinct(
})
.from(this.tables[tableName])
.where(where)
.groupBy(this.tables[tableName].id)
.groupBy(column ?? this.tables[tableName].id)
.limit(1)
.$dynamic()
joins.forEach(({ condition, table }) => {
query = query.leftJoin(table, condition)
joins.forEach(({ type, condition, table }) => {
query = query[type ?? 'leftJoin'](table, condition)
})
// When we have any joins, we need to count each individual ID only once.

View File

@@ -1,4 +1,4 @@
import type { CreateJSONQueryArgs } from '@payloadcms/drizzle/types'
import type { CreateJSONQueryArgs } from '../../types.js'
type FromArrayArgs = {
isRoot?: true
@@ -74,7 +74,7 @@ export const createJSONQuery = ({
treatAsArray,
value,
}: CreateJSONQueryArgs): string => {
if (treatAsArray?.includes(pathSegments[1]!) && table) {
if (treatAsArray?.includes(pathSegments[1]) && table) {
return fromArray({
operator,
pathSegments,

View File

@@ -1,9 +1,9 @@
import type { DeleteWhere, SQLiteAdapter } from './types.js'
import type { BaseSQLiteAdapter, DeleteWhere } from './types.js'
export const deleteWhere: DeleteWhere = async function (
// Here 'this' is not a parameter. See:
// https://www.typescriptlang.org/docs/handbook/2/classes.html#this-parameters
this: SQLiteAdapter,
this: BaseSQLiteAdapter,
{ db, tableName, where },
) {
const table = this.tables[tableName]

View File

@@ -1,15 +1,15 @@
import type { Row } from '@libsql/client'
import type { DropDatabase, SQLiteAdapter } from './types.js'
import type { BaseSQLiteAdapter, DropDatabase } from './types.js'
const getTables = (adapter: SQLiteAdapter) => {
const getTables = (adapter: BaseSQLiteAdapter) => {
return adapter.client.execute(`SELECT name
FROM sqlite_master
WHERE type = 'table'
AND name NOT LIKE 'sqlite_%';`)
}
const dropTables = (adapter: SQLiteAdapter, rows: Row[]) => {
const dropTables = (adapter: BaseSQLiteAdapter, rows: Row[]) => {
const multi = `
PRAGMA foreign_keys = OFF;\n
${rows.map(({ name }) => `DROP TABLE IF EXISTS ${name as string}`).join(';\n ')};\n

View File

@@ -3,13 +3,13 @@ import { sql } from 'drizzle-orm'
import type { Execute } from './types.js'
export const execute: Execute<any> = function execute({ db, drizzle, raw, sql: statement }) {
const executeFrom = (db ?? drizzle)!
const executeFrom = (db ?? drizzle)
if (raw) {
const result = executeFrom.run(sql.raw(raw))
return result
} else {
const result = executeFrom.run(statement!)
const result = executeFrom.run(statement)
return result
}
}

Some files were not shown because too many files have changed in this diff.