fix(db-mongodb): improve compatibility with Firestore database (#12763)

### What?

Adds four new options to the `mongooseAdapter`:

```typescript
  useJoinAggregations?: boolean  /* The big one */
  useAlternativeDropDatabase?: boolean
  useBigIntForNumberIDs?: boolean
  usePipelineInSortLookup?: boolean
```

Also exports a new `compatabilityOptions` object from
`@payloadcms/db-mongodb`, where each key is a MongoDB-compatible database
and the value is the recommended `mongooseAdapter` settings for
compatibility.
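
For example, to apply the recommended Firestore settings (a minimal sketch mirroring the docs example added in this PR; `DATABASE_URI` stands in for your connection string):

```typescript
import { buildConfig } from 'payload'

import { compatabilityOptions, mongooseAdapter } from '@payloadcms/db-mongodb'

export default buildConfig({
  // ...the rest of your config
  db: mongooseAdapter({
    url: process.env.DATABASE_URI,
    // Spread the recommended Firestore settings; any individual option
    // can still be overridden by listing it after the spread.
    ...compatabilityOptions.firestore,
  }),
})
```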

### Why?

When using Firestore and visiting
`/admin/collections/media/payload-folders`, we get:

```
MongoServerError: invalid field(s) in lookup: [let, pipeline], only lookup(from, localField, foreignField, as) is supported
```

Firestore doesn't support the full MongoDB aggregation API that Payload
uses when building the aggregations that populate join fields.

There are several other compatibility issues with Firestore:
- The invalid `pipeline` property is used in the `$lookup` aggregation
in `buildSortParams`
- Firestore only supports number IDs of type `Long`, but Mongoose converts custom ID fields of type `number` to `Double`
- Firestore does not support the `dropDatabase` command
- Firestore does not support the `createIndex` command (not addressed in
this PR)

### How?

```typescript
useJoinAggregations?: boolean  /* The big one */
```
When this is `false`, we skip the `buildJoinAggregation()` pipeline and resolve join fields through multiple `find` queries instead. This could potentially make join fields work on AWS DocumentDB and Azure Cosmos DB as well, but I have not tested with either of those databases.
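
Conceptually, the fallback replaces the correlated `$lookup` with a plain query per joined collection and regroups the results in memory. A simplified sketch (hypothetical `authors`/`posts` collections; the real `resolveJoins()` also handles locales, drafts, pagination, and polymorphic joins):

```typescript
import type { MongooseAdapter } from '@payloadcms/db-mongodb'

// Sketch: populate an `authors.posts` join field without aggregations.
// `adapter.collections` maps collection slugs to their Mongoose models.
export async function resolvePostsJoin(
  adapter: MongooseAdapter,
  authorDocs: Record<string, unknown>[],
): Promise<void> {
  const authorIDs = authorDocs.map((doc) => doc._id)

  // One extra query per joined collection, filtered with a simple `$in`.
  const posts = await adapter.collections.posts
    .find({ author: { $in: authorIDs } }, { _id: 1, author: 1 })
    .sort({ createdAt: -1 })
    .lean()

  // Group the joined documents back onto their parents in memory.
  const byAuthor = new Map<string, unknown[]>()
  for (const post of posts) {
    const key = String(post.author)
    byAuthor.set(key, [...(byAuthor.get(key) ?? []), post._id])
  }

  for (const doc of authorDocs) {
    doc.posts = { docs: byAuthor.get(String(doc._id)) ?? [] }
  }
}
```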

```typescript
useAlternativeDropDatabase?: boolean
```
When `true`, the adapter monkey-patches (replaces) the `dropDatabase` function so that it calls `collection.deleteMany({})` on every collection instead of sending a single `dropDatabase` command to the database.

```typescript
useBigIntForNumberIDs?: boolean
```
When `true`, the adapter uses `mongoose.Schema.Types.BigInt` for custom ID fields of type `number`, which is stored as a Firestore `Long` behind the scenes.
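
For illustration, the `_id` type ends up being chosen roughly like this (simplified from the `buildSchema` change in this PR):

```typescript
import mongoose from 'mongoose'

// Sketch: choose the _id type for a collection with a custom `number` ID field.
const useBigIntForNumberIDs = true

const PostSchema = new mongoose.Schema({
  // `BigInt` is persisted as a 64-bit integer (a Firestore `Long`),
  // whereas `Number` is persisted as a double, which Firestore rejects for IDs.
  _id: useBigIntForNumberIDs ? mongoose.Schema.Types.BigInt : Number,
  title: String,
})
```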

```typescript
usePipelineInSortLookup?: boolean
```
When `false`, the adapter modifies the `sortAggregation` pipeline in `buildSortParams()` so that the `pipeline` property is not used in the `$lookup` aggregation. This results in slightly worse performance when sorting by relationship properties.
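
For reference, Firestore's `$lookup` only accepts `from`, `localField`, `foreignField`, and `as`, so the lookup's `$project` pipeline is dropped and the relationship field is copied into a temporary top-level field instead. A simplified sketch of roughly the stages that get built, using the `movies`/`directors` example from the tests:

```typescript
import type { PipelineStage } from 'mongoose'

// Sketch: sort movies by `director.name` with `usePipelineInSortLookup: false`.
const sortAggregation: PipelineStage[] = [
  // Copy the relationship field into a temporary top-level field for the lookup.
  { $addFields: { __director_lookup: '$director' } },
  {
    $lookup: {
      as: '__director',
      foreignField: '_id',
      from: 'directors',
      localField: '__director_lookup',
      // No `let`/`pipeline` keys here, so the whole director doc is pulled in.
    },
  },
  // Remove the temporary field once the lookup has run.
  { $unset: '__director_lookup' },
  { $sort: { '__director.name': 1 } },
]
```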

### Limitations

This PR does not add support for transactions or creating indexes in Firestore.

### Fixes

Fixes a bug (and adds a test) where you couldn't sort by multiple properties of a relationship field.
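
The fix is exercised by a new integration test; assuming an initialized `payload` instance, the equivalent API call looks like this (using the `movies`/`directors` collections from the test suite):

```typescript
// Sort movies by two properties of the related director document.
const res = await payload.find({
  collection: 'movies',
  depth: 0,
  sort: ['director.name', '-director.createdAt'],
})
```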

### Future work

1. Firestore supports simple `$lookup` aggregations, but other databases might not. A `useSortAggregations` option could be added to disable the use of aggregations in sorting entirely.

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sasha <64744993+r1tsuu@users.noreply.github.com>
Elliott W
2025-07-17 01:02:43 +05:45
committed by GitHub
parent e6da384a43
commit 41cff6d436
20 changed files with 938 additions and 40 deletions

View File

@@ -153,6 +153,7 @@ jobs:
matrix:
database:
- mongodb
- firestore
- postgres
- postgres-custom-schema
- postgres-uuid

View File

@@ -30,18 +30,22 @@ export default buildConfig({
## Options
| Option | Description |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `autoPluralization` | Tell Mongoose to auto-pluralize any collection names if it encounters any singular words used as collection `slug`s. |
| `connectOptions` | Customize MongoDB connection options. Payload will connect to your MongoDB database using default options which you can override and extend to include all the [options](https://mongoosejs.com/docs/connections.html#options) available to mongoose. |
| `collectionsSchemaOptions` | Customize Mongoose schema options for collections. |
| `disableIndexHints` | Set to true to disable hinting to MongoDB to use 'id' as index. This is currently done when counting documents for pagination, as it increases the speed of the count function used in that query. Disabling this optimization might fix some problems with AWS DocumentDB. Defaults to false |
| `migrationDir` | Customize the directory that migrations are stored. |
| `transactionOptions` | An object with configuration properties used in [transactions](https://www.mongodb.com/docs/manual/core/transactions/) or `false` which will disable the use of transactions. |
| `collation` | Enable language-specific string comparison with customizable options. Available on MongoDB 3.4+. Defaults locale to "en". Example: `{ strength: 3 }`. For a full list of collation options and their definitions, see the [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/collation/). |
| `allowAdditionalKeys` | By default, Payload strips all additional keys from MongoDB data that don't exist in the Payload schema. If you have some data that you want to include to the result but it doesn't exist in Payload, you can set this to `true`. Be careful as Payload access control _won't_ work for this data. |
| `allowIDOnCreate` | Set to `true` to use the `id` passed in data on the create API operations without using a custom ID field. |
| `disableFallbackSort` | Set to `true` to disable the adapter adding a fallback sort when sorting by non-unique fields, this can affect performance in some cases but it ensures a consistent order of results. |
| Option | Description |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `autoPluralization` | Tell Mongoose to auto-pluralize any collection names if it encounters any singular words used as collection `slug`s. |
| `connectOptions` | Customize MongoDB connection options. Payload will connect to your MongoDB database using default options which you can override and extend to include all the [options](https://mongoosejs.com/docs/connections.html#options) available to mongoose. |
| `collectionsSchemaOptions` | Customize Mongoose schema options for collections. |
| `disableIndexHints` | Set to true to disable hinting to MongoDB to use 'id' as index. This is currently done when counting documents for pagination, as it increases the speed of the count function used in that query. Disabling this optimization might fix some problems with AWS DocumentDB. Defaults to false |
| `migrationDir` | Customize the directory that migrations are stored. |
| `transactionOptions` | An object with configuration properties used in [transactions](https://www.mongodb.com/docs/manual/core/transactions/) or `false` which will disable the use of transactions. |
| `collation` | Enable language-specific string comparison with customizable options. Available on MongoDB 3.4+. Defaults locale to "en". Example: `{ strength: 3 }`. For a full list of collation options and their definitions, see the [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/collation/). |
| `allowAdditionalKeys` | By default, Payload strips all additional keys from MongoDB data that don't exist in the Payload schema. If you have some data that you want to include to the result but it doesn't exist in Payload, you can set this to `true`. Be careful as Payload access control _won't_ work for this data. |
| `allowIDOnCreate` | Set to `true` to use the `id` passed in data on the create API operations without using a custom ID field. |
| `disableFallbackSort` | Set to `true` to disable the adapter adding a fallback sort when sorting by non-unique fields, this can affect performance in some cases but it ensures a consistent order of results. |
| `useAlternativeDropDatabase` | Set to `true` to use an alternative `dropDatabase` implementation that calls `collection.deleteMany({})` on every collection instead of sending a raw `dropDatabase` command. Payload only uses `dropDatabase` for testing purposes. Defaults to `false`. |
| `useBigIntForNumberIDs` | Set to `true` to use `BigInt` for custom ID fields of type `'number'`. Useful for databases that don't support `double` or `int32` IDs. Defaults to `false`. |
| `useJoinAggregations` | Set to `false` to disable join aggregations (which use correlated subqueries) and instead populate join fields via multiple `find` queries. Defaults to `true`. |
| `usePipelineInSortLookup` | Set to `false` to disable the use of `pipeline` in the `$lookup` aggregation in sorting. Defaults to `true`. |
## Access to Mongoose models
@@ -56,9 +60,21 @@ You can access Mongoose models as follows:
## Using other MongoDB implementations
Limitations with [DocumentDB](https://aws.amazon.com/documentdb/) and [Azure Cosmos DB](https://azure.microsoft.com/en-us/products/cosmos-db):
You can import the `compatabilityOptions` object to get the recommended settings for other MongoDB implementations. Since these databases aren't officially supported by payload, you may still encounter issues even with these settings (please create an issue or PR if you believe these options should be updated):
- For Azure Cosmos DB you must pass `transactionOptions: false` to the adapter options. Azure Cosmos DB does not support transactions that update two and more documents in different collections, which is a common case when using Payload (via hooks).
- For Azure Cosmos DB the root config property `indexSortableFields` must be set to `true`.
- The [Join Field](../fields/join) is not supported in DocumentDB and Azure Cosmos DB, as we internally use MongoDB aggregations to query data for that field, which are limited there. This can be changed in the future.
- For DocumentDB pass `disableIndexHints: true` to disable hinting to the DB to use `id` as index which can cause problems with DocumentDB.
```ts
import { mongooseAdapter, compatabilityOptions } from '@payloadcms/db-mongodb'
export default buildConfig({
db: mongooseAdapter({
url: process.env.DATABASE_URI,
// For example, if you're using firestore:
...compatabilityOptions.firestore,
}),
})
```
We export compatability options for [DocumentDB](https://aws.amazon.com/documentdb/), [Azure Cosmos DB](https://azure.microsoft.com/en-us/products/cosmos-db) and [Firestore](https://cloud.google.com/firestore/mongodb-compatibility/docs/overview). Known limitations:
- Azure Cosmos DB does not support transactions that update two or more documents in different collections, which is a common case when using Payload (via hooks).
- For Azure Cosmos DB, the root config property `indexSortableFields` must be set to `true`.

View File

@@ -112,6 +112,7 @@
"test:e2e:prod:ci": "pnpm prepare-run-test-against-prod:ci && pnpm runts ./test/runE2E.ts --prod",
"test:e2e:prod:ci:noturbo": "pnpm prepare-run-test-against-prod:ci && pnpm runts ./test/runE2E.ts --prod --no-turbo",
"test:int": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:int:firestore": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 PAYLOAD_DATABASE=firestore DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:int:postgres": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 PAYLOAD_DATABASE=postgres DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:int:sqlite": "cross-env NODE_OPTIONS=\"--no-deprecation --no-experimental-strip-types\" NODE_NO_WARNINGS=1 PAYLOAD_DATABASE=sqlite DISABLE_LOGGING=true jest --forceExit --detectOpenHandles --config=test/jest.config.js --runInBand",
"test:types": "tstyche",

View File

@@ -36,6 +36,25 @@ export const connect: Connect = async function connect(
try {
this.connection = (await mongoose.connect(urlToConnect, connectionOptions)).connection
if (this.useAlternativeDropDatabase) {
if (this.connection.db) {
// Firestore doesn't support dropDatabase, so we monkey patch
// dropDatabase to delete all documents from all collections instead
this.connection.db.dropDatabase = async function (): Promise<boolean> {
const existingCollections = await this.listCollections().toArray()
await Promise.all(
existingCollections.map(async (collectionInfo) => {
const collection = this.collection(collectionInfo.name)
await collection.deleteMany({})
}),
)
return true
}
this.connection.dropDatabase = async function () {
await this.db?.dropDatabase()
}
}
}
// If we are running a replica set with MongoDB Memory Server,
// wait until the replica set elects a primary before proceeding

View File

@@ -12,6 +12,7 @@ import { buildJoinAggregation } from './utilities/buildJoinAggregation.js'
import { buildProjectionFromSelect } from './utilities/buildProjectionFromSelect.js'
import { getCollection } from './utilities/getEntity.js'
import { getSession } from './utilities/getSession.js'
import { resolveJoins } from './utilities/resolveJoins.js'
import { transform } from './utilities/transform.js'
export const find: Find = async function find(
@@ -155,6 +156,16 @@ export const find: Find = async function find(
result = await Model.paginate(query, paginationOptions)
}
if (!this.useJoinAggregations) {
await resolveJoins({
adapter: this,
collectionSlug,
docs: result.docs as Record<string, unknown>[],
joins,
locale,
})
}
transform({
adapter: this,
data: result.docs,

View File

@@ -10,6 +10,7 @@ import { buildJoinAggregation } from './utilities/buildJoinAggregation.js'
import { buildProjectionFromSelect } from './utilities/buildProjectionFromSelect.js'
import { getCollection } from './utilities/getEntity.js'
import { getSession } from './utilities/getSession.js'
import { resolveJoins } from './utilities/resolveJoins.js'
import { transform } from './utilities/transform.js'
export const findOne: FindOne = async function findOne(
@@ -67,6 +68,16 @@ export const findOne: FindOne = async function findOne(
doc = await Model.findOne(query, {}, options)
}
if (doc && !this.useJoinAggregations) {
await resolveJoins({
adapter: this,
collectionSlug,
docs: [doc] as Record<string, unknown>[],
joins,
locale,
})
}
if (!doc) {
return null
}

View File

@@ -143,6 +143,29 @@ export interface Args {
/** The URL to connect to MongoDB or false to start payload and prevent connecting */
url: false | string
/**
* Set to `true` to use an alternative `dropDatabase` implementation that calls `collection.deleteMany({})` on every collection instead of sending a raw `dropDatabase` command.
* Payload only uses `dropDatabase` for testing purposes.
* @default false
*/
useAlternativeDropDatabase?: boolean
/**
* Set to `true` to use `BigInt` for custom ID fields of type `'number'`.
* Useful for databases that don't support `double` or `int32` IDs.
* @default false
*/
useBigIntForNumberIDs?: boolean
/**
* Set to `false` to disable join aggregations (which use correlated subqueries) and instead populate join fields via multiple `find` queries.
* @default true
*/
useJoinAggregations?: boolean
/**
* Set to `false` to disable the use of `pipeline` in the `$lookup` aggregation in sorting.
* @default true
*/
usePipelineInSortLookup?: boolean
}
export type MongooseAdapter = {
@@ -159,6 +182,10 @@ export type MongooseAdapter = {
up: (args: MigrateUpArgs) => Promise<void>
}[]
sessions: Record<number | string, ClientSession>
useAlternativeDropDatabase: boolean
useBigIntForNumberIDs: boolean
useJoinAggregations: boolean
usePipelineInSortLookup: boolean
versions: {
[slug: string]: CollectionModel
}
@@ -194,6 +221,10 @@ declare module 'payload' {
updateVersion: <T extends TypeWithID = TypeWithID>(
args: { options?: QueryOptions } & UpdateVersionArgs<T>,
) => Promise<TypeWithVersion<T>>
useAlternativeDropDatabase: boolean
useBigIntForNumberIDs: boolean
useJoinAggregations: boolean
usePipelineInSortLookup: boolean
versions: {
[slug: string]: CollectionModel
}
@@ -214,6 +245,10 @@ export function mongooseAdapter({
prodMigrations,
transactionOptions = {},
url,
useAlternativeDropDatabase = false,
useBigIntForNumberIDs = false,
useJoinAggregations = true,
usePipelineInSortLookup = true,
}: Args): DatabaseAdapterObj {
function adapter({ payload }: { payload: Payload }) {
const migrationDir = findMigrationDir(migrationDirArg)
@@ -279,6 +314,10 @@ export function mongooseAdapter({
updateOne,
updateVersion,
upsert,
useAlternativeDropDatabase,
useBigIntForNumberIDs,
useJoinAggregations,
usePipelineInSortLookup,
})
}
@@ -290,6 +329,8 @@ export function mongooseAdapter({
}
}
export { compatabilityOptions } from './utilities/compatabilityOptions.js'
/**
* Attempt to find migrations directory.
*

View File

@@ -143,7 +143,12 @@ export const buildSchema = (args: {
const idField = schemaFields.find((field) => fieldAffectsData(field) && field.name === 'id')
if (idField) {
fields = {
_id: idField.type === 'number' ? Number : String,
_id:
idField.type === 'number'
? payload.db.useBigIntForNumberIDs
? mongoose.Schema.Types.BigInt
: Number
: String,
}
schemaFields = schemaFields.filter(
(field) => !(fieldAffectsData(field) && field.name === 'id'),
@@ -900,7 +905,11 @@ const getRelationshipValueType = (field: RelationshipField | UploadField, payloa
}
if (customIDType === 'number') {
return mongoose.Schema.Types.Number
if (payload.db.useBigIntForNumberIDs) {
return mongoose.Schema.Types.BigInt
} else {
return mongoose.Schema.Types.Number
}
}
return mongoose.Schema.Types.String

View File

@@ -99,31 +99,57 @@ const relationshipSort = ({
sortFieldPath = foreignFieldPath.localizedPath.replace('<locale>', locale)
}
if (
!sortAggregation.some((each) => {
return '$lookup' in each && each.$lookup.as === `__${path}`
})
) {
const as = `__${relationshipPath.replace(/\./g, '__')}`
// If we have not already sorted on this relationship yet, we need to add a lookup stage
if (!sortAggregation.some((each) => '$lookup' in each && each.$lookup.as === as)) {
let localField = versions ? `version.${relationshipPath}` : relationshipPath
if (!adapter.usePipelineInSortLookup) {
const flattenedField = `__${localField.replace(/\./g, '__')}_lookup`
sortAggregation.push({
$addFields: {
[flattenedField]: `$${localField}`,
},
})
localField = flattenedField
}
sortAggregation.push({
$lookup: {
as: `__${path}`,
as,
foreignField: '_id',
from: foreignCollection.Model.collection.name,
localField: versions ? `version.${relationshipPath}` : relationshipPath,
pipeline: [
{
$project: {
[sortFieldPath]: true,
localField,
...(adapter.usePipelineInSortLookup && {
pipeline: [
{
$project: {
[sortFieldPath]: true,
},
},
},
],
],
}),
},
})
sort[`__${path}.${sortFieldPath}`] = sortDirection
return true
if (!adapter.usePipelineInSortLookup) {
sortAggregation.push({
$unset: localField,
})
}
}
if (adapter.usePipelineInSortLookup) {
const lookup = sortAggregation.find(
(each) => '$lookup' in each && each.$lookup.as === as,
) as PipelineStage.Lookup
const pipeline = lookup.$lookup.pipeline![0] as PipelineStage.Project
pipeline.$project[sortFieldPath] = true
}
sort[`${as}.${sortFieldPath}`] = sortDirection
return true
}
}

View File

@@ -12,6 +12,7 @@ import { buildJoinAggregation } from './utilities/buildJoinAggregation.js'
import { buildProjectionFromSelect } from './utilities/buildProjectionFromSelect.js'
import { getCollection } from './utilities/getEntity.js'
import { getSession } from './utilities/getSession.js'
import { resolveJoins } from './utilities/resolveJoins.js'
import { transform } from './utilities/transform.js'
export const queryDrafts: QueryDrafts = async function queryDrafts(
@@ -158,6 +159,17 @@ export const queryDrafts: QueryDrafts = async function queryDrafts(
result = await Model.paginate(versionQuery, paginationOptions)
}
if (!this.useJoinAggregations) {
await resolveJoins({
adapter: this,
collectionSlug,
docs: result.docs as Record<string, unknown>[],
joins,
locale,
versions: true,
})
}
transform({
adapter: this,
data: result.docs,

View File

@@ -76,7 +76,11 @@ export const aggregatePaginate = async ({
countPromise = Model.estimatedDocumentCount(query)
} else {
const hint = adapter.disableIndexHints !== true ? { _id: 1 } : undefined
countPromise = Model.countDocuments(query, { collation, hint, session })
countPromise = Model.countDocuments(query, {
collation,
session,
...(hint ? { hint } : {}),
})
}
}

View File

@@ -44,6 +44,9 @@ export const buildJoinAggregation = async ({
projection,
versions,
}: BuildJoinAggregationArgs): Promise<PipelineStage[] | undefined> => {
if (!adapter.useJoinAggregations) {
return
}
if (
(Object.keys(collectionConfig.joins).length === 0 &&
collectionConfig.polymorphicJoins.length == 0) ||

View File

@@ -0,0 +1,25 @@
import type { Args } from '../index.js'
/**
* Each key is a mongo-compatible database and the value
* is the recommended `mongooseAdapter` settings for compatability.
*/
export const compatabilityOptions = {
cosmosdb: {
transactionOptions: false,
useJoinAggregations: false,
usePipelineInSortLookup: false,
},
documentdb: {
disableIndexHints: true,
},
firestore: {
disableIndexHints: true,
ensureIndexes: false,
transactionOptions: false,
useAlternativeDropDatabase: true,
useBigIntForNumberIDs: true,
useJoinAggregations: false,
usePipelineInSortLookup: false,
},
} satisfies Record<string, Partial<Args>>

View File

@@ -0,0 +1,647 @@
import type { JoinQuery, SanitizedJoins, Where } from 'payload'
import {
appendVersionToQueryKey,
buildVersionCollectionFields,
combineQueries,
getQueryDraftsSort,
} from 'payload'
import { fieldShouldBeLocalized } from 'payload/shared'
import type { MongooseAdapter } from '../index.js'
import { buildQuery } from '../queries/buildQuery.js'
import { buildSortParam } from '../queries/buildSortParam.js'
import { transform } from './transform.js'
export type ResolveJoinsArgs = {
/** The MongoDB adapter instance */
adapter: MongooseAdapter
/** The slug of the collection being queried */
collectionSlug: string
/** Array of documents to resolve joins for */
docs: Record<string, unknown>[]
/** Join query specifications (which joins to resolve and how) */
joins?: JoinQuery
/** Optional locale for localized queries */
locale?: string
/** Optional projection for the join query */
projection?: Record<string, true>
/** Whether to resolve versions instead of published documents */
versions?: boolean
}
/**
* Resolves join relationships for a collection of documents.
* This function fetches related documents based on join configurations and
* attaches them to the original documents with pagination support.
*/
export async function resolveJoins({
adapter,
collectionSlug,
docs,
joins,
locale,
projection,
versions = false,
}: ResolveJoinsArgs): Promise<void> {
// Early return if no joins are specified or no documents to process
if (!joins || docs.length === 0) {
return
}
// Get the collection configuration from the adapter
const collectionConfig = adapter.payload.collections[collectionSlug]?.config
if (!collectionConfig) {
return
}
// Build a map of join paths to their configurations for quick lookup
// This flattens the nested join structure into a single map keyed by join path
const joinMap: Record<string, { targetCollection: string } & SanitizedJoin> = {}
// Add regular joins
for (const [target, joinList] of Object.entries(collectionConfig.joins)) {
for (const join of joinList) {
joinMap[join.joinPath] = { ...join, targetCollection: target }
}
}
// Add polymorphic joins
for (const join of collectionConfig.polymorphicJoins || []) {
// For polymorphic joins, we use the collections array as the target
joinMap[join.joinPath] = { ...join, targetCollection: join.field.collection as string }
}
// Process each requested join concurrently
const joinPromises = Object.entries(joins).map(async ([joinPath, joinQuery]) => {
if (!joinQuery) {
return null
}
// If a projection is provided, and the join path is not in the projection, skip it
if (projection && !projection[joinPath]) {
return null
}
// Get the join definition from our map
const joinDef = joinMap[joinPath]
if (!joinDef) {
return null
}
// Normalize collections to always be an array for unified processing
const allCollections = Array.isArray(joinDef.field.collection)
? joinDef.field.collection
: [joinDef.field.collection]
// Use the provided locale or fall back to the default locale for localized fields
const localizationConfig = adapter.payload.config.localization
const effectiveLocale =
locale ||
(typeof localizationConfig === 'object' &&
localizationConfig &&
localizationConfig.defaultLocale)
// Extract relationTo filter from the where clause to determine which collections to query
const relationToFilter = extractRelationToFilter(joinQuery.where || {})
// Determine which collections to query based on relationTo filter
const collections = relationToFilter
? allCollections.filter((col) => relationToFilter.includes(col))
: allCollections
// Check if this is a polymorphic collection join (where field.collection is an array)
const isPolymorphicJoin = Array.isArray(joinDef.field.collection)
// Apply pagination settings
const limit = joinQuery.limit ?? joinDef.field.defaultLimit ?? 10
const page = joinQuery.page ?? 1
const skip = (page - 1) * limit
// Process collections concurrently
const collectionPromises = collections.map(async (joinCollectionSlug) => {
const targetConfig = adapter.payload.collections[joinCollectionSlug]?.config
if (!targetConfig) {
return null
}
const useDrafts = versions && Boolean(targetConfig.versions?.drafts)
let JoinModel
if (useDrafts) {
JoinModel = adapter.versions[targetConfig.slug]
} else {
JoinModel = adapter.collections[targetConfig.slug]
}
if (!JoinModel) {
return null
}
// Extract all parent document IDs to use in the join query
const parentIDs = docs.map((d) => (versions ? (d.parent ?? d._id ?? d.id) : (d._id ?? d.id)))
// Build the base query
let whereQuery: null | Record<string, unknown> = null
whereQuery = isPolymorphicJoin
? filterWhereForCollection(
joinQuery.where || {},
targetConfig.flattenedFields,
true, // exclude relationTo for individual collections
)
: joinQuery.where || {}
// Skip this collection if the WHERE clause cannot be satisfied for polymorphic collection joins
if (whereQuery === null) {
return null
}
whereQuery = useDrafts
? await JoinModel.buildQuery({
locale,
payload: adapter.payload,
where: combineQueries(appendVersionToQueryKey(whereQuery as Where), {
latest: {
equals: true,
},
}),
})
: await buildQuery({
adapter,
collectionSlug: joinCollectionSlug,
fields: targetConfig.flattenedFields,
locale,
where: whereQuery as Where,
})
// Handle localized paths and version prefixes
let dbFieldName = joinDef.field.on
if (effectiveLocale && typeof localizationConfig === 'object' && localizationConfig) {
const pathSegments = joinDef.field.on.split('.')
const transformedSegments: string[] = []
const fields = useDrafts
? buildVersionCollectionFields(adapter.payload.config, targetConfig, true)
: targetConfig.flattenedFields
for (let i = 0; i < pathSegments.length; i++) {
const segment = pathSegments[i]!
transformedSegments.push(segment)
// Check if this segment corresponds to a localized field
const fieldAtSegment = fields.find((f) => f.name === segment)
if (fieldAtSegment && fieldAtSegment.localized) {
transformedSegments.push(effectiveLocale)
}
}
dbFieldName = transformedSegments.join('.')
}
// Add version prefix for draft queries
if (useDrafts) {
dbFieldName = `version.${dbFieldName}`
}
// Check if the target field is a polymorphic relationship
const isPolymorphic = joinDef.targetField
? Array.isArray(joinDef.targetField.relationTo)
: false
if (isPolymorphic) {
// For polymorphic relationships, we need to match both relationTo and value
whereQuery[`${dbFieldName}.relationTo`] = collectionSlug
whereQuery[`${dbFieldName}.value`] = { $in: parentIDs }
} else {
// For regular relationships and polymorphic collection joins
whereQuery[dbFieldName] = { $in: parentIDs }
}
// Build the sort parameters for the query
const fields = useDrafts
? buildVersionCollectionFields(adapter.payload.config, targetConfig, true)
: targetConfig.flattenedFields
const sort = buildSortParam({
adapter,
config: adapter.payload.config,
fields,
locale,
sort: useDrafts
? getQueryDraftsSort({
collectionConfig: targetConfig,
sort: joinQuery.sort || joinDef.field.defaultSort || targetConfig.defaultSort,
})
: joinQuery.sort || joinDef.field.defaultSort || targetConfig.defaultSort,
timestamps: true,
})
const projection = buildJoinProjection(dbFieldName, useDrafts, sort)
const [results, dbCount] = await Promise.all([
JoinModel.find(whereQuery, projection, {
sort,
...(isPolymorphicJoin ? {} : { limit, skip }),
}).lean(),
isPolymorphicJoin ? Promise.resolve(0) : JoinModel.countDocuments(whereQuery),
])
const count = isPolymorphicJoin ? results.length : dbCount
transform({
adapter,
data: results,
fields: useDrafts
? buildVersionCollectionFields(adapter.payload.config, targetConfig, false)
: targetConfig.fields,
operation: 'read',
})
// Return results with collection info for grouping
return {
collectionSlug: joinCollectionSlug,
count,
dbFieldName,
results,
sort,
useDrafts,
}
})
const collectionResults = await Promise.all(collectionPromises)
// Group the results by parent ID
const grouped: Record<
string,
{
docs: Record<string, unknown>[]
sort: Record<string, string>
}
> = {}
let totalCount = 0
for (const collectionResult of collectionResults) {
if (!collectionResult) {
continue
}
const { collectionSlug, count, dbFieldName, results, sort, useDrafts } = collectionResult
totalCount += count
for (const result of results) {
if (useDrafts) {
result.id = result.parent
}
const parentValues = getByPathWithArrays(result, dbFieldName) as (
| { relationTo: string; value: number | string }
| number
| string
)[]
if (parentValues.length === 0) {
continue
}
for (let parentValue of parentValues) {
if (!parentValue) {
continue
}
if (typeof parentValue === 'object') {
parentValue = parentValue.value
}
const joinData = {
relationTo: collectionSlug,
value: result.id,
}
const parentKey = parentValue as string
if (!grouped[parentKey]) {
grouped[parentKey] = {
docs: [],
sort,
}
}
// Always store the ObjectID reference in polymorphic format
grouped[parentKey].docs.push({
...result,
__joinData: joinData,
})
}
}
}
for (const results of Object.values(grouped)) {
results.docs.sort((a, b) => {
for (const [fieldName, sortOrder] of Object.entries(results.sort)) {
const sort = sortOrder === 'asc' ? 1 : -1
const aValue = a[fieldName] as Date | number | string
const bValue = b[fieldName] as Date | number | string
if (aValue < bValue) {
return -1 * sort
}
if (aValue > bValue) {
return 1 * sort
}
}
return 0
})
results.docs = results.docs.map(
(doc) => (isPolymorphicJoin ? doc.__joinData : doc.id) as Record<string, unknown>,
)
}
// Determine if the join field should be localized
const localeSuffix =
fieldShouldBeLocalized({
field: joinDef.field,
parentIsLocalized: joinDef.parentIsLocalized,
}) &&
adapter.payload.config.localization &&
effectiveLocale
? `.${effectiveLocale}`
: ''
// Adjust the join path with locale suffix if needed
const localizedJoinPath = `${joinPath}${localeSuffix}`
return {
grouped,
isPolymorphicJoin,
joinQuery,
limit,
localizedJoinPath,
page,
skip,
totalCount,
}
})
// Wait for all join operations to complete
const joinResults = await Promise.all(joinPromises)
// Process the results and attach them to documents
for (const joinResult of joinResults) {
if (!joinResult) {
continue
}
const { grouped, isPolymorphicJoin, joinQuery, limit, localizedJoinPath, skip, totalCount } =
joinResult
// Attach the joined data to each parent document
for (const doc of docs) {
const id = (versions ? (doc.parent ?? doc._id ?? doc.id) : (doc._id ?? doc.id)) as string
const all = grouped[id]?.docs || []
// Calculate the slice for pagination
// When limit is 0, it means unlimited - return all results
const slice = isPolymorphicJoin
? limit === 0
? all
: all.slice(skip, skip + limit)
: // For non-polymorphic joins, we assume that page and limit were applied at the database level
all
// Create the join result object with pagination metadata
const value: Record<string, unknown> = {
docs: slice,
hasNextPage: limit === 0 ? false : totalCount > skip + slice.length,
}
// Include total count if requested
if (joinQuery.count) {
value.totalDocs = totalCount
}
// Navigate to the correct nested location in the document and set the join data
// This handles nested join paths like "user.posts" by creating intermediate objects
const segments = localizedJoinPath.split('.')
let ref: Record<string, unknown>
if (versions) {
if (!doc.version) {
doc.version = {}
}
ref = doc.version as Record<string, unknown>
} else {
ref = doc
}
for (let i = 0; i < segments.length - 1; i++) {
const seg = segments[i]!
if (!ref[seg]) {
ref[seg] = {}
}
ref = ref[seg] as Record<string, unknown>
}
// Set the final join data at the target path
ref[segments[segments.length - 1]!] = value
}
}
}
/**
* Extracts relationTo filter values from a WHERE clause
* @param where - The WHERE clause to search
* @returns Array of collection slugs if relationTo filter found, null otherwise
*/
function extractRelationToFilter(where: Record<string, unknown>): null | string[] {
if (!where || typeof where !== 'object') {
return null
}
// Check for direct relationTo conditions
if (where.relationTo && typeof where.relationTo === 'object') {
const relationTo = where.relationTo as Record<string, unknown>
if (relationTo.in && Array.isArray(relationTo.in)) {
return relationTo.in as string[]
}
if (relationTo.equals) {
return [relationTo.equals as string]
}
}
// Check for relationTo in logical operators
if (where.and && Array.isArray(where.and)) {
for (const condition of where.and) {
const result = extractRelationToFilter(condition)
if (result) {
return result
}
}
}
if (where.or && Array.isArray(where.or)) {
for (const condition of where.or) {
const result = extractRelationToFilter(condition)
if (result) {
return result
}
}
}
return null
}
/**
* Filters a WHERE clause to only include fields that exist in the target collection
* This is needed for polymorphic joins where different collections have different fields
* @param where - The original WHERE clause
* @param availableFields - The fields available in the target collection
* @param excludeRelationTo - Whether to exclude relationTo field (for individual collections)
* @returns A filtered WHERE clause, or null if the query cannot match this collection
*/
function filterWhereForCollection(
where: Record<string, unknown>,
availableFields: Array<{ name: string }>,
excludeRelationTo: boolean = false,
): null | Record<string, unknown> {
if (!where || typeof where !== 'object') {
return where
}
const fieldNames = new Set(availableFields.map((f) => f.name))
// Add special fields that are available in polymorphic relationships
if (!excludeRelationTo) {
fieldNames.add('relationTo')
}
const filtered: Record<string, unknown> = {}
for (const [key, value] of Object.entries(where)) {
if (key === 'and') {
// Handle AND operator - all conditions must be satisfiable
if (Array.isArray(value)) {
const filteredConditions: Record<string, unknown>[] = []
for (const condition of value) {
const filteredCondition = filterWhereForCollection(
condition,
availableFields,
excludeRelationTo,
)
// If any condition in AND cannot be satisfied, the whole AND fails
if (filteredCondition === null) {
return null
}
if (Object.keys(filteredCondition).length > 0) {
filteredConditions.push(filteredCondition)
}
}
if (filteredConditions.length > 0) {
filtered[key] = filteredConditions
}
}
} else if (key === 'or') {
// Handle OR operator - at least one condition must be satisfiable
if (Array.isArray(value)) {
const filteredConditions = value
.map((condition) =>
filterWhereForCollection(condition, availableFields, excludeRelationTo),
)
.filter((condition) => condition !== null && Object.keys(condition).length > 0)
if (filteredConditions.length > 0) {
filtered[key] = filteredConditions
}
// If no OR conditions can be satisfied, we still continue (OR is more permissive)
}
} else if (key === 'relationTo' && excludeRelationTo) {
// Skip relationTo field for non-polymorphic collections
continue
} else if (fieldNames.has(key)) {
// Include the condition if the field exists in this collection
filtered[key] = value
} else {
// Field doesn't exist in this collection - this makes the query unsatisfiable
return null
}
}
return filtered
}
type SanitizedJoin = SanitizedJoins[string][number]
/**
* Builds projection for join queries
*/
function buildJoinProjection(
baseFieldName: string,
useDrafts: boolean,
sort: Record<string, string>,
): Record<string, 1> {
const projection: Record<string, 1> = {
_id: 1,
[baseFieldName]: 1,
}
if (useDrafts) {
projection.parent = 1
}
for (const fieldName of Object.keys(sort)) {
projection[fieldName] = 1
}
return projection
}
/**
* Enhanced utility function to safely traverse nested object properties using dot notation
* Handles arrays by searching through array elements for matching values
* @param doc - The document to traverse
* @param path - Dot-separated path (e.g., "array.category")
* @returns Array of values found at the specified path (for arrays) or single value
*/
function getByPathWithArrays(doc: unknown, path: string): unknown[] {
const segments = path.split('.')
let current = doc
for (let i = 0; i < segments.length; i++) {
const segment = segments[i]!
if (current === undefined || current === null) {
return []
}
// Get the value at the current segment
const value = (current as Record<string, unknown>)[segment]
if (value === undefined || value === null) {
return []
}
// If this is the last segment, return the value(s)
if (i === segments.length - 1) {
return Array.isArray(value) ? value : [value]
}
// If the value is an array and we have more segments to traverse
if (Array.isArray(value)) {
const remainingPath = segments.slice(i + 1).join('.')
const results: unknown[] = []
// Search through each array element
for (const item of value) {
if (item && typeof item === 'object') {
const subResults = getByPathWithArrays(item, remainingPath)
results.push(...subResults)
}
}
return results
}
// Continue traversing
current = value
}
return []
}

View File

@@ -426,6 +426,11 @@ export const transform = ({
data.id = data.id.toHexString()
}
// Handle BigInt conversion for custom ID fields of type 'number'
if (adapter.useBigIntForNumberIDs && typeof data.id === 'bigint') {
data.id = Number(data.id)
}
if (!adapter.allowAdditionalKeys) {
stripFields({
config,

View File

@@ -21,6 +21,25 @@ export const allDatabaseAdapters = {
strength: 1,
},
})`,
firestore: `
import { mongooseAdapter, compatabilityOptions } from '@payloadcms/db-mongodb'
export const databaseAdapter = mongooseAdapter({
...compatabilityOptions.firestore,
url:
process.env.DATABASE_URI ||
process.env.MONGODB_MEMORY_SERVER_URI ||
'mongodb://127.0.0.1/payloadtests',
collation: {
strength: 1,
},
// The following options prevent some tests from failing.
// More work needed to get tests succeeding without these options.
ensureIndexes: true,
transactionOptions: {},
disableIndexHints: false,
useAlternativeDropDatabase: false,
})`,
postgres: `
import { postgresAdapter } from '@payloadcms/db-postgres'

View File

@@ -13,7 +13,7 @@ const dirname = path.dirname(filename)
const writeDBAdapter = process.env.WRITE_DB_ADAPTER !== 'false'
process.env.PAYLOAD_DROP_DATABASE = process.env.PAYLOAD_DROP_DATABASE || 'true'
if (process.env.PAYLOAD_DATABASE === 'mongodb') {
if (process.env.PAYLOAD_DATABASE === 'mongodb' || process.env.PAYLOAD_DATABASE === 'firestore') {
throw new Error('Not supported')
}

View File

@@ -1,5 +1,8 @@
import type { Payload } from 'payload'
export function isMongoose(_payload?: Payload) {
return _payload?.db?.name === 'mongoose' || ['mongodb'].includes(process.env.PAYLOAD_DATABASE)
return (
_payload?.db?.name === 'mongoose' ||
['firestore', 'mongodb'].includes(process.env.PAYLOAD_DATABASE)
)
}

View File

@@ -14,13 +14,17 @@ declare global {
*/
// eslint-disable-next-line no-restricted-exports
export default async () => {
if (process.env.DATABASE_URI) {
return
}
process.env.NODE_ENV = 'test'
process.env.PAYLOAD_DROP_DATABASE = 'true'
process.env.NODE_OPTIONS = '--no-deprecation'
process.env.DISABLE_PAYLOAD_HMR = 'true'
if (
(!process.env.PAYLOAD_DATABASE || process.env.PAYLOAD_DATABASE === 'mongodb') &&
(!process.env.PAYLOAD_DATABASE ||
['firestore', 'mongodb'].includes(process.env.PAYLOAD_DATABASE)) &&
!global._mongoMemoryServer
) {
console.log('Starting memory db...')

View File

@@ -38,7 +38,7 @@ const dirname = path.dirname(filename)
type EasierChained = { id: string; relation: EasierChained }
const mongoIt = process.env.PAYLOAD_DATABASE === 'mongodb' ? it : it.skip
const mongoIt = ['firestore', 'mongodb'].includes(process.env.PAYLOAD_DATABASE || '') ? it : it.skip
describe('Relationships', () => {
beforeAll(async () => {
@@ -791,6 +791,47 @@ describe('Relationships', () => {
expect(localized_res_2.docs).toStrictEqual([movie_1, movie_2])
})
it('should sort by multiple properties of a relationship', async () => {
await payload.delete({ collection: 'directors', where: {} })
await payload.delete({ collection: 'movies', where: {} })
const createDirector = {
collection: 'directors',
data: {
name: 'Dan',
},
} as const
const director_1 = await payload.create(createDirector)
const director_2 = await payload.create(createDirector)
const movie_1 = await payload.create({
collection: 'movies',
depth: 0,
data: { director: director_1.id, name: 'Some Movie 1' },
})
const movie_2 = await payload.create({
collection: 'movies',
depth: 0,
data: { director: director_2.id, name: 'Some Movie 2' },
})
const res_1 = await payload.find({
collection: 'movies',
sort: ['director.name', 'director.createdAt'],
depth: 0,
})
const res_2 = await payload.find({
collection: 'movies',
sort: ['director.name', '-director.createdAt'],
depth: 0,
})
expect(res_1.docs).toStrictEqual([movie_1, movie_2])
expect(res_2.docs).toStrictEqual([movie_2, movie_1])
})
it('should sort by a property of a hasMany relationship', async () => {
const movie1 = await payload.create({
collection: 'movies',