Chore/overview docs (#6906)

## Description

More progress on docs.
James Mikrut
2024-06-24 13:17:50 -04:00
committed by GitHub
parent 7f753fb3b5
commit effba3e45b
14 changed files with 307 additions and 201 deletions

View File

@@ -29,7 +29,7 @@ If a Collection supports [`Authentication`](/docs/authentication/overview), the
**Example Collection config:**
```ts
import { CollectionConfig } from 'payload/types';
import type { CollectionConfig } from 'payload';
export const Posts: CollectionConfig = {
slug: "posts",
@@ -86,7 +86,7 @@ Read access functions can return a boolean result or optionally return a [query
**Example:**
```ts
import { Access } from 'payload/config'
import type { Access } from 'payload'
const canReadPage: Access = ({ req: { user } }) => {
// allow authenticated users
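  if (user) return true

  // otherwise, return a query constraint so guests only see public docs
  // (`isPublic` is a hypothetical field, shown here for illustration)
  return {
    isPublic: {
      equals: true,
    },
  }
}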
@@ -118,7 +118,7 @@ Update access functions can return a boolean result or optionally return a [quer
**Example:**
```ts
import { Access } from 'payload/config'
import type { Access } from 'payload'
const canUpdateUser: Access = ({ req: { user }, id }) => {
// allow users with a role of 'admin'
@@ -144,7 +144,7 @@ Similarly to the Update function, returns a boolean or a [query constraint](/doc
**Example:**
```ts
import { Access } from 'payload/config'
import type { Access } from 'payload'
const canDeleteCustomer: Access = async ({ req, id }) => {
if (!id) {

View File

@@ -19,7 +19,7 @@ Field Access Control is specified with functions inside a field's config. All fi
**Example Collection config:**
```ts
import { CollectionConfig } from 'payload/types';
import { CollectionConfig } from 'payload';
export const Posts: CollectionConfig = {
slug: 'posts',

View File

@@ -18,7 +18,7 @@ You can define Global-level Access Control within each Global's `access` propert
**Example Global config:**
```ts
import { GlobalConfig } from 'payload/types'
import { GlobalConfig } from 'payload'
const Header: GlobalConfig = {
slug: 'header',

View File

@@ -45,7 +45,7 @@ Any of the features in Payload Cloud that require environment variables will aut
<Banner type="warning">
Note: For security reasons, any variables you wish to provide to the Admin panel must be prefixed
with `PAYLOAD_PUBLIC_`.  Learn more
with `NEXT_PUBLIC_`.  Learn more
[here](https://payloadcms.com/docs/admin/environment-vars).
</Banner>

View File

@@ -0,0 +1,81 @@
---
title: Express
label: Express
order: 60
desc: Payload utilizes Express middleware packages; you can customize how they work by passing in configuration options.
keywords: config, configuration, documentation, Content Management System, cms, headless, javascript, node, react, express
---
Payload utilizes a few Express-specific middleware packages within its own routers. You can customize how they work by passing in configuration options to the main Payload config's `express` property.
## Custom Middleware
Payload allows you to pass in custom Express middleware to be used on all of the routes it opens. This is useful for adding logging or any other custom functionality to your endpoints.
There are two exposed properties, each of which is an array of middleware functions:
- `preMiddleware` - runs before any of the Payload middleware
- `postMiddleware` - runs after all of the Payload middleware
```ts
{
  express: {
    preMiddleware: [
      (req, res, next) => {
        // do something
        next()
      },
    ],
    postMiddleware: [
      (req, res, next) => {
        // do something
        next()
      },
    ],
  },
}

// Example logging middleware function
const requestLoggerMiddleware = (req, res, next) => {
  req.payload.logger.info(`request: ${req.method} ${req.url}`)
  next()
}
```
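As a sketch of how that logging middleware might be wired up (assuming a v2-style config, where `buildConfig` comes from `payload/config` and accepts the `express` property):
```ts
import { buildConfig } from 'payload/config'

export default buildConfig({
  express: {
    preMiddleware: [
      (req, res, next) => {
        // `req.payload` is Payload's request-scoped client; depending on
        // your setup you may need to cast, since plain Express types
        // don't know about it
        req.payload.logger.info(`request: ${req.method} ${req.url}`)
        next()
      },
    ],
  },
  // ...the rest of your config
})
```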
## JSON
`express.json()` is used to parse JSON body content into JavaScript objects accessible on the Express `req`. Payload allows you to customize all of the `json` method's options. A common customization is increasing the maximum allowed JSON body size, which defaults to `2MB`.
**Example `payload.config.js` showing how to increase the max JSON size allowed to be sent to Payload endpoints:**
```js
{
express: {
json: {
limit: '4mb',
}
}
}
```
You can find a list of all available options that can be passed to `express.json()` [here](https://expressjs.com/en/api.html).
## Compression
Payload uses the `compression` package to optimize transfer size for all of the routes it opens, and you can pass customization options through the Payload config.
To customize compression, pass an options object to the `compression` key of the Payload config's `express` property.
**Example payload.config.js:**
```js
{
express: {
compression: {
// settings go here
}
}
}
```
Typically, the default options for this package are suitable. However, for a list of all available customization options, [click here](http://expressjs.com/en/resources/middleware/compression.html).
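As a hedged sketch, the `compression` package's own options (such as `threshold` and `level`) pass straight through:
```js
{
  express: {
    compression: {
      // skip compressing responses smaller than 1kb
      threshold: '1kb',
      // zlib compression level (0-9); higher trades CPU for size
      level: 6,
    },
  },
}
```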

View File

@@ -124,3 +124,90 @@ Drops all entities from the database and re-runs all migrations from scratch.
```text
npm run payload migrate:fresh
```
## When to run migrations
Depending on which database adapter you use, your migration workflow might differ subtly.
In relational databases, migrations will be **required** for non-development database environments. But with MongoDB, you might only need to run migrations once in a while (or never need them at all).
#### MongoDB
In MongoDB, you'll generally only need to run migrations when you change your database shape and have lots of existing data that you'd like to transform from Shape A to Shape B.
In this case, you can create a migration by running `pnpm payload migrate:create`, and then write the logic needed to migrate your documents to their new shape. You can then either run your migrations in CI before you build / deploy, or run them locally against your production database by using your production connection string on your local computer and running the `pnpm payload migrate` command.
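As a rough sketch (the argument types come from the MongoDB adapter; the `customers` collection and its fields are hypothetical), such a migration might look like:
```ts
import type { MigrateUpArgs, MigrateDownArgs } from '@payloadcms/db-mongodb'

export async function up({ payload }: MigrateUpArgs): Promise<void> {
  // hypothetical reshape: copy a flat `street` field into a new
  // `address` group on every existing `customers` document
  const { docs } = await payload.find({
    collection: 'customers',
    pagination: false,
  })

  for (const doc of docs) {
    await payload.update({
      collection: 'customers',
      id: doc.id,
      data: { address: { street: doc.street } },
    })
  }
}

export async function down({ payload }: MigrateDownArgs): Promise<void> {
  // invert the transformation here if you need to roll back
}
```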
#### Postgres
In relational databases like Postgres, migrations are more important, because each time you add a new field or a new collection, you'll need to update the shape of your database to match your Payload config (otherwise you'll see errors when trying to read or write your data).
That means that Postgres users of Payload should become familiar with the entire migration workflow from top to bottom.
Here is an overview of a common workflow for working locally against a development database, creating migrations, and then running migrations against your production database before deploying.
**1 - work locally using push mode**
Payload uses Drizzle ORM's powerful `push` mode to automatically sync data changes to your database for you while in development mode. By default, this is enabled and is the suggested workflow for using Postgres and Payload during local development.
You can disable this setting and solely use migrations to manage your local development database (pass `push: false` to your Postgres adapter), but if you do disable it, you may see frequent errors while running development mode. This is because Payload will have updated to your new data shape, but your local database will not have updated.
For this reason, we suggest that you leave `push` as its default setting and treat your local dev database as a sandbox.
For more information about push mode and prototyping in development, [click here](/docs/beta/database/postgres#prototyping-in-dev-mode).
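If you do opt out of push mode as described above, a minimal sketch of the adapter config (assuming the `@payloadcms/db-postgres` adapter and a `DATABASE_URI` environment variable) might be:
```ts
import { postgresAdapter } from '@payloadcms/db-postgres'

export const db = postgresAdapter({
  pool: {
    connectionString: process.env.DATABASE_URI,
  },
  // opt out of dev-mode push and manage the schema with migrations only
  push: false,
})
```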
The typical workflow in Payload is to build out your Payload configs, install plugins, and make progress in development mode, allowing Drizzle to push your changes to your local database for you. Once you're finished, you can create a migration.
But importantly, you do not need to run migrations against your development database, because Drizzle will have already pushed your changes to your database for you.
<Banner type="warning">
Warning: do not mix "push" and migrations with your local development database. If you use "push"
locally, and then try to migrate, Payload will throw a warning, telling you that these two methods
are not meant to be used interchangeably.
</Banner>
**2 - create a migration**
Once you're done with working in your Payload config, you can create a migration. It's best practice to try and complete a specific task or fully build out a feature before you create a migration.
But once you're ready, you can run `pnpm payload migrate:create`, which will perform the following steps for you:
- We will look for any existing migrations, and automatically generate SQL changes necessary to convert your schema from its prior state to the new state of your Payload config
- We will then create a new migration file in your `/migrations` folder that contains all the SQL necessary to be run
We won't immediately run this migration for you, however.
<Banner type="success">
Tip: migrations created by Payload are relatively programmatic in nature, so there should not be any surprises, but it's always a good idea to double-check the contents of the migration files before you check them in.
</Banner>
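For orientation only, a created migration file is a pair of `up` / `down` functions wrapping the generated SQL. Roughly (exact imports and arguments depend on your adapter version, and the DDL here is illustrative):
```ts
import type { MigrateUpArgs, MigrateDownArgs } from '@payloadcms/db-postgres'
import { sql } from 'drizzle-orm'

export async function up({ payload }: MigrateUpArgs): Promise<void> {
  // generated DDL, e.g. adding a column for a new field
  await payload.db.drizzle.execute(sql`
    ALTER TABLE "posts" ADD COLUMN "subtitle" varchar;
  `)
}

export async function down({ payload }: MigrateDownArgs): Promise<void> {
  await payload.db.drizzle.execute(sql`
    ALTER TABLE "posts" DROP COLUMN "subtitle";
  `)
}
```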
**3 - set up your build process to run migrations**
Generally, you want to run migrations before you build Payload for production. This typically happens in your CI pipeline and can usually be configured on platforms like Payload Cloud, Vercel, or Netlify by specifying your build script.
A common set of scripts in a `package.json`, set up to run migrations in CI, might look like this:
```js
"scripts": {
// For running in dev mode
"dev": "next dev --turbo",
// To build your Next + Payload app for production
"build": "next build",
// A "tie-in" to Payload's CLI for convenience
// this helps you run `pnpm payload migrate:create` and similar
"payload": "cross-env NODE_OPTIONS=--no-deprecation payload",
// This command is what you'd set your `build script` to.
// Notice how it runs `payload migrate` and then `pnpm build`?
// This will run all migrations for you before building, in your CI,
// against your production database
"ci": "payload migrate && pnpm build",
},
```
In the example above, we've specified a `ci` script which we can use as our "build script" on the platform we deploy to production with.
This will require that your build pipeline can connect to your database, and it will simply run the `payload migrate` command prior to starting the build process. By calling `payload migrate`, Payload will automatically execute any migrations in your `/migrations` folder that have not yet been executed against your production database, in the order that they were created.
If it fails, the deployment will be rejected. But now, with your build script set up to run your migrations, you will be all set! Next time you deploy, your CI will execute the required migrations for you, and your database will be caught up with the shape that your Payload config requires.

View File

@@ -8,11 +8,6 @@ keywords: Postgres, documentation, typescript, Content Management System, cms, h
To use Payload with Postgres, install the package `@payloadcms/db-postgres`. It leverages Drizzle ORM and `node-postgres` to interact with a Postgres database that you provide.
<Banner>
The Postgres database adapter is currently in beta. If you would like to help us test this
package, we'd love to hear if you find any bugs or issues!
</Banner>
It automatically manages changes to your database in development mode and exposes a full suite of migration controls you can leverage to keep other database environments in sync with your schema. DDL transformations are automatically generated.
To configure Payload to use Postgres, pass the `postgresAdapter` to your Payload config as follows:
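For example, a minimal sketch (assuming your connection string lives in a `DATABASE_URI` environment variable):
```ts
import { buildConfig } from 'payload'
import { postgresAdapter } from '@payloadcms/db-postgres'

export default buildConfig({
  db: postgresAdapter({
    pool: {
      connectionString: process.env.DATABASE_URI,
    },
  }),
  // ...the rest of your config
})
```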
@@ -81,17 +76,6 @@ Alternatively, you can disable `push` and rely solely on migrations to keep your
## Migration workflows
Migrations are extremely powerful thanks to the seamless way that Payload and Drizzle work together. Let's take the following scenario:
In Postgres, migrations are a fundamental aspect of working with Payload, and you should become familiar with how they work.
1. You are building your Payload config locally, with a local database used for testing.
1. You have left the default setting of `push` enabled, so every time you change your Payload config (add or remove fields, collections, etc.), Drizzle will automatically push changes to your local DB.
1. Once you're done with your changes, or have completed a feature, you can run `npm run payload migrate:create`.
1. Payload and Drizzle will look for any existing migrations, and automatically generate all SQL changes necessary to convert your schema from its prior state into the state of your current Payload config, and store the resulting DDL in a newly created migration.
1. Once you're ready to go to production, you will be able to run `npm run payload migrate` against your production database, which will apply any new migrations that have not yet run.
1. Now your production database is in sync with your Payload config!
<Banner type="warning">
Warning: do not mix "push" and migrations with your local development database. If you use "push"
locally, and then try to migrate, Payload will throw a warning, telling you that these two methods
are not meant to be used interchangeably.
</Banner>
For more information about migrations, [click here](/docs/beta/database/migrations#when-to-run-migrations).

View File

@@ -34,7 +34,7 @@ All collection Hook properties accept arrays of synchronous or asynchronous func
`collections/exampleHooks.js`
```ts
import { CollectionConfig } from 'payload/types';
import type { CollectionConfig } from 'payload';
export const ExampleHooks: CollectionConfig = {
slug: 'example-hooks',
@@ -70,7 +70,7 @@ The `beforeOperation` hook can be used to modify the arguments that operations a
Available Collection operations include `create`, `read`, `update`, `delete`, `login`, `refresh`, and `forgotPassword`.
```ts
import { CollectionBeforeOperationHook } from 'payload/types'
import type { CollectionBeforeOperationHook } from 'payload'
const beforeOperationHook: CollectionBeforeOperationHook = async ({
args, // original arguments passed into the operation
@@ -92,7 +92,7 @@ Please do note that this does not run before the client-side validation. If you
3. `validate` runs on the server
```ts
import { CollectionBeforeValidateHook } from 'payload/types'
import type { CollectionBeforeValidateHook } from 'payload'
const beforeValidateHook: CollectionBeforeValidateHook = async ({
data, // incoming data to update or create with
@@ -109,7 +109,7 @@ const beforeValidateHook: CollectionBeforeValidateHook = async ({
Immediately following validation, `beforeChange` hooks will run within `create` and `update` operations. At this stage, you can be confident that the data that will be saved to the document is valid in accordance to your field validations. You can optionally modify the shape of data to be saved.
```ts
import { CollectionBeforeChangeHook } from 'payload/types'
import type { CollectionBeforeChangeHook } from 'payload'
const beforeChangeHook: CollectionBeforeChangeHook = async ({
data, // incoming data to update or create with
@@ -126,7 +126,7 @@ const beforeChangeHook: CollectionBeforeChangeHook = async ({
After a document is created or updated, the `afterChange` hook runs. This hook is helpful for recalculating statistics such as total sales within a global, syncing user profile changes to a CRM, and more.
```ts
import { CollectionAfterChangeHook } from 'payload/types'
import type { CollectionAfterChangeHook } from 'payload'
const afterChangeHook: CollectionAfterChangeHook = async ({
doc, // full document data
@@ -143,7 +143,7 @@ const afterChangeHook: CollectionAfterChangeHook = async ({
Runs before `find` and `findByID` operations are transformed for output by `afterRead`. This hook fires before hidden fields are removed and before localized fields are flattened into the requested locale. Using this Hook will provide you with all locales and all hidden fields via the `doc` argument.
```ts
import { CollectionBeforeReadHook } from 'payload/types'
import type { CollectionBeforeReadHook } from 'payload'
const beforeReadHook: CollectionBeforeReadHook = async ({
doc, // full document data
@@ -159,7 +159,7 @@ const beforeReadHook: CollectionBeforeReadHook = async ({
Runs as the last step before documents are returned. Flattens locales, hides protected fields, and removes fields that users do not have access to.
```ts
import { CollectionAfterReadHook } from 'payload/types'
import type { CollectionAfterReadHook } from 'payload'
const afterReadHook: CollectionAfterReadHook = async ({
doc, // full document data
@@ -176,7 +176,7 @@ const afterReadHook: CollectionAfterReadHook = async ({
Runs before the `delete` operation. Returned values are discarded.
```ts
import { CollectionBeforeDeleteHook } from 'payload/types';
import type { CollectionBeforeDeleteHook } from 'payload';
const beforeDeleteHook: CollectionBeforeDeleteHook = async ({
req, // full Request object
@@ -189,7 +189,7 @@ const beforeDeleteHook: CollectionBeforeDeleteHook = async ({
Runs immediately after the `delete` operation removes records from the database. Returned values are discarded.
```ts
import { CollectionAfterDeleteHook } from 'payload/types';
import type { CollectionAfterDeleteHook } from 'payload';
const afterDeleteHook: CollectionAfterDeleteHook = async ({
req, // full Request object
@@ -205,7 +205,7 @@ The `afterOperation` hook can be used to modify the result of operations or exec
Available Collection operations include `create`, `find`, `findByID`, `update`, `updateByID`, `delete`, `deleteByID`, `login`, `refresh`, and `forgotPassword`.
```ts
import { CollectionAfterOperationHook } from 'payload/types'
import type { CollectionAfterOperationHook } from 'payload'
const afterOperationHook: CollectionAfterOperationHook = async ({
args, // arguments passed into the operation
@@ -222,7 +222,7 @@ const afterOperationHook: CollectionAfterOperationHook = async ({
For auth-enabled Collections, this hook runs during `login` operations where a user with the provided credentials exists, but before a token is generated and added to the response. You can optionally modify the user that is returned, or throw an error in order to deny the login operation.
```ts
import { CollectionBeforeLoginHook } from 'payload/types'
import type { CollectionBeforeLoginHook } from 'payload'
const beforeLoginHook: CollectionBeforeLoginHook = async ({
req, // full Request object
@@ -237,7 +237,7 @@ const beforeLoginHook: CollectionBeforeLoginHook = async ({
For auth-enabled Collections, this hook runs after successful `login` operations. You can optionally modify the user that is returned.
```ts
import { CollectionAfterLoginHook } from 'payload/types';
import type { CollectionAfterLoginHook } from 'payload';
const afterLoginHook: CollectionAfterLoginHook = async ({
req, // full Request object
@@ -251,7 +251,7 @@ const afterLoginHook: CollectionAfterLoginHook = async ({
For auth-enabled Collections, this hook runs after `logout` operations.
```ts
import { CollectionAfterLogoutHook } from 'payload/types';
import type { CollectionAfterLogoutHook } from 'payload';
const afterLogoutHook: CollectionAfterLogoutHook = async ({
req, // full Request object
@@ -263,7 +263,7 @@ const afterLogoutHook: CollectionAfterLogoutHook = async ({
For auth-enabled Collections, this hook runs after `refresh` operations.
```ts
import { CollectionAfterRefreshHook } from 'payload/types';
import type { CollectionAfterRefreshHook } from 'payload';
const afterRefreshHook: CollectionAfterRefreshHook = async ({
req, // full Request object
@@ -277,7 +277,7 @@ const afterRefreshHook: CollectionAfterRefreshHook = async ({
For auth-enabled Collections, this hook runs after `me` operations.
```ts
import { CollectionAfterMeHook } from 'payload/types';
import type { CollectionAfterMeHook } from 'payload';
const afterMeHook: CollectionAfterMeHook = async ({
req, // full Request object
@@ -290,7 +290,7 @@ const afterMeHook: CollectionAfterMeHook = async ({
For auth-enabled Collections, this hook runs after successful `forgotPassword` operations. Returned values are discarded.
```ts
import { CollectionAfterForgotPasswordHook } from 'payload/types'
import type { CollectionAfterForgotPasswordHook } from 'payload'
const afterForgotPasswordHook: CollectionAfterForgotPasswordHook = async ({
args, // arguments passed into the operation
@@ -319,5 +319,5 @@ import type {
CollectionAfterRefreshHook,
CollectionAfterMeHook,
CollectionAfterForgotPasswordHook,
} from 'payload/types'
} from 'payload'
```

View File

@@ -30,7 +30,7 @@ functionalities to be easily reusable across your projects.
Example field configuration:
```ts
import { Field } from 'payload/types';
import type { Field } from 'payload';
const ExampleField: Field = {
name: 'name',
@@ -107,7 +107,7 @@ Runs before the `update` operation. This hook allows you to pre-process or forma
validation.
```ts
import { Field } from 'payload/types'
import type { Field } from 'payload'
const usernameField: Field = {
name: 'username',
@@ -134,7 +134,7 @@ you can be confident that the field data that will be saved to the document is v
validations.
```ts
import { Field } from 'payload/types'
import type { Field } from 'payload'
const emailField: Field = {
name: 'email',
@@ -162,7 +162,7 @@ The `afterChange` hook is executed after a field's value has been changed and sa
for post-processing or triggering side effects based on the new value of the field.
```ts
import { Field } from 'payload/types'
import type { Field } from 'payload'
const membershipStatusField: Field = {
name: 'membershipStatus',
@@ -199,7 +199,7 @@ The `afterRead` hook is invoked after a field value is read from the database. T
transforming the field data for output.
```ts
import { Field } from 'payload/types'
import type { Field } from 'payload'
const dateField: Field = {
name: 'createdAt',
@@ -230,7 +230,7 @@ By Default, unique and required text fields Payload will append "- Copy" to the
Here is an example of a number field with a hook that increments the number to avoid unique constraint errors when duplicating a document:
```ts
import { Field } from 'payload/types'
import type { Field } from 'payload'
const numberField: Field = {
name: 'number',
@@ -249,7 +249,7 @@ const numberField: Field = {
Payload exports a type for field hooks which can be accessed and used as follows:
```ts
import type { FieldHook } from 'payload/types'
import type { FieldHook } from 'payload'
// Field hook type is a generic that takes three arguments:
// 1: The document type

View File

@@ -21,7 +21,7 @@ All Global Hook properties accept arrays of synchronous or asynchronous function
`globals/example-hooks.js`
```ts
import { GlobalConfig } from 'payload/types';
import type { GlobalConfig } from 'payload';
const ExampleHooks: GlobalConfig = {
slug: 'header',
@@ -43,7 +43,7 @@ const ExampleHooks: GlobalConfig = {
Runs before the `update` operation. This hook allows you to add or format data before the incoming data is validated.
```ts
import { GlobalBeforeValidateHook } from 'payload/types'
import type { GlobalBeforeValidateHook } from 'payload'
const beforeValidateHook: GlobalBeforeValidateHook = async ({
data, // incoming data to update or create with
@@ -59,7 +59,7 @@ const beforeValidateHook: GlobalBeforeValidateHook = async ({
Immediately following validation, `beforeChange` hooks will run within the `update` operation. At this stage, you can be confident that the data that will be saved to the document is valid in accordance to your field validations. You can optionally modify the shape of data to be saved.
```ts
import { GlobalBeforeChangeHook } from 'payload/types'
import type { GlobalBeforeChangeHook } from 'payload'
const beforeChangeHook: GlobalBeforeChangeHook = async ({
data, // incoming data to update or create with
@@ -75,7 +75,7 @@ const beforeChangeHook: GlobalBeforeChangeHook = async ({
After a global is updated, the `afterChange` hook runs. Use this hook to purge caches of your applications, sync site data to CRMs, and more.
```ts
import { GlobalAfterChangeHook } from 'payload/types'
import type { GlobalAfterChangeHook } from 'payload'
const afterChangeHook: GlobalAfterChangeHook = async ({
doc, // full document data
@@ -91,7 +91,7 @@ const afterChangeHook: GlobalAfterChangeHook = async ({
Runs before `findOne` global operation is transformed for output by `afterRead`. This hook fires before hidden fields are removed and before localized fields are flattened into the requested locale. Using this Hook will provide you with all locales and all hidden fields via the `doc` argument.
```ts
import { GlobalBeforeReadHook } from 'payload/types'
import type { GlobalBeforeReadHook } from 'payload'
const beforeReadHook: GlobalBeforeReadHook = async ({
doc, // full document data
@@ -104,7 +104,7 @@ const beforeReadHook: GlobalBeforeReadHook = async ({
Runs as the last step before a global is returned. Flattens locales, hides protected fields, and removes fields that users do not have access to.
```ts
import { GlobalAfterReadHook } from 'payload/types'
import type { GlobalAfterReadHook } from 'payload'
const afterReadHook: GlobalAfterReadHook = async ({
doc, // full document data
@@ -124,5 +124,5 @@ import type {
GlobalAfterChangeHook,
GlobalBeforeReadHook,
GlobalAfterReadHook,
} from 'payload/types'
} from 'payload'
```

View File

@@ -16,15 +16,16 @@ to consider these main aspects:
1. [Basics](#basics)
1. [Security](#security)
1. [Your MongoDB](#mongodb)
1. [Your database](#database)
1. [Permanent File Storage](#file-storage)
1. [Docker](#docker)
Payload can be deployed
## Basics
In order for Payload to run, it requires both the server code and the built admin panel. These will be the `dist`
and `build` directories by default. If you've used `create-payload-app` to create your project, executing the `build`
npm script will build both and output these directories.
Payload runs fully in Next.js, so the [Next.js build process](https://nextjs.org/docs/app/building-your-application/deploying) is used for building Payload. If you've used `create-payload-app` to create your project, executing the `build`
npm script will build Payload for production.
## Security
@@ -34,9 +35,7 @@ deploying to Production, it's a good idea to double-check that you are making pr
### The Secret Key
When you initialize Payload, you provide it with a `secret` property. This property should be impossible to guess and
extremely difficult for brute-force attacks to crack. Make sure your Production `secret` is a long, complex string. It's
often best practice to store it in an `env` file which is not checked into your Git repository, using `dotenv` to supply
it to your `payload.init` call.
extremely difficult for brute-force attacks to crack. Make sure your Production `secret` is a long, complex string.
### Double-check and thoroughly test all Access Control
@@ -58,34 +57,11 @@ wield that power responsibly before deploying to Production.
to perform appropriate actions.
</Banner>
### Building the Admin panel
### Running in Production
Before running in Production, you need to have built a production-ready copy of the Payload Admin panel. To do this,
Payload provides the `build` NPM script. You can use it by adding a `script` to your `package.json` file like this:
Depending on where you deploy Payload, you may need to provide a start script to your deployment platform in order to start up Payload in production mode.
`package.json`:
```json
{
"name": "project-name-here",
"scripts": {
"build": "payload build"
},
"dependencies": {
// your dependencies
}
}
```
Then, to build Payload, you would run `npm run build` in your project folder. A production-ready Payload Admin bundle will be
created in the `build` directory.
### Setting Node to Production
Make sure you set the environment variable `NODE_ENV` to `production`. Based on this variable, many Node packages
automatically optimize themselves. In production, Payload automatically disables
the [GraphQL Playground](/docs/graphql/overview#graphql-playground), serves a production-ready version of the Admin
panel, and other changes.
Note that this is different from running `next dev`. Generally, Next.js apps come configured with a `start` script which runs `next start`.
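For example, a typical `package.json` might pair the two scripts like this (a sketch of the default Next.js setup):
```json
{
  "scripts": {
    "build": "next build",
    "start": "next start"
  }
}
```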
### Secure Cookie Settings
@@ -95,37 +71,14 @@ can [enable secure cookies](/docs/authentication/config) in your Authentication-
### Preventing API Abuse
Payload comes with a robust set of built-in anti-abuse measures, such as locking out users after X amount of failed
login attempts, request rate limiting, GraphQL query complexity limits, max `depth` settings, and
login attempts, GraphQL query complexity limits, max `depth` settings, and
more. [Click here to learn more](/docs/production/preventing-abuse).
## MongoDB
## Database
Payload can be used with any MongoDB compatible database including AWS DocumentDB or Azure Cosmos DB.
Payload can be used with any Postgres database or MongoDB-compatible database including AWS DocumentDB or Azure Cosmos DB. Make sure your production environment has access to the database that Payload uses.
### Managing MongoDB yourself
If you are using a [persistent filesystem-based cloud host](#persistent-vs-ephemeral-filesystems) such as
a [DigitalOcean Droplet](https://www.digitalocean.com/products/droplets/) or
an [Amazon EC2](https://aws.amazon.com/ec2/?ec2-whats-new.sort-by=item.additionalFields.postDateTime&ec2-whats-new.sort-order=desc)
server, you might opt to install MongoDB directly on that server itself so that Node can communicate with it locally.
With this approach, you can benefit from faster response times, but scaling can become more involved as your app's user
base grows.
### Letting someone else do it
Alternatively, you can rely on a third-party MongoDB host such as [MongoDB Atlas](https://www.mongodb.com/). With Atlas
or a similar cloud provider, you can trust them to take care of your database's availability, security, redundancy, and
backups.
<Banner type="warning">
<strong>Note:</strong>
<br />
If versions are enabled and a collection has many documents you may need a minimum of an m10
mongoDB atlas cluster if you reach a sorting `exceeded memory limit` error to view a collection
list in the admin UI. The limitations of the m2 and m5 tier clusters are here: [Atlas M0 (Free
Cluster), M2, and M5
Limitations](https://www.mongodb.com/docs/atlas/reference/free-shared-limitations/?_ga=2.176267877.1329169847.1677683154-860992573.1647438381#operational-limitations).
</Banner>
Out of the box, Payload templates pass the `process.env.DATABASE_URI` environment variable to its database adapters, so make sure you've got that environment variable (and all others that you use) assigned in your deployment platform.
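For instance, a sketch using the MongoDB adapter:
```ts
import { buildConfig } from 'payload'
import { mongooseAdapter } from '@payloadcms/db-mongodb'

export default buildConfig({
  db: mongooseAdapter({
    // connection string supplied by your deployment platform
    url: process.env.DATABASE_URI || '',
  }),
  // ...the rest of your config
})
```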
### DocumentDB
@@ -174,52 +127,21 @@ perpetually.
with a persistent filesystem or have an integration with a third-party file host like Amazon S3.
</Banner>
### Using ephemeral filesystem providers like Heroku
### Using cloud storage providers
If you don't use Payload's `upload` functionality, you can go ahead and use Heroku or similar platform easily.
Everything will work exactly as you want it to.
If you don't use Payload's `upload` functionality, you can completely disregard this section.
But, if you do, and you still want to use an ephemeral filesystem provider, you can write a hook-based solution to
_copy_ the files your users upload to a more permanent storage solution like Amazon S3 or DigitalOcean Spaces.
But, if you do, and you still want to use an ephemeral filesystem provider, you can use one of Payload's official cloud storage plugins or write your own to save the files your users upload to a more permanent storage solution like Amazon S3 or DigitalOcean Spaces.
**To automatically send uploaded files to S3 or similar, you could:**
Payload provides a list of official cloud storage adapters for you to use:
- Write an asynchronous `beforeChange` hook for all Collections that support Uploads, which takes any uploaded `file`
from the `req` and sends it to an S3 bucket
- Write an `afterRead` hook to save a `s3URL` field that automatically takes the `filename` stored and formats a full S3
URL
- Write an `afterDelete` hook that automatically deletes files from the S3 bucket
- [Azure Blob Storage](https://github.com/payloadcms/payload/tree/beta/packages/storage-azure)
- [Google Cloud Storage](https://github.com/payloadcms/payload/tree/beta/packages/storage-gcs)
- [AWS S3](https://github.com/payloadcms/payload/tree/beta/packages/storage-s3)
- [Uploadthing](https://github.com/payloadcms/payload/tree/beta/packages/storage-uploadthing)
- [Vercel Blob Storage](https://github.com/payloadcms/payload/tree/beta/packages/storage-vercel-blob)
With the above configuration, deploying to Heroku or similar becomes no problem.
## DigitalOcean Tutorials
DigitalOcean provides extremely helpful documentation that can walk you through the entire process of creating a
production-ready Droplet to host your Payload app:
1. Create a new Ubuntu 20.04 droplet on [DigitalOcean](https://digitalocean.com)
1. [Initial server setup](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-20-04)
1. [Install nginx](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04)
1. [Install and secure MongoDB](https://www.digitalocean.com/community/tutorials/how-to-install-mongodb-on-ubuntu-20-04)
1. [Create a new MongoDB and user](https://medium.com/@mhagemann/how-to-add-a-new-user-to-a-mongodb-database-d896776b5362)
1. [Set up Node for production](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-20-04)
### Swap Space
Swap refers to a section of storage on the hard drive that is reserved to temporarily store data that can no longer fit
within RAM. This allows for the expansion of your server's working memory, with some limitations. Swap space comes into
play when available RAM can no longer accommodate actively used application data, enabling the system to continue
functioning.
Insufficient space can lead to deployment errors and memory-related issues, resulting in application crashes, sluggish
performance, or an unresponsive server.
Common deployment error due to **space limitations** (as reported by users):
- `Error: Command failed with exit code 1`
To configure swap, we recommend following this tutorial
on [How To Add Swap Space](https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-22-04).
Follow the docs to configure any one of these storage providers. For local development, it might be handy to store uploads on your own computer; then, in production, enable the plugin for the cloud storage vendor of your choice.
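As a hedged sketch using the S3 adapter (the `media` collection and the env var names here are illustrative):
```ts
import { buildConfig } from 'payload'
import { s3Storage } from '@payloadcms/storage-s3'

export default buildConfig({
  plugins: [
    s3Storage({
      collections: {
        // enable cloud storage for an upload-enabled 'media' collection
        media: true,
      },
      bucket: process.env.S3_BUCKET,
      config: {
        region: process.env.S3_REGION,
        credentials: {
          accessKeyId: process.env.S3_ACCESS_KEY_ID,
          secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
        },
      },
    }),
  ],
  // ...the rest of your config
})
```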
## Docker
@@ -227,31 +149,75 @@ This is an example of a multi-stage docker build of Payload for production. Ensu
variables on deployment, like `PAYLOAD_SECRET`, `PAYLOAD_CONFIG_PATH`, and `DATABASE_URI` if needed.
```dockerfile
FROM node:18-alpine as base
# From https://github.com/vercel/next.js/blob/canary/examples/with-docker/Dockerfile
FROM base as builder
FROM node:18-alpine AS base
WORKDIR /home/node
COPY package*.json ./
# Install dependencies only when needed
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN yarn install
RUN yarn build
FROM base as runtime
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
ENV NODE_ENV=production
RUN \
if [ -f yarn.lock ]; then yarn run build; \
elif [ -f package-lock.json ]; then npm run build; \
elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm run build; \
else echo "Lockfile not found." && exit 1; \
fi
WORKDIR /home/node
COPY package*.json ./
# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app
RUN yarn install --production
COPY --from=builder /home/node/dist ./dist
COPY --from=builder /home/node/build ./build
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
CMD ["node", "dist/server.js"]
ENV PORT 3000
# server.js is created by next build from the standalone output
# https://nextjs.org/docs/pages/api-reference/next-config-js/output
CMD HOSTNAME="0.0.0.0" node server.js
```
## Docker Compose
@@ -270,15 +236,14 @@ services:
- .:/home/node/app
- node_modules:/home/node/app/node_modules
working_dir: /home/node/app/
command: sh -c "yarn install && yarn dev"
command: sh -c "corepack enable && corepack prepare pnpm@latest --activate && pnpm install && pnpm dev"
depends_on:
- mongo
environment:
DATABASE_URI: mongodb://mongo:27017/payload
PORT: 3000
NODE_ENV: development
PAYLOAD_SECRET: TESTING
# - postgres
env_file:
- .env
# Ensure your DATABASE_URI uses 'mongo' as the hostname, i.e. mongodb://mongo/my-db-name
mongo:
image: mongo:latest
ports:
@@ -290,7 +255,17 @@ services:
logging:
driver: none
# Uncomment the following to use postgres
# postgres:
# restart: always
# image: postgres:latest
# volumes:
# - pgdata:/var/lib/postgresql/data
# ports:
# - "5432:5432"
volumes:
data:
# pgdata:
node_modules:
```

View File

@@ -14,27 +14,6 @@ Payload has built-in security best practices that can be configured to your appl
Set the max number of failed login attempts before a user account is locked out for a period of time. Set `maxLoginAttempts` on collections that feature Authentication to a number low enough to deter attackers but reasonable for legitimate users. Use `lockTime` to set how long (in milliseconds) a user must wait to try again after failing their last allowed attempt.
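For example, a sketch of an auth-enabled collection using both options:
```ts
import type { CollectionConfig } from 'payload'

export const Users: CollectionConfig = {
  slug: 'users',
  auth: {
    // lock the account after five failed attempts...
    maxLoginAttempts: 5,
    // ...for ten minutes (lockTime is in milliseconds)
    lockTime: 10 * 60 * 1000,
  },
  fields: [],
}
```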
## Rate Limiting Requests
To prevent DDoS, brute-force, and similar attacks, you can set IP-based rate limits so that once a certain threshold of requests has been hit by a single IP, further requests from the same IP will be ignored. The Payload config `rateLimit` property accepts an object with the following properties:
| Option | Description |
| ---------------- | ----------------------------------------------------------------------------------------------------------------------- |
| **`window`**     | Time in milliseconds to track requests per IP. Defaults to `90000` (90 seconds).                                         |
| **`max`** | Number of requests served from a single IP before limiting. Defaults to `500`. |
| **`skip`** | Function that can return true (or promise resulting in true) that will bypass limit. |
| **`trustProxy`** | Set to `true` to allow requests to pass through a proxy such as a load balancer or an `nginx` reverse proxy.             |
<Banner type="warning">
<strong>Warning:</strong>
<br />
Very commonly, NodeJS apps are served behind `nginx` reverse proxies and similar. If you use
rate-limiting while you're behind a proxy, <strong>all</strong> IP addresses from everyone that
uses your API will appear as if they are from a local origin (127.0.0.1), and your users will get
rate-limited very quickly without cause. If you plan to host your app behind a proxy, make sure
you set <strong>trustProxy</strong> to <strong>true</strong>.
</Banner>
## Max Depth
Querying a collection and automatically including related documents via `depth` incurs a performance cost. Also, it's possible that your configs may have circular relationships, meaning scenarios where an infinite number of relationships might populate back and forth until your server times out and crashes. You can prevent potential depth-related issues by setting a `maxDepth` property on your Payload config. The maximum allowed depth should be as small as possible without interrupting the dev experience, and it defaults to `10`.
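For example (a sketch; pick the smallest value your front-end actually needs):
```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // cap automatic relationship population at 5 levels deep
  maxDepth: 5,
  // ...the rest of your config
})
```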

View File

@@ -26,7 +26,7 @@ Collections and Globals both support the same options for configuring autosave.
**Example config with versions, drafts, and autosave enabled:**
```ts
import { CollectionConfig } from 'payload/types'
import type { CollectionConfig } from 'payload'
export const Pages: CollectionConfig = {
slug: 'pages',

View File

@@ -87,7 +87,7 @@ You can use the `read` [Access Control](/docs/access-control/collections#read) m
Here is an example that utilizes the `_status` field to require a user to be logged in to retrieve drafts:
```ts
import { CollectionConfig } from 'payload/types'
import type { CollectionConfig } from 'payload'
export const Pages: CollectionConfig = {
slug: 'pages',
@@ -129,7 +129,7 @@ export const Pages: CollectionConfig = {
Here is an example for how to write an access control function that grants access to both documents where `_status` is equal to "published" and where `_status` does not exist:
```ts
import { CollectionConfig } from 'payload/types'
import type { CollectionConfig } from 'payload'
export const Pages: CollectionConfig = {
slug: 'pages',