This PR makes it so that `modifyResponseHeaders` is supported in our
adapters when set on the collection config. Previously it would be
ignored.
This means that users can now modify or append new headers to what's
returned by each service.
```ts
import type { CollectionConfig } from 'payload'
export const Media: CollectionConfig = {
slug: 'media',
upload: {
modifyResponseHeaders: ({ headers }) => {
const newHeaders = new Headers(headers) // Copy existing headers
newHeaders.set('X-Frame-Options', 'DENY') // Set new header
return newHeaders
},
},
}
```
Also adds support for `void` return on the `modifyResponseHeaders`
function in the case where the user just wants to use existing headers
and doesn't need more control.
e.g.:
```ts
import type { CollectionConfig } from 'payload'
export const Media: CollectionConfig = {
slug: 'media',
upload: {
modifyResponseHeaders: ({ headers }) => {
headers.set('X-Frame-Options', 'DENY') // You can directly set headers without returning
},
},
}
```
Manual testing checklist (no CI e2e tests set up for these envs yet):
- [x] GCS
- [x] S3
- [x] Azure
- [x] UploadThing
- [x] Vercel Blob
---------
Co-authored-by: James <james@trbl.design>
Currently, we globally enable both DOM and Node.js types. While this
mostly works, it can cause conflicts - particularly with `fetch`. For
example, TypeScript may incorrectly allow browser-only properties (like
`cache`) and reject valid Node.js ones like `dispatcher`.
This PR disables DOM types for server-only packages like payload,
ensuring Node-specific typings are applied. This caught a few instances
of incorrect fetch usage that were previously masked by overlapping DOM
types.
This is not a perfect solution - packages that contain both server and
client code (like richtext-lexical or next) will still suffer from this
issue. However, it's an improvement in cases where we can cleanly
separate server and client types, like for the `payload` package which
is server-only.
## Use-case
This change enables https://github.com/payloadcms/payload/pull/12622 to
explore using node-native fetch + `dispatcher`, instead of `node-fetch`
+ `agent`.
Currently, it will incorrectly report that `dispatcher` is not a valid
property for node-native fetch.
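For illustration, a minimal sketch of the kind of call this unblocks (assuming `undici` is installed for the `Agent` type; the URL and options are placeholders):
```ts
import { Agent } from 'undici'

// Node's built-in fetch is undici under the hood and accepts the Node-only
// `dispatcher` option. With DOM lib types enabled, this option fails to
// typecheck because the browser RequestInit doesn't know about it.
const res = await fetch('https://example.com', {
  dispatcher: new Agent({ keepAliveTimeout: 10_000 }),
})
```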
Previously, if you enabled presigned URL downloads for a collection, all
files would use them. However, you may want to use presigned URLs only for
specific files (like videos). This PR lets you pass `shouldUseSignedURL`
to control that behavior dynamically.
```ts
s3Storage({
collections: {
media: {
signedDownloads: {
shouldUseSignedURL: ({ collection, filename, req }) => {
return req.headers.get('X-Disable-Signed-URL') !== 'false'
},
},
},
},
})
```
This PR introduces a few changes to improve turbopack compatibility and
ensure e2e tests pass with turbopack enabled.
## Changes to improve turbopack compatibility
- Use correct sideEffects configuration to fix scss issues
- Import scss directly instead of duplicating our scss rules
- Fix some scss rules that are not supported by turbopack
- Bump Next.js and all other dependencies used to build payload
## Changes to get tests to pass
For an unknown reason, flaky tests flake a lot more often in turbopack.
This PR does the following to get them to pass:
- add more `wait`s
- fix actual flakes by ensuring previous operations are properly awaited
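For illustration, a hedged sketch of the second kind of fix (not an actual test from the repo; the page URL and selectors are made up): await the previous operation, here the save request, instead of racing it.
```ts
import { expect, test } from '@playwright/test'

test('waits for the save to finish before asserting', async ({ page }) => {
  await page.goto('/admin/collections/posts/create')
  await page.locator('#field-title').fill('Hello')

  // Wait for the save request to settle rather than clicking and asserting
  // immediately, which is the kind of race that flaked under turbopack.
  await Promise.all([
    page.waitForResponse((res) => res.url().includes('/api/posts') && res.ok()),
    page.locator('#action-save').click(),
  ])

  await expect(page.locator('.payload-toast-container')).toContainText('successfully')
})
```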
## Blocking turbopack bugs
- [X] https://github.com/vercel/next.js/issues/76464
- Fix PR: https://github.com/vercel/next.js/pull/76545
- Once fixed: change `"sideEffectsDisabled":` back to `"sideEffects":`
## Non-blocking turbopack bugs
- [ ] https://github.com/vercel/next.js/issues/76956
## Related PRs
https://github.com/payloadcms/payload/pull/12653
https://github.com/payloadcms/payload/pull/12652
I think it's easier to review this PR commit by commit, so I'll explain
it this way:
## Commits
1. [parallelize eslint script (still showing log results in
serial)](c9ac49c12d):
Previously, `--concurrency 1` was added to the script to make the logs
more readable. However, turborepo has an option specifically for these
use cases: `--log-order=grouped` runs the tasks in parallel but outputs
them serially. As a result, the lint script is now significantly faster.
2. [run pnpm
lint:fix](9c128c276a)
The auto-fix was run, which resolved some eslint errors that had slipped in
due to the use of `--no-verify`. Most of these were `perfectionist` fixes
(property ordering) and the removal of unnecessary assertions. Starting with
this PR, this won't happen again, as we'll be running the linter on every PR
across the entire codebase (see commit 7).
3. [fix eslint non-autofixable
errors](700f412a33)
All errors requiring manual fixes have been resolved except for the
configuration errors addressed in commit 5. Most were React compiler
violations, which have been disabled and marked with a "TODO" comment for
now. There's also an unused `use no memo` directive and a couple of
`require` errors.
4. [move react-compiler linter to eslint-config
package](4f7cb4d63a)
This simplifies the eslint configuration. My concern was that there would be
a performance regression when used in non-React packages, but none was
observed. This is probably because it only runs on .tsx files.
5. [remove redundant eslint config files and fix
allowDefaultProject](a94347995a)
The main feature introduced by `typescript-eslint` v8 was
`projectService`, which automatically searches each file for the closest
`tsconfig`, greatly simplifying configuration in monorepos
([source](https://typescript-eslint.io/blog/announcing-typescript-eslint-v8#project-service)).
Once I moved `projectService` to `packages/eslint-config`, all the other
configuration files could be easily removed.
I confirmed that `pnpm lint` still works on individual packages.
The other important change was that the pending eslint errors from
commits 2 and 3 were resolved. That is, some files were giving the
error: "[File] was not found by the project service. Consider either
including it in the tsconfig.json or including it in
allowDefaultProject." Below I copy the explanatory comment I left in the
code:
```ts
// This is necessary because `tsconfig.base.json` defines `"rootDir": "${configDir}/src"`,
// And the following files aren't in src because they aren't transpiled.
// This is typescript-eslint's way of adding files that aren't included in tsconfig.
// See: https://typescript-eslint.io/troubleshooting/typed-linting/#i-get-errors-telling-me--was-not-found-by-the-project-service-consider-either-including-it-in-the-tsconfigjson-or-including-it-in-allowdefaultproject
// The best practice is to have a tsconfig.json that covers ALL files and is used for
// typechecking (with noEmit), and a `tsconfig.build.json` that is used for the build
// (or alternatively, swc, tsup or tsdown). That's what we should ideally do, in which case
// this hardcoded list wouldn't be necessary. Note that these files don't currently go
// through ts, only through eslint.
```
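For context, a minimal sketch of the kind of flat-config entry this refers to (the globs and paths are illustrative, not the repo's actual configuration):
```ts
import tseslint from 'typescript-eslint'

export default tseslint.config({
  languageOptions: {
    parserOptions: {
      projectService: {
        // files outside the tsconfig's rootDir that should still be type-linted
        allowDefaultProject: ['*.config.js', 'bundle.js'],
      },
      tsconfigRootDir: import.meta.dirname,
    },
  },
})
```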
6. [Differentiate errors from warnings in VScode ESLint
Rules](5914d2f48d)
There's no reason to treat them the same: if an eslint rule isn't an error,
it should be disabled or converted to a warning.
7. [Disable skip lint, and lint over the entire repo now that it's
faster](e4b28f1360)
The GitHub action linted only the files that had changed in the PR.
While this seems like a good idea, once exceptions were introduced with
[skip lint], they opened the door to propagating more and more errors.
Often, the linter was skipped, not because someone introduced new
errors, but because they were trying to avoid those that had already
crept in, sometimes accidentally introducing new ones.
On the other hand, `pnpm lint` now runs in parallel (commit 1), so it's not
that slow. Additionally, it runs in parallel with other GitHub actions like
e2e tests, which take much longer, so it shouldn't become a bottleneck in
CI.
8. [fix lint in next
package](4506595f91)
Small fix missing from commit 5
9. [Merge remote-tracking branch 'origin/main' into
fix-eslint](563d4909c1)
10. [add again eslint.config.js in payload
package](78f6ffcae7)
The comment in the code explains it. Basically, after the merge from main,
the payload package runs out of memory when linting, probably because it has
grown in recent PRs. Sooner or later that package will become too big for
our tooling, so we may have to split it. It's already too large.
## Future Actions
- Resolve React compiler violations, as mentioned in commit 3.
- Decouple the `tsconfig` used for typechecking and build across the
entire monorepo (as explained in point 5) to ensure ts coverage even for
files that aren't transpiled (such as scripts).
- Remove the few remaining `eslint.config.js`. I had to leave the
`richtext-lexical` and `next` ones for now. They could be moved to the
root config and scoped to their packages, as we do for example with
`templates/vercel-postgres/**`. However, I couldn't get it to work, I
don't know why.
- Make eslint in the test folder usable. Not only are we not linting
`test` in CI, but the `pnpm eslint .` command now covers so many files that
my computer freezes. If each suite were its own package, this would be
solved, and dynamic codegen + git hooks to modify tsconfig.base.json
wouldn't be necessary
([related](https://github.com/payloadcms/payload/pull/11984)).
Adds pre-signed URL support for file downloads with the S3 adapter. Can be
enabled per-collection:
```ts
s3Storage({
collections: {
media: { signedDownloads: true }, // or { signedDownloads: { expiresIn: 3600 }} for custom expiresIn (default 7200)
},
bucket: process.env.S3_BUCKET,
config: {
credentials: {
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
},
endpoint: process.env.S3_ENDPOINT,
forcePathStyle: process.env.S3_FORCE_PATH_STYLE === 'true',
region: process.env.S3_REGION,
},
}),
```
The main use case is when you care about Payload access control (so you
don't want to use `disablePayloadAccessControl: true`) but you don't want
your files to be served through Payload (which can affect performance with
large videos, for example).
This feature instead generates a signed URL (after verifying the access
control) and redirects you directly to the S3 provider.
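For illustration, a rough sketch of what that looks like with the AWS SDK (simplified; the key name and client config are placeholders, not the adapter's exact code):
```ts
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({ region: process.env.S3_REGION })

// Runs only after Payload's access control has allowed the request
const url = await getSignedUrl(
  client,
  new GetObjectCommand({ Bucket: process.env.S3_BUCKET, Key: 'media/video.mp4' }),
  { expiresIn: 7200 }, // seconds; matches the default mentioned above
)

// ...the static handler then responds with a redirect to `url`
```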
This is an addition to https://github.com/payloadcms/payload/pull/11382
which added pre-signed URLs for file uploads.
Ensures all S3 sockets are cleaned up by passing through default request
handler options, which `@smithy/node-http-handler` now handles properly.
Fixes #6382
```ts
const defaultRequestHandlerOpts: NodeHttpHandlerOptions = {
httpAgent: {
keepAlive: true,
maxSockets: 100,
},
httpsAgent: {
keepAlive: true,
maxSockets: 100,
},
}
```
If you continue to have socket issues, you can customize any of the options
by setting the `requestHandler` property on your S3 config. This will take
precedence if set.
```ts
requestHandler: {
httpAgent: {
maxSockets: 300,
keepAlive: true,
},
httpsAgent: {
maxSockets: 300,
keepAlive: true,
},
// Optional, only set these if you continue to see issues. Be wary of timeouts if you're dealing with large files.
// time limit (ms) for receiving response.
requestTimeout: 5_000,
// time limit (ms) for establishing connection.
connectionTimeout: 5_000,
},
```
Fixes https://github.com/payloadcms/payload/issues/11473
Previously, when `disablePayloadAccessControl: true` was set, client uploads
did not work properly. The reason is that `addDataAndFileToRequest` expects
`staticHandler` to be defined, but we don't add one when
`disablePayloadAccessControl: true` is used.
This PR changes that: if `clientUploads` is configured, it pushes a
"proxied" handler that responds only when the file is requested in the
context of a client upload (from `addDataAndFileToRequest`).
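A rough sketch of the idea (simplified and not the adapters' exact source; names and shapes are illustrative):
```ts
import type { PayloadRequest } from 'payload'

type Handler = (
  req: PayloadRequest,
  args: { params: { clientUploadContext?: unknown } },
) => Promise<Response>

// Wrap the real static handler so it only responds to requests made in the
// context of a client upload; everything else is left to direct file serving,
// since Payload access control is disabled.
const onlyForClientUploads =
  (handler: Handler): Handler =>
  async (req, args) => {
    if (!args.params.clientUploadContext) {
      return new Response(null, { status: 404 })
    }
    return handler(req, args)
  }
```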
### What?
Fixes client uploads when storage collection config has the `prefix`
property configured. Previously, it failed with "Object key was not
found".
### Why?
This is expected to work.
### How?
The client upload handler now receives `prefix` in its props. It threads it
through `clientUploadContext` to the server-side `staticHandler` and then to
`getFilePrefix`, which checks for `clientUploadContext.prefix` and returns
it if present.
Previously, `staticHandler` tried to load the file without taking the prefix
into account.
This changes only these adapters:
* S3
* Azure
* GCS
With the Vercel Blob adapter, `prefix` works correctly.
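For illustration, a rough sketch of the prefix resolution described above (simplified; not the real `getFilePrefix` signature):
```ts
type ClientUploadContext = { prefix?: string } | undefined

// The prefix carried through clientUploadContext wins; otherwise fall back to
// the pre-existing lookup (passed in here purely for illustration).
const resolvePrefix = async (
  clientUploadContext: ClientUploadContext,
  lookupPrefixFromDoc: () => Promise<string>,
): Promise<string> => {
  if (clientUploadContext?.prefix) {
    return clientUploadContext.prefix
  }
  return lookupPrefixFromDoc()
}
```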
Ensures that even if you pass `enabled: false` to the storage adapter
options, e.g.:
```ts
s3Storage({
enabled: false,
collections: {
[mediaSlug]: true,
},
bucket: process.env.S3_BUCKET,
config: {
credentials: {
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
},
},
})
```
the client handler component is still added to the import map. This prevents
errors when you use the adapter only in production but don't regenerate the
import map before running the build.
### What?
Within collections using the `storage-s3` plugin, we eventually start
receiving the following warning:
`@smithy/node-http-handler:WARN socket usage at capacity=50 and 156
additional requests are enqueued. See
https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html
or increase socketAcquisitionWarningTimeout=(millis) in the
NodeHttpHandler config.`
Also referenced in this issue: #6382
The
[solution](https://github.com/payloadcms/payload/issues/6382#issuecomment-2325468104)
provided by @denolfe in that issue only delayed the reappearance of the
problem somewhat, but did not resolve it.
### Why?
As far as I understand, in the plugin's `staticHandler`, when items fetched
from storage are already cached, the cached results are returned immediately
without handling the stream. As per
[this](https://github.com/aws/aws-sdk-js-v3/blob/main/supplemental-docs/CLIENTS.md#nodejs-requesthandler)
entry in the aws-sdk docs, if the streaming response is not read or manually
destroyed, a socket might not close properly.
### How?
Before returning the cached items, manually destroy the streaming
response to make certain the socket is being properly closed.
Additionally, add an error check to also consume/destroy the streaming
response in case an error occurs, to not leave orphaned sockets.
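A rough sketch of the destroy step (simplified; not the adapter's exact code):
```ts
import type { Readable } from 'node:stream'

// Even when a cached result can be returned, the S3 response body stream still
// has to be consumed or destroyed, otherwise its socket is never released.
const releaseBody = (body: unknown): void => {
  const stream = body as Readable | undefined
  if (stream && typeof stream.destroy === 'function') {
    stream.destroy()
  }
}
```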
Fixes #6382