This PR adds support for `modifyResponseHeaders` in our storage adapters when it is set on the collection config. Previously, it was ignored.
This means that users can now modify or append new headers to what's returned by each service.
```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    modifyResponseHeaders: ({ headers }) => {
      const newHeaders = new Headers(headers) // Copy existing headers
      newHeaders.set('X-Frame-Options', 'DENY') // Set new header
      return newHeaders
    },
  },
}
```
Also adds support for returning `void` from the `modifyResponseHeaders` function, for cases where the user just wants to mutate the existing headers and doesn't need more control.
E.g.:
```ts
import type { CollectionConfig } from 'payload'

export const Media: CollectionConfig = {
  slug: 'media',
  upload: {
    modifyResponseHeaders: ({ headers }) => {
      headers.set('X-Frame-Options', 'DENY') // You can directly set headers without returning
    },
  },
}
```
Manual testing checklist (no CI e2e setup for these envs yet):
- [x] GCS
- [x] S3
- [x] Azure
- [x] UploadThing
- [x] Vercel Blob
---------
Co-authored-by: James <james@trbl.design>
Currently, we globally enable both DOM and Node.js types. While this
mostly works, it can cause conflicts - particularly with `fetch`. For
example, TypeScript may incorrectly allow browser-only properties (like
`cache`) and reject valid Node.js ones like `dispatcher`.
This PR disables DOM types for server-only packages like payload,
ensuring Node-specific typings are applied. This caught a few instances
of incorrect fetch usage that were previously masked by overlapping DOM
types.
This is not a perfect solution - packages that contain both server and
client code (like richtext-lexical or next) will still suffer from this
issue. However, it's an improvement in cases where we can cleanly
separate server and client types, like for the `payload` package which
is server-only.
## Use-case
This change enables https://github.com/payloadcms/payload/pull/12622 to
explore using node-native fetch + `dispatcher`, instead of `node-fetch`
+ `agent`.
Currently, TypeScript incorrectly reports that `dispatcher` is not a valid property for node-native fetch.
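A minimal sketch of the typing conflict (illustrative; assumes `undici` is available for the `Agent` type, which is not part of this PR):
```ts
import { Agent } from 'undici'

const example = async () => {
  // With DOM lib types enabled, the browser-oriented `cache` option type-checks:
  await fetch('https://example.com', { cache: 'no-store' })

  // ...while the Node-only `dispatcher` option is rejected, even though
  // node-native fetch accepts it. With DOM types disabled, @types/node's
  // `RequestInit` (which includes `dispatcher`) applies and this compiles.
  await fetch('https://example.com', { dispatcher: new Agent() })
}
```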
### What?
In a similar vein to #11734, #11733, #10327 - this PR returns a 404 in
the response when a file is not found while using the `storage-gcs`
adapter. Currently a 500 is returned.
### Why?
To return the correct status code in the response when a file is not found while using `storage-gcs`.
### How?
The GCS Node.js library exposes `ApiError` as a general error class. These changes check that the caught error is an instance of that class and that its code is `404`.
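For illustration, a hedged sketch of that check (the real `staticHandler` streams the file from the bucket; only the error branch is shown, and the import assumes `ApiError` is re-exported by the GCS client library):
```ts
import { ApiError } from '@google-cloud/storage'

// Map a caught GCS error to an HTTP response: 404 when the object is missing,
// 500 otherwise.
const toErrorResponse = (err: unknown): Response => {
  if (err instanceof ApiError && err.code === 404) {
    return new Response('Not Found', { status: 404 })
  }
  return new Response('Internal Server Error', { status: 500 })
}
```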
This PR introduces a few changes to improve turbopack compatibility and ensure e2e tests pass with turbopack enabled.
## Changes to improve turbopack compatibility
- Use correct sideEffects configuration to fix scss issues
- Import scss directly instead of duplicating our scss rules
- Fix some scss rules that are not supported by turbopack
- Bump Next.js and all other dependencies used to build payload
## Changes to get tests to pass
For an unknown reason, flaky tests flake far more often under turbopack.
This PR does the following to get them to pass:
- add more `wait`s
- fix actual flakes by ensuring previous operations are properly awaited (see the sketch after this list)
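As an illustration of the second point, this is the general pattern (a hedged sketch; the URL and selectors are made up, not taken from the actual test suites):
```ts
import { expect, test } from '@playwright/test'

test('saving shows a success toast', async ({ page }) => {
  await page.goto('/admin/collections/posts/create')

  // Wait for the save request to complete instead of racing it, so the
  // assertion below doesn't run against a half-finished operation.
  await Promise.all([
    page.waitForResponse((res) => res.url().includes('/api/posts') && res.ok()),
    page.click('#action-save'),
  ])

  await expect(page.locator('.payload-toast-item')).toBeVisible()
})
```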
## Blocking turbopack bugs
- [X] https://github.com/vercel/next.js/issues/76464
- Fix PR: https://github.com/vercel/next.js/pull/76545
- Once fixed: change `"sideEffectsDisabled":` back to `"sideEffects":`
## Non-blocking turbopack bugs
- [ ] https://github.com/vercel/next.js/issues/76956
## Related PRs
- https://github.com/payloadcms/payload/pull/12653
- https://github.com/payloadcms/payload/pull/12652
I think it's easier to review this PR commit by commit, so I'll explain
it this way:
## Commits
1. [parallelize eslint script (still showing logs results in
serial)](c9ac49c12d):
Previously, `--concurrency 1` was added to the script to make the logs
more readable. However, turborepo has an option specifically for these
use cases: `--log-order=grouped` runs the tasks in parallel but outputs
them serially. As a result, the lint script is now significantly faster.
2. [run pnpm lint:fix](9c128c276a)
The auto-fix was run, which resolved some eslint errors that had slipped in due to the use of `--no-verify`. Most of these were `perfectionist` fixes (property ordering) and the removal of unnecessary assertions. Starting with this PR, this won't happen again, as the linter will run on every PR across the entire codebase (see commit 7).
3. [fix eslint non-autofixable errors](700f412a33)
All manual errors have been resolved except for the configuration errors addressed in commit 5. Most were React compiler violations, which have been disabled with a "TODO" comment for now. There's also an unused `use no memo` directive and a couple of `require` errors.
4. [move react-compiler linter to eslint-config package](4f7cb4d63a)
This simplifies the eslint configuration. My concern was that there would be a performance regression when used in non-React packages, but none was observed. This is probably because the rule only runs on .tsx files.
5. [remove redundant eslint config files and fix
allowDefaultProject](a94347995a)
The main feature introduced by `typescript-eslint` v8 was
`projectService`, which automatically searches each file for the closest
`tsconfig`, greatly simplifying configuration in monorepos
([source](https://typescript-eslint.io/blog/announcing-typescript-eslint-v8#project-service)).
Once I moved `projectService` to `packages/eslint-config`, all the other configuration files could easily be removed (a minimal sketch of this setup appears after this commit list).
I confirmed that `pnpm lint` still works on individual packages.
The other important change was that the pending eslint errors from
commits 2 and 3 were resolved. That is, some files were giving the
error: "[File] was not found by the project service. Consider either
including it in the tsconfig.json or including it in
allowDefaultProject." Below I copy the explanatory comment I left in the
code:
```ts
// This is necessary because `tsconfig.base.json` defines `"rootDir": "${configDir}/src"`,
// And the following files aren't in src because they aren't transpiled.
// This is typescript-eslint's way of adding files that aren't included in tsconfig.
// See: https://typescript-eslint.io/troubleshooting/typed-linting/#i-get-errors-telling-me--was-not-found-by-the-project-service-consider-either-including-it-in-the-tsconfigjson-or-including-it-in-allowdefaultproject
// The best practice is to have a tsconfig.json that covers ALL files and is used for
// typechecking (with noEmit), and a `tsconfig.build.json` that is used for the build
// (or alternatively, swc, tsup or tsdown). That's what we should ideally do, in which case
// this hardcoded list wouldn't be necessary. Note that these files don't currently go
// through ts, only through eslint.
```
6. [Differentiate errors from warnings in VScode ESLint
Rules](5914d2f48d)
There's no reason to do that. If an eslint rule isn't an error, it
should be disabled or converted to a warning.
7. [Disable skip lint, and lint over the entire repo now that it's
faster](e4b28f1360)
The GitHub action linted only the files that had changed in the PR.
While this seems like a good idea, once exceptions were introduced with
[skip lint], they opened the door to propagating more and more errors.
Often, the linter was skipped, not because someone introduced new
errors, but because they were trying to avoid those that had already
crept in, sometimes accidentally introducing new ones.
On the other hand, `pnpm lint` now runs in parallel (commit 1), so it's not that slow. It also runs in parallel with other GitHub actions, like the e2e tests, which take much longer, so it won't be a bottleneck in CI.
8. [fix lint in next package](4506595f91)
A small fix missing from commit 5.
9. [Merge remote-tracking branch 'origin/main' into
fix-eslint](563d4909c1)
10. [add again eslint.config.js in payload package](78f6ffcae7)
The comment in the code explains it. Basically, after the merge from main, the payload package runs out of memory when linting, probably because it grew in recent PRs. Sooner or later that package will become too large for our tooling, so we may have to split it; it's already too big.
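For reference (commit 5), here is a minimal flat-config sketch of how `projectService` and `allowDefaultProject` fit together; the globs are illustrative, not the repo's actual list:
```ts
// eslint.config.ts (illustrative)
import tseslint from 'typescript-eslint'

export default tseslint.config({
  languageOptions: {
    parserOptions: {
      // Finds the closest tsconfig for each linted file automatically.
      projectService: {
        // Files that live outside the tsconfig's rootDir (and therefore aren't
        // transpiled) but should still be linted with type information:
        allowDefaultProject: ['*.config.js', 'bundle.js'],
      },
      tsconfigRootDir: import.meta.dirname,
    },
  },
})
```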
## Future Actions
- Resolve React compiler violations, as mentioned in commit 3.
- Decouple the `tsconfig` used for typechecking and build across the
entire monorepo (as explained in point 5) to ensure ts coverage even for
files that aren't transpiled (such as scripts).
- Remove the few remaining `eslint.config.js` files. I had to leave the `richtext-lexical` and `next` ones for now. They could be moved to the root config and scoped to their packages, as we do for example with `templates/vercel-postgres/**`, but I couldn't get that to work and I'm not sure why.
- Make eslint in the test folder usable. Not only are we not linting
`test` in CI, but now the `pnpm eslint .` command is so large that my
computer freezes. If each suite were its own package, this would be
solved, and dynamic codegen + git hooks to modify tsconfig.base.json
wouldn't be necessary
([related](https://github.com/payloadcms/payload/pull/11984)).
Fixes https://github.com/payloadcms/payload/issues/11473
Previously, when `disablePayloadAccessControl: true` was set, client uploads did not work properly. The reason is that `addDataAndFileToRequest` expects a `staticHandler` to be defined, but we skip adding one when `disablePayloadAccessControl: true`.
This PR changes that: when `clientUploads` is enabled, it pushes a "proxied" handler that responds only when the file is requested in the context of a client upload (from `addDataAndFileToRequest`).
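For context, a hedged sketch of the kind of configuration this fixes (the collection slug and URL are illustrative, and option names follow the adapter docs):
```ts
import { s3Storage } from '@payloadcms/storage-s3'

export const storage = s3Storage({
  clientUploads: true,
  collections: {
    media: {
      // Payload no longer proxies file requests; they are served directly
      disablePayloadAccessControl: true,
      generateFileURL: ({ filename }) => `https://cdn.example.com/${filename}`,
    },
  },
  bucket: process.env.S3_BUCKET,
  config: {
    credentials: {
      accessKeyId: process.env.S3_ACCESS_KEY_ID,
      secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
    },
  },
})
```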
Client uploads were always enabled because the wrong variable was used when passing `enabled` to `initClientUploads`: `gcsStorageOptions.enabled` instead of `gcsStorageOptions.clientUploads`.
To enable client uploads with GCS you additionally need to configure CORS on Google Cloud, so this change breaks the existing (unintended) behavior.
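With the fix, client uploads must be opted into explicitly (a hedged sketch; the bucket, project, and collection names are illustrative):
```ts
import { gcsStorage } from '@payloadcms/storage-gcs'

export const storage = gcsStorage({
  bucket: process.env.GCS_BUCKET,
  clientUploads: true, // now respected; previously `enabled` was read by mistake
  collections: {
    media: true,
  },
  options: {
    projectId: process.env.GCS_PROJECT_ID,
  },
})
```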
### What?
Fixes client uploads when the storage collection config has the `prefix` property configured. Previously, they failed with "Object key was not found".
### Why?
This is expected to work.
### How?
The client upload handler now receives `prefix` in its props. It threads it through to the server-side `staticHandler` via `clientUploadContext`, and then to `getFilePrefix`, which checks for `clientUploadContext.prefix` and returns it when present.
Previously, `staticHandler` tried to load the file without taking the prefix into account.
This changes only these adapters:
* S3
* Azure
* GCS
With the Vercel Blob adapter, `prefix` works correctly.
Ensures that even if you pass `enabled: false` to the storage adapter options, e.g.:
```ts
s3Storage({
  enabled: false,
  collections: {
    [mediaSlug]: true,
  },
  bucket: process.env.S3_BUCKET,
  config: {
    credentials: {
      accessKeyId: process.env.S3_ACCESS_KEY_ID,
      secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
    },
  },
})
```
the client handler component is still added to the import map. This prevents errors when you use the adapter only in production but don't regenerate the import map before running the build.