I think it's easier to review this PR commit by commit, so I'll explain
it this way:
## Commits
1. [parallelize eslint script (still showing logs results in
serial)](c9ac49c12d):
Previously, `--concurrency 1` was added to the script to make the logs
more readable. However, turborepo has an option specifically for these
use cases: `--log-order=grouped` runs the tasks in parallel but outputs
them serially. As a result, the lint script is now significantly faster.
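For reference, the script change amounts to something like this (illustrative; the actual task name in the repo may differ):
```diff
- "lint": "turbo run lint --concurrency 1"
+ "lint": "turbo run lint --log-order=grouped"
```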
2. [run pnpm
lint:fix](9c128c276a):
The auto-fix was run, which resolved some eslint errors that had
slipped in due to the use of `--no-verify`. Most of these were
`perfectionist` fixes (property ordering) and the removal of unnecessary
assertions. Starting with this PR, this won't happen again, as we'll
run the linter on every PR across the entire codebase (see commit 7).
3. [fix eslint non-autofixable
errors](700f412a33):
All non-autofixable errors have been resolved except for the
configuration errors addressed in commit 5. Most were React compiler
violations, which have been disabled with a `TODO` comment for now.
There's also an unused `use no memo` directive and a couple of
`require` errors.
4. [move react-compiler linter to eslint-config
package](4f7cb4d63a):
This simplifies the eslint configuration. My concern was that there
would be a performance regression when used in non-React packages, but
none was observed. This is probably because the rule only runs on .tsx
files.
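For context, scoping that rule in a flat config looks roughly like this (a sketch assuming `eslint-plugin-react-compiler`; the actual shared config may differ):
```ts
import reactCompiler from 'eslint-plugin-react-compiler'

// Sketch: the react-compiler rule applies only to .tsx files, which is
// presumably why non-React packages saw no performance regression.
export default [
  {
    files: ['**/*.tsx'],
    plugins: { 'react-compiler': reactCompiler },
    rules: { 'react-compiler/react-compiler': 'error' },
  },
]
```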
5. [remove redundant eslint config files and fix
allowDefaultProject](a94347995a):
The main feature introduced by `typescript-eslint` v8 was
`projectService`, which automatically searches each file for the closest
`tsconfig`, greatly simplifying configuration in monorepos
([source](https://typescript-eslint.io/blog/announcing-typescript-eslint-v8#project-service)).
Once I moved `projectService` to `packages/eslint-config`, all the other
configuration files could be easily removed.
I confirmed that `pnpm lint` still works on individual packages.
The other important change was that the pending eslint errors from
commits 2 and 3 were resolved. That is, some files were giving the
error: "[File] was not found by the project service. Consider either
including it in the tsconfig.json or including it in
allowDefaultProject." Below I copy the explanatory comment I left in the
code:
```ts
// This is necessary because `tsconfig.base.json` defines `"rootDir": "${configDir}/src"`,
// and the following files aren't in `src` because they aren't transpiled.
// This is typescript-eslint's way of adding files that aren't included in tsconfig.
// See: https://typescript-eslint.io/troubleshooting/typed-linting/#i-get-errors-telling-me--was-not-found-by-the-project-service-consider-either-including-it-in-the-tsconfigjson-or-including-it-in-allowdefaultproject
// The best practice is to have a tsconfig.json that covers ALL files and is used for
// typechecking (with noEmit), and a `tsconfig.build.json` that is used for the build
// (or alternatively, swc, tsup or tsdown). That's what we should ideally do, in which case
// this hardcoded list wouldn't be necessary. Note that these files don't currently go
// through ts, only through eslint.
```
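For reference, the moved configuration has roughly this shape (a sketch; the `allowDefaultProject` globs here are illustrative, not the repo's actual list):
```ts
import tseslint from 'typescript-eslint'

// Sketch of a shared typescript-eslint v8 config using projectService.
export default tseslint.config({
  languageOptions: {
    parserOptions: {
      projectService: {
        // Files not covered by any tsconfig (e.g. untranspiled config
        // files) are assigned the default project for typed linting.
        allowDefaultProject: ['*.config.ts', '*.config.js'],
      },
      tsconfigRootDir: import.meta.dirname,
    },
  },
})
```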
6. [Differentiate errors from warnings in VScode ESLint
Rules](5914d2f48d):
There's no reason to display them identically: if an eslint rule isn't
an error, it should be disabled or converted to a warning.
7. [Disable skip lint, and lint over the entire repo now that it's
faster](e4b28f1360):
Previously, the GitHub action linted only the files that had changed in
the PR. While this seems like a good idea, the exceptions introduced
with [skip lint] opened the door to propagating more and more errors.
Often, the linter was skipped, not because someone introduced new
errors, but because they were trying to avoid those that had already
crept in, sometimes accidentally introducing new ones.
On the other hand, `pnpm lint` now runs in parallel (commit 1), so it's
not that slow. Additionally, it runs in parallel with other GitHub
actions like the e2e tests, which take much longer, so it isn't a
bottleneck in CI.
8. [fix lint in next
package](4506595f91):
A small fix missing from commit 5.
9. [Merge remote-tracking branch 'origin/main' into
fix-eslint](563d4909c1)
10. [add again eslint.config.js in payload
package](78f6ffcae7):
The comment in the code explains it: after the merge from main, linting
the payload package runs out of memory, probably because the package
grew in recent PRs. Sooner or later that package will overwhelm our
tooling, so we may have to split it; it's already too big.
## Future Actions
- Resolve React compiler violations, as mentioned in commit 3.
- Decouple the `tsconfig` used for typechecking from the one used for
the build across the entire monorepo (as explained in commit 5) to
ensure TypeScript coverage even for files that aren't transpiled (such
as scripts).
- Remove the few remaining `eslint.config.js` files. I had to leave the
`richtext-lexical` and `next` ones for now. They could be moved to the
root config and scoped to their packages, as we do for example with
`templates/vercel-postgres/**`. However, I couldn't get that to work,
and I don't know why.
- Make eslint in the test folder usable. Not only are we not linting
`test` in CI, but the `pnpm eslint .` command now covers so many files
that my computer freezes. If each suite were its own package, this
would be solved, and dynamic codegen + git hooks to modify
tsconfig.base.json wouldn't be necessary
([related](https://github.com/payloadcms/payload/pull/11984)).
Adds pre-signed URL support for file downloads with the S3 adapter. It
can be enabled per collection:
```ts
s3Storage({
collections: {
    media: { signedDownloads: true }, // or { signedDownloads: { expiresIn: 3600 } } to override the default expiresIn of 7200 seconds
},
bucket: process.env.S3_BUCKET,
config: {
credentials: {
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
},
endpoint: process.env.S3_ENDPOINT,
forcePathStyle: process.env.S3_FORCE_PATH_STYLE === 'true',
region: process.env.S3_REGION,
},
}),
```
The main use case is when you care about Payload access control (so
you don't want to use `disablePayloadAccessControl: true`), but you
don't want your files to be served through Payload (which can affect
performance with large videos, for example).
With this feature, Payload instead generates a signed URL (after
verifying access control) and redirects the request directly to the S3
provider.
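Under the hood this presumably builds on the AWS SDK presigner, along these lines (a sketch; `redirectToSignedDownload` and its shape are illustrative, not the adapter's actual source):
```ts
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const s3 = new S3Client({ region: process.env.S3_REGION })

// Sketch: after Payload's access control has passed, sign a GET for the
// object and redirect the client straight to the S3 provider.
export const redirectToSignedDownload = async (bucket: string, key: string) => {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key })
  const url = await getSignedUrl(s3, command, { expiresIn: 7200 }) // seconds
  return Response.redirect(url, 302)
}
```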
This is an addition to https://github.com/payloadcms/payload/pull/11382
which added pre-signed URLs for file uploads.
Ensures all S3 sockets are cleaned up by passing through default
request handler options, which `@smithy/node-http-handler` now handles
properly.
Fixes #6382
```ts
import type { NodeHttpHandlerOptions } from '@smithy/node-http-handler'

const defaultRequestHandlerOpts: NodeHttpHandlerOptions = {
httpAgent: {
keepAlive: true,
maxSockets: 100,
},
httpsAgent: {
keepAlive: true,
maxSockets: 100,
},
}
```
If you continue to have socket issues, you can customize any of these
options by setting the `requestHandler` property on your S3 config. It
will take precedence if set.
```ts
requestHandler: {
  httpAgent: {
    maxSockets: 300,
    keepAlive: true,
  },
  httpsAgent: {
    maxSockets: 300,
    keepAlive: true,
  },
  // Optional: only set these if you continue to see issues. Be wary of
  // timeouts if you're dealing with large files.
  // Time limit (ms) for receiving a response.
  requestTimeout: 5_000,
  // Time limit (ms) for establishing a connection.
  connectionTimeout: 5_000,
},
```
Fixes https://github.com/payloadcms/payload/issues/11473
Previously, when `disablePayloadAccessControl: true` was set, client
uploads didn't work properly. The reason is that
`addDataAndFileToRequest` expects `staticHandler` to be defined, but we
didn't add one in the `disablePayloadAccessControl: true` case.
This PR changes that: if `clientUploads` is configured, it pushes a
"proxied" handler that responds only when the file was requested in
the context of a client upload (from `addDataAndFileToRequest`).
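Roughly, the shape of the change is this (a sketch where every name is illustrative, not the actual Payload source):
```ts
// Sketch of the described behavior; all names here are illustrative.
type StaticHandler = (req: Request) => Promise<Response | null>

const buildHandlers = (opts: {
  clientUploads?: boolean
  disablePayloadAccessControl?: boolean
  isClientUploadRequest: (req: Request) => boolean
  staticHandler: StaticHandler
}): StaticHandler[] => {
  const handlers: StaticHandler[] = []
  if (opts.disablePayloadAccessControl && opts.clientUploads) {
    // Push a "proxied" handler that responds only when the file is
    // requested in the context of a client upload; otherwise it yields
    // so the file keeps being served by the storage provider directly.
    handlers.push(async (req) => {
      if (!opts.isClientUploadRequest(req)) {
        return null
      }
      return opts.staticHandler(req)
    })
  }
  return handlers
}
```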
### What?
Fixes client uploads when the storage collection config has the
`prefix` property configured. Previously, they failed with "Object key
was not found".
### Why?
This is expected to work.
### How?
The client upload handler now receives `prefix` in its props. It then
threads it through `clientUploadContext` to the server-side
`staticHandler` and on to `getFilePrefix`, which checks for
`clientUploadContext.prefix` and returns it if present. Previously,
`staticHandler` tried to load the file without taking the prefix into
account.
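Condensed, the server-side check looks something like this (a sketch with simplified, illustrative types; the real `getFilePrefix` signature differs):
```ts
// Simplified sketch; types and the fallback are illustrative.
type ClientUploadContext = { prefix?: string } | undefined

export const getFilePrefix = async (
  clientUploadContext: ClientUploadContext,
  lookupPrefixFromDoc: () => Promise<string>,
): Promise<string> => {
  // Prefer the prefix threaded from the client upload handler...
  if (clientUploadContext?.prefix !== undefined) {
    return clientUploadContext.prefix
  }
  // ...otherwise resolve it the usual way (e.g. from the document).
  return lookupPrefixFromDoc()
}
```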
This changes only these adapters:
* S3
* Azure
* GCS
With the Vercel Blob adapter, `prefix` already works correctly.
Ensures that even if you pass `enabled: false` to the storage adapter
options, e.g.:
```ts
s3Storage({
enabled: false,
collections: {
[mediaSlug]: true,
},
bucket: process.env.S3_BUCKET,
config: {
credentials: {
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
},
},
})
```
the client handler component is still added to the import map. This
prevents errors when you use the adapter only in production but don't
regenerate the import map before running the build.
### What?
Within collections using the `storage-s3` plugin, we eventually start
receiving the following warning:
`@smithy/node-http-handler:WARN socket usage at capacity=50 and 156
additional requests are enqueued. See
https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html
or increase socketAcquisitionWarningTimeout=(millis) in the
NodeHttpHandler config.`
Also referenced in this issue: #6382
The
[solution](https://github.com/payloadcms/payload/issues/6382#issuecomment-2325468104)
provided by @denolfe in that issue only delayed the reappearance of the
problem somewhat, but did not resolve it.
### Why?
As far as I understand, in the plugin's `staticHandler`, when an item
fetched from storage is already cached, the cached result is returned
immediately without consuming the stream. As per
[this](https://github.com/aws/aws-sdk-js-v3/blob/main/supplemental-docs/CLIENTS.md#nodejs-requesthandler)
entry in the aws-sdk docs, if a streaming response is not read or
manually destroyed, the socket might not be properly closed.
### How?
Before returning the cached items, manually destroy the streaming
response to make certain the socket is properly closed. Additionally,
add an error check to also consume/destroy the streaming response in
case an error occurs, so as not to leave orphaned sockets.
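The core of the pattern is tiny; a minimal sketch, assuming the SDK handed back a Node stream (`destroyUnreadBody` is an illustrative helper, not plugin code):
```ts
import { Readable } from 'node:stream'

// Illustrative helper: an unread streaming body keeps its socket open,
// so destroy it before answering from cache (or after an error).
export const destroyUnreadBody = (object: { Body?: unknown }): void => {
  if (object.Body instanceof Readable) {
    object.Body.destroy()
  }
}
```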
Fixes #6382
Previously we had been downgrading rimraf to v3 simply to handle clean
scripts with glob patterns across platforms. In rimraf v4 and newer,
you can add `-g` to use glob patterns.
This change updates rimraf and adds the flag to the package scripts
that use globs, keeping them Windows-compatible.
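Illustratively, a clean script changes along these lines (the actual script names and globs in the repo may differ):
```diff
- "clean": "rimraf dist *.tsbuildinfo"
+ "clean": "rimraf -g dist *.tsbuildinfo"
```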
This PR makes changes to every storage adapter in order to add
browser-based caching: it returns ETags, checks for them in incoming
requests, and responds with a status code of `304` so the data doesn't
have to be sent again.
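In rough terms, the round-trip looks like this (a sketch assuming a fetch-style handler; `withEtagCaching` is illustrative, not the adapters' actual code):
```ts
// Sketch of the ETag round-trip in a fetch-style static handler.
export const withEtagCaching = (
  etag: string,
  req: Request,
  serve: () => Response,
): Response => {
  // The browser already holds this version: skip the body entirely.
  if (req.headers.get('if-none-match') === etag) {
    return new Response(null, { status: 304 })
  }
  const res = serve()
  res.headers.set('ETag', etag) // lets the browser cache this version
  return res
}
```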
Performance improvements for subsequent cached requests (benchmark
screenshot omitted).
This respects `disableCache` in the dev tools.
Also fixes a bug with getting the latest image when using the Vercel
Blob Storage adapter.
Should fix messed-up import suggestions and simplify all tsconfigs
through inheritance.
One main issue was that packages were inheriting `baseUrl: "."` from
the root tsconfig. This caused incorrect import suggestions that start
with "packages/...".
This PR ensures that packages do not inherit this `baseUrl: "."`
property, while ensuring the root, non-inherited tsconfig still keeps
it so the tests keep working (the importMap needs it).
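Schematically, the root config ends up looking something like this (illustrative contents, assuming the shared `tsconfig.base.json` that packages extend omits `baseUrl`):
```jsonc
// Root tsconfig.json — not extended by packages — keeps "baseUrl" so
// the importMap-dependent tests still resolve paths from the repo root.
{
  "extends": "./tsconfig.base.json",
  "compilerOptions": {
    "baseUrl": "."
  }
}
```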