## Fix
We were able to narrow it down to this call
816fb28f55/packages/plugin-cloud-storage/src/utilities/getFilePrefix.ts (L26-L41)
Adding `draft: true` fixes the issue. It seems that the `prefix` can
only be found in a draft, and without `draft: true` those drafts aren't
searched.
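Roughly, the fix amounts to passing `draft: true` to that lookup. A minimal sketch, assuming the lookup is a `payload.find` call on the upload collection (the exact code is in the file linked above):
```ts
// Simplified sketch of the lookup inside getFilePrefix, not the exact upstream code.
const files = await req.payload.find({
  collection: collection.slug,
  depth: 0,
  draft: true, // also search drafts, which is where the prefix may live
  limit: 1,
  pagination: false,
  where: { filename: { equals: filename } },
})

const prefix = files?.docs?.[0]?.prefix ?? ''
```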
### Issue reproduction
In the `_community` test folder, enable versioning for the media collection and
install the `s3Storage` plugin (see the Git patch below). I use `minio` as a
local S3-compatible backend and run the app with:
`AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin
START_MEMORY_DB=true pnpm dev _community`.
Next, open the media collection and create a new entry. Then open that
entry, remove the file it currently has, and upload a new file. Save as
draft.
Now the media can no longer be accessed and the thumbnails are broken.
If you make an edit but save it by publishing, the issue goes away. I
also couldn't reproduce this by adding a text field, changing it, and
saving the document as a draft.
```diff
diff --git test/_community/collections/Media/index.ts test/_community/collections/Media/index.ts
index bb5edd0349..689423053c 100644
--- test/_community/collections/Media/index.ts
+++ test/_community/collections/Media/index.ts
@@ -9,6 +9,9 @@ export const MediaCollection: CollectionConfig = {
read: () => true,
},
fields: [],
+ versions: {
+ drafts: true,
+ },
upload: {
crop: true,
focalPoint: true,
diff --git test/_community/config.ts test/_community/config.ts
index ee1aee6e46..c81ec5f933 100644
--- test/_community/config.ts
+++ test/_community/config.ts
@@ -7,6 +7,7 @@ import { devUser } from '../credentials.js'
import { MediaCollection } from './collections/Media/index.js'
import { PostsCollection, postsSlug } from './collections/Posts/index.js'
import { MenuGlobal } from './globals/Menu/index.js'
+import { s3Storage } from '@payloadcms/storage-s3'
const filename = fileURLToPath(import.meta.url)
const dirname = path.dirname(filename)
@@ -24,6 +25,21 @@ export default buildConfigWithDefaults({
// ...add more globals here
MenuGlobal,
],
+ plugins: [
+ s3Storage({
+ enabled: true,
+ bucket: 'amboss',
+ config: {
+ region: 'eu-west-1',
+ endpoint: 'http://localhost:9000',
+ },
+ collections: {
+ media: {
+ prefix: 'media',
+ },
+ },
+ }),
+ ],
onInit: async (payload) => {
await payload.create({
collection: 'users',
```
## Screen recording
https://github.com/user-attachments/assets/b13be4a3-e858-427a-8bfa-6592b87748ee
There are cases where a storage plugin is disabled during development and
enabled in production. This results in import maps that differ depending on
whether they were generated in development or in production.
In many cases, those import maps are generated only during development; in
production we simply re-use what was generated locally. This leads to missing
import map entries for plugins that are disabled during development.
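For example, a setup along these lines (the env check and env var names are illustrative) keeps the plugin disabled locally while enabling it in production, yet its import map entries still need to exist in the locally generated import map:
```ts
import { vercelBlobStorage } from '@payloadcms/storage-vercel-blob'

// Illustrative only: disabled during local development, enabled in production.
export const blobStorage = vercelBlobStorage({
  collections: { media: true },
  enabled: process.env.NODE_ENV === 'production',
  token: process.env.BLOB_READ_WRITE_TOKEN ?? '',
})
```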
This PR ensures the import map entries are added regardless of the
enabled state of those plugins. This is necessary so our
generate-templates script does not omit the Vercel Blob storage import map
entry.
**BREAKING CHANGE:**
This bumps the **minimum required Next.js** version from 15.0.0 to
15.2.3. This update is necessary due to a critical security
vulnerability found in earlier Next.js versions, which requires an
exception to our standard semantic versioning process.
Additionally, this bumps all templates to the latest Next.js and Payload
versions.
Fixes
https://github.com/payloadcms/payload/issues/11731#issue-2923088087
Currently, only two `s3Storage()` instances with `clientUploads: true` work.
If you add a third instance, uploading files fails with
"Collection credit-certificates was not found in S3 options".
There is already an implementation that changes the
`/storage-s3-generate-signed-url` URL for each new s3Storage plugin
instance so that multiple instances don't break each other. Currently
the code looks like this:
```ts
/**
* Tracks how many times the same handler was already applied.
* This allows to apply the same plugin multiple times, for example
* to use different buckets for different collections.
*/
let handlerCount = 0
for (const endpoint of config.endpoints) {
if (endpoint.path === serverHandlerPath) {
handlerCount++
}
}
if (handlerCount) {
serverHandlerPath = `${serverHandlerPath}-${handlerCount}`
}
```
When we print the endpoints generated by this code, we get:
```ts
{
handler: [AsyncFunction (anonymous)],
method: 'post',
path: '/storage-s3-generate-signed-url'
},
{
handler: [AsyncFunction (anonymous)],
method: 'post',
path: '/storage-s3-generate-signed-url-1'
},
{
handler: [AsyncFunction (anonymous)],
method: 'post',
path: '/storage-s3-generate-signed-url-1'
}
```
As you can see, the path of the 3rd instance is the same as that of the 2nd
instance. Presumably this functionality was originally tested with only two
instances, allowing the bug to slip through. This completely breaks uploads
for the 3rd instance.
We need to change the conditional that checks whether the endpoint already
exists to use `.startsWith()`:
```ts
/**
* Tracks how many times the same handler was already applied.
* This allows to apply the same plugin multiple times, for example
* to use different buckets for different collections.
*/
let handlerCount = 0
for (const endpoint of config.endpoints) {
// We want to match on 'path', 'path-1', 'path-2', etc.
if (endpoint.path?.startsWith(serverHandlerPath)) {
handlerCount++
}
}
if (handlerCount) {
serverHandlerPath = `${serverHandlerPath}-${handlerCount}`
}
```
With the fix we can see that the endpoints increment correctly, allowing
a truly arbitrary number of `s3Storage()` instances with client uploads.
```ts
{
handler: [AsyncFunction (anonymous)],
method: 'post',
path: '/storage-s3-generate-signed-url'
},
{
handler: [AsyncFunction (anonymous)],
method: 'post',
path: '/storage-s3-generate-signed-url-1'
},
{
handler: [AsyncFunction (anonymous)],
method: 'post',
path: '/storage-s3-generate-signed-url-2'
}
```
Fixes https://github.com/payloadcms/payload/issues/11473
Previously, when `disablePayloadAccessControl: true` was set, client
uploads did not work properly. The reason is that
`addDataAndFileToRequest` expects a `staticHandler` to be defined, but we
don't add one when `disablePayloadAccessControl: true` is set.
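For context, the combination involved looks roughly like this (bucket and collection names are placeholders):
```ts
import { s3Storage } from '@payloadcms/storage-s3'

export const storagePlugin = s3Storage({
  bucket: 'media', // placeholder
  clientUploads: true,
  collections: {
    media: {
      // Files are served directly from the bucket rather than through Payload,
      // so previously no static handler was registered for this collection.
      disablePayloadAccessControl: true,
    },
  },
  config: { region: 'eu-west-1' },
})
```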
This PR changes that: if `clientUploads` is enabled, we still push a
"proxied" handler that only responds when the file is requested in the
context of a client upload (from `addDataAndFileToRequest`).
When a file is uploaded via client-side upload, we then invalidate it on the
server side by re-uploading it. This works fine with most adapters, since
they simply replace the old file under the same key. UploadThing works
differently and generates a new key every time.
Example of the issue:
<img width="611" alt="image"
src="https://github.com/user-attachments/assets/9c01b52a-d159-4f32-9f66-3b5fbadab7b4"
/>
Now, we delete the old file before re-uploading.
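A rough sketch of the idea, assuming the adapter has an `UTApi` instance and knows the key of the previously stored file (names are illustrative, not the actual adapter code):
```ts
import { UTApi } from 'uploadthing/server'

// Reads UploadThing credentials from the environment.
const utApi = new UTApi()

// Delete the stale file before the server-side re-upload so the old key
// doesn't linger as an orphaned object.
async function replaceFile(oldKey: string | undefined, upload: () => Promise<void>): Promise<void> {
  if (oldKey) {
    await utApi.deleteFiles([oldKey])
  }
  await upload()
}
```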
### What?
Fixes client uploads when the storage collection config has the `prefix`
property configured. Previously, they failed with "Object key was not
found".
### Why?
This is expected to work.
### How?
The client upload handler now receives `prefix` as one of its props. It
threads it through to the server-side `staticHandler` via
`clientUploadContext`, and from there to `getFilePrefix`, which checks for
`clientUploadContext.prefix` and returns it when present.
Previously, `staticHandler` tried to load the file without taking the
prefix into account.
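A simplified sketch of that short-circuit (not the exact upstream code):
```ts
// If the incoming client upload context already carries a prefix, use it
// directly instead of querying the collection for the stored document.
const prefixFromClientUpload = (clientUploadContext: unknown): string | undefined => {
  if (
    clientUploadContext &&
    typeof clientUploadContext === 'object' &&
    'prefix' in clientUploadContext &&
    typeof clientUploadContext.prefix === 'string'
  ) {
    return clientUploadContext.prefix
  }
  return undefined // fall back to the document lookup
}
```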
This changes only these adapters:
* S3
* Azure
* GCS
With the Vercel Blob adapter, `prefix` works correctly.
This bumps Next.js to 15.2.0 in our monorepo, as well as all `@types/react` and `@types/react-dom` versions. Additionally, it removes the obsolete `peerDependencies` property from our root package.json.
This PR also fixes bugs introduced by Next.js 15.2.0, which highlights why running our test suite against the latest Next.js version, to make sure Payload stays compatible, is important.
## 1. handleWhereChange running endlessly
Upgrading to Next.js 15.2.0 caused `handleWhereChange` to be continuously called by a `useEffect` when the list view filters were opened, leading to a React error. I did not investigate why the Next.js upgrade caused this, but this PR fixes it by making use of the more predictable `useEffectEvent`.
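A generic sketch of that pattern (illustrative only, not the actual Payload hook or component):
```ts
import { useCallback, useEffect, useRef, useState } from 'react'

// Minimal stand-in for a useEffectEvent-style hook: returns a stable function
// that always invokes the latest callback it was given.
function useEffectEvent<Args extends unknown[]>(fn: (...args: Args) => void) {
  const ref = useRef(fn)
  ref.current = fn
  return useCallback((...args: Args) => ref.current(...args), [])
}

// Hypothetical filter state: the effect depends only on `conditions`, so a
// `handleWhereChange` prop with a new identity on every render can no longer
// retrigger the effect in a loop.
function useWhereSync(handleWhereChange: (where: unknown) => void) {
  const [conditions, setConditions] = useState<unknown[]>([])

  const emitChange = useEffectEvent(() => handleWhereChange({ or: conditions }))

  useEffect(() => {
    emitChange()
  }, [conditions, emitChange]) // emitChange has a stable identity

  return setConditions
}
```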
## 2. Custom Block and Array label React key errors
Upgrading to Next.js 15.2.0 caused React key errors when rendering custom block and array row labels on the server. This has been fixed by rendering them with a key.
## 3. Table React key errors
Since Next.js 15.2.0, a React key error is thrown when rendering a `Table`.
This PR modifies `tsconfig.base.json` by setting the following
strictness properties to true: `strict`, `noUncheckedIndexedAccess` and
`noImplicitOverride`.
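As a quick illustration of what `noUncheckedIndexedAccess` catches (hypothetical snippet):
```ts
const buckets: string[] = []

// Without the flag, `first` is typed as `string`; with it, `string | undefined`.
const first = buckets[0]

// first.toUpperCase()   // compile error: 'first' is possibly 'undefined'
first?.toUpperCase() // OK: the possible `undefined` is handled explicitly
```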
In packages where compilation errors were observed, these settings were
opted out of, and TODO comments were added to make it easier to track the
roadmap for converting everything to strict mode.
The following packages now have increased strictness, which prevents new
errors from being accidentally introduced:
- storage-vercel-blob
- storage-s3*
- storage-gcs
- plugin-sentry
- payload-cloud*
- email-resend*
- email-nodemailer*
*These packages already had `strict: true`, but now have
`noUncheckedIndexedAccess` and `noImplicitOverride`.
Note that this only affects the `/packages` folder, not `/templates`,
`/test`, or `/examples`, which use a different `tsconfig`.
Previously, we had been downgrading rimraf to v3 simply to handle `clean`
scripts with glob patterns across platforms. In rimraf v4 and newer, you can
pass `-g` to enable glob patterns, e.g. `rimraf -g "dist/**"`.
This change updates rimraf and adds that flag to the glob-using scripts in
our packages so they stay Windows-compatible.