Bug description
When using DigitalOcean Spaces as the S3 attachment backend, some uploads fail with HTTP/2 stream errors:
uploading object rdHcTL5FTE0s failed: Put "https://nyc3.digitaloceanspaces.com/ntfy-attachments/rdHcTL5FTE0s":
http2: Transport: cannot retry err [stream error: stream ID 19; PROTOCOL_ERROR; received from peer]
after Request.Body was written; define Request.GetBody to avoid this error
Users see an HTTP 500 (ntfy error 50001) response when publishing messages with attachments.
Root cause
Go's default HTTP client negotiates HTTP/2 via ALPN. Some S3-compatible providers (DigitalOcean Spaces, MinIO, and others) have incomplete or buggy HTTP/2 implementations that can send RST_STREAM with PROTOCOL_ERROR mid-upload. Go's HTTP/2 transport cannot retry the request because the streaming body has already been consumed (no GetBody is defined).
This is a well-documented issue across multiple projects:
- rclone added --s3-disable-http2 to work around it: PROTOCOL_ERROR on S3 rclone/rclone#4673
Fix
Add a disable_http2=true query parameter to the S3 URL configuration that forces the S3 client to use HTTP/1.1 only:
attachment-cache-dir: "s3://KEY:SECRET@BUCKET/PREFIX?region=nyc3&endpoint=https://nyc3.digitaloceanspaces.com&disable_http2=true"
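Reading the new parameter out of the configured URL is straightforward with net/url (a sketch; disableHTTP2FromURL is a hypothetical helper name, and the real ntfy config parsing may differ):

```go
package main

import (
	"fmt"
	"net/url"
)

// disableHTTP2FromURL reports whether the S3 attachment-cache-dir URL
// carries disable_http2=true in its query string.
func disableHTTP2FromURL(raw string) (bool, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return false, err
	}
	return u.Query().Get("disable_http2") == "true", nil
}

func main() {
	on, _ := disableHTTP2FromURL("s3://KEY:SECRET@BUCKET/PREFIX?region=nyc3&endpoint=https://nyc3.digitaloceanspaces.com&disable_http2=true")
	fmt.Println(on) // true
}
```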
When set, the HTTP client is constructed with TLSNextProto set to an empty map, which prevents HTTP/2 ALPN negotiation — the same approach used by rclone.