bb-jacin
3 years ago · Explorer | Level 3
/files/save_url has high rate of failure
Hi, I am currently testing the /files/save_url API endpoint and am experiencing a high rate of failure. I am testing with a set of 6 small image files stored in an S3-compatible storage environme...
nyou045
1 month ago · Explorer | Level 4
Looks like this is still a problem in 2026. Here's some example error output from my Python script:
FAILED after 9s https://nz-imagery.s3.amazonaws.com/auckland/auckland_2022_0.075m/rgb/2193/AY30_1000_4743.json 1162 bytes SaveUrlResult('async_job_id', 'XGNn-ChuPpYAAAAAAAAAAQ') SaveUrlJobStatus('failed', SaveUrlError('download_failed', None))
FAILED after 6s https://nz-imagery.s3.amazonaws.com/auckland/auckland_2022_0.075m/rgb/2193/AY31_1000_2427.json 1161 bytes SaveUrlResult('async_job_id', 'wE6jqobcPoEAAAAAAAAAAQ') SaveUrlJobStatus('failed', SaveUrlError('download_failed', None))
FAILED after 6s https://nz-imagery.s3.amazonaws.com/auckland/auckland_2022_0.075m/rgb/2193/AY31_1000_3031.json 1163 bytes SaveUrlResult('async_job_id', '8M1dCfp8QgUAAAAAAAAAAQ') SaveUrlJobStatus('failed', SaveUrlError('download_failed', None))
When it does work, performance is also much lower than I expected: it takes ~95s to save a 99.4MB file (about 1MB/s, or 8.374Mbit/s).
Copied after 95s https://nz-imagery.s3.amazonaws.com/auckland/auckland_2022_0.075m/rgb/2193/AY31_1000_3520.tiff 99435762 bytes
DB-Des
Dropbox Community Moderator
1 month ago
Hi nyou045,
Thank you for providing those details. Based on what we're seeing, the download failures you're encountering with the /2/files/save_url endpoint are due to rate limiting. In other words, your integration is sending too many requests within a short period of time, and the API is throttling those requests as a result.
As a recommendation, apps should be written to handle rate limit responses automatically. The best practice is to retry the request, respecting the 'Retry-After' header if one is given in the response, or using exponential back-off if not. I recommend referring to the error documentation and Error Handling Guide for more information.
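For example, with the official Python SDK, something along these lines could work. This is a minimal sketch, not production code: it assumes the SDK's RateLimitError exposes the server's suggested wait via its backoff attribute, and helper names like save_url_with_retry and wait_for_save are illustrative:

import time

from dropbox.exceptions import RateLimitError

def save_url_with_retry(dbx, path, url, max_retries=5):
    # dbx is a dropbox.Dropbox client. Calls /2/files/save_url,
    # retrying on rate limit responses.
    for attempt in range(max_retries):
        try:
            return dbx.files_save_url(path, url)
        except RateLimitError as e:
            # The SDK surfaces the server's suggested wait (from the
            # Retry-After header) as e.backoff; fall back to exponential
            # back-off if the server didn't provide one.
            time.sleep(e.backoff if e.backoff else 2 ** attempt)
    raise RuntimeError("save_url still rate limited after %d retries" % max_retries)

def wait_for_save(dbx, result, poll_interval=2):
    # save_url usually returns an async job; poll until it completes or fails.
    if result.is_complete():
        return result.get_complete()  # finished synchronously
    job_id = result.get_async_job_id()
    while True:
        status = dbx.files_save_url_check_job_status(job_id)
        if status.is_complete():
            return status.get_complete()
        if status.is_failed():
            raise RuntimeError("save_url failed: %s" % status.get_failed())
        time.sleep(poll_interval)

Usage would be something like result = save_url_with_retry(dbx, "/imagery/AY30_1000_4743.json", source_url) followed by wait_for_save(dbx, result), where the destination path is just an example. Note the status-check call can itself be rate limited, so the same retry treatment may be needed there too.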
Also, it's important to note that rate limiting doesn't reduce or "throttle" transfer speeds for requests that are successful. If you're seeing slow performance on requests that do succeed, that's most likely due to environmental factors such as network conditions between your infrastructure and Dropbox, bandwidth/latency constraints, DNS/proxy/firewall behavior, or resource constraints on the host running the requests (CPU, memory, disk, or concurrency limits).
nyou045
1 month ago · Explorer | Level 4
The rate limit seems too low given how slow each transfer is; could it be increased? I was trying to save several TB, and it took several days due to the lack of parallelisation. I expected the official Dropbox Python SDK to handle that. As for environmental factors: it's not my infrastructure, the source is AWS S3.
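Since the SDK won't parallelise requests for you, bounded client-side concurrency seems to be the only option. A minimal sketch, reusing the hypothetical save_url_with_retry and wait_for_save helpers above; max_workers is a guess to tune against the rate limiter:

from concurrent.futures import ThreadPoolExecutor, as_completed

def save_many(dbx, jobs, max_workers=4):
    # jobs: iterable of (dropbox_path, source_url) pairs.
    # More concurrency just means more rate limit responses, so tune
    # max_workers down if retries start to dominate.
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {
            pool.submit(save_url_with_retry, dbx, path, url): path
            for path, url in jobs
        }
        for fut in as_completed(futures):
            path = futures[fut]
            try:
                results[path] = wait_for_save(dbx, fut.result())
            except Exception as exc:  # record the failure, keep going
                results[path] = exc
    return results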