icydog
3 years ago · Explorer | Level 4
API rate limiting on file download
I am using Duplicacy to back up data to my Dropbox Business account. It is a Dropbox API app and writes to the Apps directory. The issue is that when downloading (to restore or verify a backup), I'm rate-limited to an effective download rate of about 20 MB/s, which is much slower than I'd like, especially in a disaster recovery scenario.
Duplicacy seems to be making one `/2/files/get_metadata` call and one `/2/files/download` call per chunk it downloads. Each chunk (file) tends to be around 5-20 MB. Duplicacy does respect the Retry-After header and backs off as requested.
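For illustration, here's a minimal Go sketch (Duplicacy itself is written in Go) of that Retry-After handling; the `retryOn429` helper, the placeholder token, and the chunk path are my own assumptions, not Duplicacy's actual code:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
	"time"
)

// retryOn429 issues the request produced by makeReq and, on HTTP 429,
// sleeps for the number of seconds given in the Retry-After header
// (falling back to a short linear delay if the header is absent)
// before retrying, up to maxRetries retries.
func retryOn429(client *http.Client, makeReq func() (*http.Request, error), maxRetries int) (*http.Response, error) {
	for attempt := 0; ; attempt++ {
		req, err := makeReq()
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests || attempt >= maxRetries {
			return resp, nil
		}
		wait := time.Duration(attempt+1) * time.Second // fallback delay
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil {
				wait = time.Duration(secs) * time.Second
			}
		}
		resp.Body.Close() // close before retrying
		time.Sleep(wait)
	}
}

func main() {
	makeReq := func() (*http.Request, error) {
		// /2/files/download is a content endpoint: the argument goes in
		// the Dropbox-API-Arg header and the request body stays empty.
		// The token and chunk path below are placeholders.
		req, err := http.NewRequest("POST", "https://content.dropboxapi.com/2/files/download", nil)
		if err != nil {
			return nil, err
		}
		req.Header.Set("Authorization", "Bearer <ACCESS_TOKEN>")
		req.Header.Set("Dropbox-API-Arg", `{"path": "/chunks/00/010203"}`)
		return req, nil
	}
	resp, err := retryOn429(http.DefaultClient, makeReq, 5)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```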
I have two questions:
- Is the Dropbox API rate limit based on number of API calls, throughput, or both?
- How could I optimize this to maximize overall throughput on download?
3 Replies
- Greg-DB
Dropbox Staff
1. The Dropbox API rate limiting system operates on the number of calls per time period, not bandwidth/throughput.
2. There may be some optimizations that could be made, but they would need to be made by the developer of the app. It sounds like you're an end user of Duplicacy, not the developer, so you may want to reach out to them about this. For example, based on your description, I'd suggest they investigate whether they actually need to make one /2/files/get_metadata call per "chunk"; for a large number of files, that adds up to a large number of calls. They may want to look into using /2/files/list_folder[/continue] instead, since each call can return multiple entries, enabling apps to list many files/folders under any particular path with far fewer calls.
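To make that concrete, here's a rough Go sketch of paging with /2/files/list_folder and /2/files/list_folder/continue, where each call can return up to roughly 2,000 entries instead of one per get_metadata call. The `rpc` helper, the trimmed-down `listPage` struct, and the placeholder token and path are illustrative assumptions, not Dropbox SDK code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// listPage holds the subset of the list_folder response used here.
type listPage struct {
	Entries []struct {
		Tag  string `json:".tag"` // "file" or "folder"
		Name string `json:"name"`
	} `json:"entries"`
	Cursor  string `json:"cursor"`
	HasMore bool   `json:"has_more"`
}

// rpc POSTs a JSON body to a Dropbox RPC endpoint and decodes the reply.
func rpc(token, url string, args any) (*listPage, error) {
	body, err := json.Marshal(args)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("%s: HTTP %d", url, resp.StatusCode)
	}
	var page listPage
	if err := json.NewDecoder(resp.Body).Decode(&page); err != nil {
		return nil, err
	}
	return &page, nil
}

// listAll pages through one folder, following the cursor until
// has_more is false, so many entries arrive per API call.
func listAll(token, path string) ([]string, error) {
	page, err := rpc(token, "https://api.dropboxapi.com/2/files/list_folder",
		map[string]any{"path": path, "recursive": false, "limit": 2000})
	if err != nil {
		return nil, err
	}
	var names []string
	for {
		for _, e := range page.Entries {
			names = append(names, e.Name)
		}
		if !page.HasMore {
			return names, nil
		}
		page, err = rpc(token, "https://api.dropboxapi.com/2/files/list_folder/continue",
			map[string]any{"cursor": page.Cursor})
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	names, err := listAll("<ACCESS_TOKEN>", "/chunks/00")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(names), "entries listed")
}
```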
- icydog
Explorer | Level 4
Wow, it's nice to see you're still at Dropbox, Greg! Thanks for the very informative and concise answer as always, especially on #1.
Duplicacy is open-source so it can be changed, but its architecture makes removing the get_metadata call here difficult (for one thing, Duplicacy is generic and needs to work with a variety of storage backends). list_folder doesn't work either because successive chunks are unlikely to be stored in the same directory, and enumerating the entire storage with the recursive option is infeasible for large data sets (I have millions of files). I appreciate the pointers!
- Greg-DB
Dropbox Staff
Thanks for the additional context! In that case, I'm sending this along as a feature request for a batch version of /2/files/get_metadata, but I can't promise if or when that might be implemented.