Dropbox API Support & Feedback
Hello @Greg-DB
We have developed a tool to help our customers move data from the Windows file system to Dropbox.
Uploads are performed in batches of 8 files using the Dropbox API methods UploadSessionStartAsync, UploadSessionFinishBatchAsync, and UploadSessionFinishBatchCheckAsync. We tested chunk sizes between 10 and 40 MB.
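At a high level, the per-batch flow in our tool looks roughly like the sketch below (simplified, with the .NET SDK; the access token, file list, chunk size, and target folder are placeholders, error handling is omitted, and non-empty files are assumed):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Dropbox.Api;
using Dropbox.Api.Files;

static class BatchUploadSketch
{
    // Uploads one batch of files chunk by chunk, then commits them with finish_batch.
    public static async Task UploadBatchAsync(string accessToken, IReadOnlyList<string> localPaths)
    {
        const int chunkSize = 20 * 1024 * 1024; // within the 10-40 MB range we tested
        using var client = new DropboxClient(accessToken);
        var finishArgs = new List<UploadSessionFinishArg>();

        foreach (var path in localPaths)
        {
            using var file = File.OpenRead(path);
            var buffer = new byte[chunkSize];
            string sessionId = null;
            ulong offset = 0;
            int read;

            while ((read = await file.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                // Sessions must be closed before they can be committed via finish_batch.
                bool isLast = file.Position == file.Length;
                using var chunk = new MemoryStream(buffer, 0, read);

                if (sessionId == null)
                {
                    var start = await client.Files.UploadSessionStartAsync(close: isLast, body: chunk);
                    sessionId = start.SessionId;
                }
                else
                {
                    var cursor = new UploadSessionCursor(sessionId, offset);
                    await client.Files.UploadSessionAppendV2Async(cursor, close: isLast, body: chunk);
                }
                offset += (ulong)read;
            }

            finishArgs.Add(new UploadSessionFinishArg(
                new UploadSessionCursor(sessionId, offset),
                new CommitInfo("/migrated/" + Path.GetFileName(path))));
        }

        // Commit all files in one call, then poll until the async job completes.
        var launch = await client.Files.UploadSessionFinishBatchAsync(finishArgs);
        if (launch.IsAsyncJobId)
        {
            UploadSessionFinishBatchJobStatus status;
            do
            {
                await Task.Delay(TimeSpan.FromSeconds(1));
                status = await client.Files.UploadSessionFinishBatchCheckAsync(launch.AsAsyncJobId.Value);
            } while (status.IsInProgress);
        }
    }
}
```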
The upload speed we saw with our tool varied from 1 Mbps to 10 Mbps (megabits per second), which is very slow. We want our tool to upload data on the order of several TB, which at these speeds would take an unreasonably long time to complete.
Our experience with a third-party tool, CFP (Cloud Fast Path), was very good: we saw speeds of up to 210 Mbps (megabits per second) on the same machine.
We tested both tools on a machine in our network as well as on an Azure VM.
How can we optimize and improve our tool to achieve faster upload speeds?
Thanks,
Gagan
It sounds like you're using all of the available batch endpoints for this use case, so the only remaining change that comes to mind would be to increase the chunk size, so that you use fewer requests to send the file data overall.
You can use a chunk size of up to 150 MB, but be aware that increasing the chunk size increases how long each request takes, and consequently the chance of the network connection failing before it completes. Further, using a larger chunk size means more data has to be re-sent if a request does need to be retried.
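For illustration only (this is a sketch, not an official recommendation: the retry helper, its retry count, and its back-off values are assumptions, while the SDK calls are the ones already named in this thread), a larger chunk with a simple per-chunk retry could look like this:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Dropbox.Api;
using Dropbox.Api.Files;

static class LargeChunkSketch
{
    // Up to 150 MB per request is allowed; larger chunks mean fewer requests,
    // but also more data to re-send when a single request fails.
    const int ChunkSize = 150 * 1024 * 1024;

    // Appends one chunk with a simple retry. Real code should narrow the catch
    // to transient network/SDK errors and handle any offset mismatch reported
    // by the API if an earlier attempt partially succeeded.
    public static async Task AppendChunkWithRetryAsync(
        DropboxClient client, string sessionId, ulong offset,
        byte[] buffer, int count, bool isLast)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using var chunk = new MemoryStream(buffer, 0, count);
                var cursor = new UploadSessionCursor(sessionId, offset);
                await client.Files.UploadSessionAppendV2Async(cursor, close: isLast, body: chunk);
                return;
            }
            catch (Exception) when (attempt < 3)
            {
                // The whole chunk is re-sent on each attempt, which is the
                // cost of using larger chunks noted above.
                await Task.Delay(TimeSpan.FromSeconds(5 * attempt));
            }
        }
    }
}
```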