We have designed a workflow application that relies heavily on Dropbox's file handling capabilities. Essentially, we have a server-side application that moves files (high-resolution images, in our case) from one location to another within our Dropbox account. To accomplish this, we set up an app in the App Console and generated an OAuth token, which we use when making batch (move/copy) requests to the Dropbox API via its Java SDK.
We currently process hundreds of folders, each containing 10-12 high-quality images (2-5 MB each). The system works fine most of the time, but recently we have been hitting frequent Dropbox API errors citing an internal error and too many write operations. My assumption is that, since we use a single token for all requests, the activity concentrated in one user account is what triggers these issues.
One solution we are currently working on is reducing the number of entries in a single batch move or copy request. But that will increase the number of batch operations we submit to Dropbox (a concern given the API rate limits).
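For context, this is roughly how we split the full list of relocation entries into smaller batches before submitting each one. It's a minimal, self-contained sketch (plain Java, no SDK types); in the real code each chunk would be mapped to `RelocationPath` entries and passed to the SDK's batch endpoint:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartitioner {
    // Split a list of entries into chunks of at most `size`,
    // so each chunk can be submitted as one batch request.
    static <T> List<List<T>> partition(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(new ArrayList<>(items.subList(i, Math.min(i + size, items.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> entries = new ArrayList<>();
        for (int i = 0; i < 25; i++) entries.add(i);
        List<List<Integer>> chunks = partition(entries, 10);
        System.out.println(chunks.size());        // 3 chunks: 10 + 10 + 5
        System.out.println(chunks.get(2).size()); // 5
    }
}
```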
Is there some better strategy to integrate with Dropbox API for our workflow scenario?
I understand this is more of a system design question, but any help would be deeply appreciated.
The 'too_many_write_operations' error indicates "lock contention". That's not explicit rate limiting, but rather a result of how Dropbox works on the backend. This is a technical inability to make a modification in the account or shared folder at the time of the API call. This error indicates that there was simultaneous activity in the account or shared folder preventing your app from making the state-modifying call (e.g., adding, editing, moving, or deleting files) it is attempting. (It's not based on using a single access token; technically it's per-namespace. The Namespace Guide has more information on what namespaces are.)
The simultaneous activity could be coming from your app itself, or elsewhere, e.g., from the user's desktop client. It can come from the same user, or another member of a shared folder. You can find more information about lock contention here:
In short, to avoid this error, you should avoid making multiple concurrent state modifications. E.g., don't issue multiple such requests at a time, and use batch endpoints whenever possible. That won't guarantee that you won't run into this error though, as contention can still come from other sources, so you may also want to implement automatic retries in your app.
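The retry advice above can be sketched as a small exponential-backoff helper. This is a self-contained illustration, not SDK code: the `RetryableException` here is a hypothetical stand-in for whatever exception your SDK version surfaces for the `too_many_write_operations` tag, so check what your calls actually throw before wiring this up:

```java
import java.util.concurrent.Callable;

public class RetryWithBackoff {
    // Hypothetical stand-in for the SDK exception carrying the
    // too_many_write_operations error tag.
    static class RetryableException extends Exception {}

    // Retry `op` up to maxAttempts times, doubling the delay between attempts.
    static <T> T callWithRetry(Callable<T> op, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (RetryableException e) {
                if (attempt >= maxAttempts) throw e; // give up after the last attempt
                Thread.sleep(delay);                  // back off before retrying
                delay *= 2;                           // exponential backoff
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated operation: fails twice with the retryable error, then succeeds.
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new RetryableException();
            return "moved";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // moved after 3 attempts
    }
}
```

Adding some random jitter to the delay is also a common refinement, so that multiple retrying workers don't all hit the account again at the same moment.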
So, for the 'too_many_write_operations' issue, reducing the batch size wouldn't necessarily help (and in fact may make it worse; the batch methods are preferred since they only take one call for all files in the batch).
The 'internal error' issue may be somewhat alleviated by reducing the batch size though. If you share a failed job ID for one of those, we can look into it to see if we can offer any more specific insight. Feel free to open an API ticket if you wish to share it privately.