
Dropbox API Support & Feedback

Find help with the Dropbox API from other developers.


API Lock Files Batch works for some files and then stops working


E_Fuks
Helpful | Level 5

Hi, 

 

We have built a script that locks files in Dropbox. For a large set of roughly 100,000 files it works great. We split the data into chunks of 2,500 files so that the API accepts the requests, making one API call per 2,500 files.
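The loop is roughly like this (a simplified sketch assuming the official Dropbox Python SDK; the access token and path list are placeholders, not our exact code):

# Minimal sketch of the chunked locking loop (Dropbox Python SDK assumed;
# access_token and paths are placeholders).
import dropbox
from dropbox.files import LockFileArg

CHUNK_SIZE = 2500  # 2,500 paths per files/lock_file_batch call

def lock_all(paths, access_token):
    dbx = dropbox.Dropbox(access_token)
    for start in range(0, len(paths), CHUNK_SIZE):
        chunk = paths[start:start + CHUNK_SIZE]
        entries = [LockFileArg(path=p) for p in chunk]
        dbx.files_lock_file_batch(entries)  # one API call per chunk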

After that we get a 5xx error. Restarting the script gets another roughly 100,000 files through. Adding a loop that waits 5-10 minutes before retrying to lock the same files gets another 2,500 files locked before the error comes back. From that point on, the loop locks 2,500 files, hits the 5xx error, waits, locks the next 2,500, and so on.

 

Any thoughts or ideas? If it were an API limitation, restarting the script shouldn't get another 100,000 files through, should it?

 

3 Replies

Здравко
Legendary | Level 20

Hi @E_Fuks,

If you don't receive an error about too many files ("too_many_files"), it's better to lock the files in one call (or in as few calls as possible). Every call needs time to get GIL ownership, which is why your algorithm slows down and increases the load on Dropbox's servers. The error in the call response seems like a mistake, since a 'too_many_write_operations' error would better represent the situation. If you receive such an error, repeat the call after an exponentially growing delay, as many times as needed.
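For example, something like this (just a sketch; it assumes the Dropbox Python SDK, where a 5xx response raises dropbox.exceptions.InternalServerError, and the attempt limit and base delay are arbitrary):

# Retry one lock batch with an exponentially growing delay (sketch).
import time
from dropbox.exceptions import InternalServerError

def lock_batch_with_backoff(dbx, entries, max_attempts=8, delay=1.0):
    for attempt in range(max_attempts):
        try:
            return dbx.files_lock_file_batch(entries)
        except InternalServerError:
            time.sleep(delay)   # wait, then retry the same batch
            delay *= 2          # double the delay on every failure
    raise RuntimeError("lock_file_batch still failing after retries")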

Hope this helps.

E_Fuks
Helpful | Level 5

Thanks, that was also my initial thought. But we get an 'argument list too long' error when using 5,000 files or more.

That's why we dropped the number of files per call.

Здравко
Legendary | Level 20

@E_Fuks wrote:

... For a large set of roughly 100,000 files it works great. ...



@E_Fuks wrote:

... But we get an 'argument list too long' error when using 5,000 files or more. ...


Aren't you contradicting yourself? 😁

Anyway... When the file count is too big, just decrease it. When an overload happens, just delay and repeat the call and, if needed, increase the delay exponentially. 😉 That's it.
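Put together, something along these lines (a sketch only; it assumes the Python SDK's ApiError, InternalServerError and RateLimitError exceptions, and simply splits the batch in half whenever the call rejects it as too big):

# Shrink the batch when it is too big; back off and retry when overloaded (sketch).
import time
from dropbox.exceptions import ApiError, InternalServerError, RateLimitError

def lock_adaptively(dbx, entries, delay=1.0):
    if not entries:
        return
    try:
        dbx.files_lock_file_batch(entries)
    except ApiError:
        if len(entries) == 1:
            raise                       # a single path failing is not a size problem
        mid = len(entries) // 2         # batch rejected as too big: split and retry the halves
        lock_adaptively(dbx, entries[:mid], delay)
        lock_adaptively(dbx, entries[mid:], delay)
    except (InternalServerError, RateLimitError):
        time.sleep(delay)               # overloaded: wait and retry the same batch
        lock_adaptively(dbx, entries, delay * 2)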

Good luck.
