
Dropbox API Support & Feedback

Find help with the Dropbox API from other developers.


Rate limiting when uploading files with rclone

ncw
Collaborator | Level 8

I've received complaints from rclone users that file uploads are progressing really slowly.

 

Digging into it, what I see is this:

 

2020-09-03 11:44:33 DEBUG : too_many_requests/: Too many requests or write operations. Trying again in 15 seconds.

What happened there is rclone received the `upload failed: too_many_requests` error and backed off for 15 seconds according to the Retry-After header.

 

Dropbox didn't use to do this with rclone, so I suspect we've hit some sort of rate limiting.

 

Note that rclone has 35,000 users at Dropbox generating 60M API calls per day!

 

I'm not sure this is the right channel for this kind of support - maybe rclone should be a tech partner?

 

Any help much appreciated - thanks!

 

Nick Craig-Wood

Rclone lead developer

[email removed as per Community Guidelines]

Accepted Solution

Greg-DB
Dropbox Staff

Thanks for the detailed feedback! I'm sharing this with the team.


15 Replies

Greg-DB
Dropbox Staff

Thanks for the report, Nick! This may be due to some changes to how we return rate limit/lock contention errors and their Retry-After time windows. We'll look into it, but for reference, can you let me know, to the best of your knowledge, when you started seeing this?

 

Also, does rclone ever submit multiple uploads for the same "namespace" at the same time? If so, I recommend reviewing the Performance Guide, if you haven't already. In particular check out the "Batch Upload" section for guidance on how to most efficiently upload multiple files.

 

By the way, for support with issues like this, you can always find us on the forum here, or contact us directly by opening an API ticket here. Either way is fine. If you would additionally like to consider becoming a technology partner, you can find more information and an application form here.

ncw
Collaborator | Level 8

> We'll look into it, but for reference, can you let me know, to the best of your knowledge, when you started seeing this?

 

Unfortunately, these errors don't show up in the integration test logs; otherwise I'd have an accurate timeline for you.

 

I received the first user report about this on 2nd September 2020, and I verified it for myself today. The user who reported it was a new user, though, so it's possible (even likely) that the problem existed before then.

 

> Also, does rclone ever submit multiple uploads for the same "namespace" at the same time?

 

Yes, it does. My understanding is that a namespace can be a folder, and rclone does upload lots of files to the same folder, so this is indeed the problem area I'm seeing.

 

It looks like batch uploads could be helpful. Will they make much of a performance difference?

 

The API doesn't look hard to implement, but batch uploads are a really bad fit architecturally for rclone - I'll need to think about how that might work.

 

I ran my tests again just now, and it looks like rclone is rate limited to uploading about 2 files per second. After about 100 uploads I start seeing the `too_many_requests` errors with a Retry-After of 15s.

 

Does batch uploading work around that?

 

Thanks for the contact links - I couldn't find the developer ticket option earlier!

 
 

Greg-DB
Dropbox Staff

Thanks for the information! I'll follow up here once I have an update on this.

 

That batch upload functionality can improve performance overall, especially when you are seeing contention issues, but it will vary from case to case. It helps by only taking a lock once per batch, as opposed to once per file.

ncw
Collaborator | Level 8

I have implemented batching, and yes, it does make a HUGE difference. In fact, for my test directory with 200MB of small images (average size 400k), it transfers 20x faster!

 

I think the major disadvantage for rclone is that it can't check the hashes of the uploaded files any more. This is because, due to rclone's architecture, the batch only completes after rclone has finished with the input file. However, the user can run "rclone check" after transfers, and that is what I'll recommend to users of the batching feature.

 

So thank you for pointing me at that feature. It makes a huge difference.

Greg-DB
Dropbox Staff

Great, thanks for following up! I'm glad to hear it helps a lot in this case.

ncw
Collaborator | Level 8

My test user is uploading 10TB of images, so lots of 2-3MB files.

 

Using batching, the upload is proceeding very quickly (at about 35MB/s), which is great. However, every 30 minutes or so they get this error:

 

 

too_many_requests/.: Too many requests or write operations.

 

Along with a 300-second Retry-After, which rclone obeys.

 

Rclone is now using batching of 1000 files at a time. It only sends one batch at a time and waits for it to complete. I think those are all the ingredients for successful batching.
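The batching scheme described above amounts to simple chunking of the pending uploads; a minimal sketch (hypothetical helper, not rclone's actual code):

```go
package main

import "fmt"

// batches splits a list of pending uploads into groups of at most n,
// matching the one-batch-at-a-time scheme described above.
func batches(files []string, n int) [][]string {
	var out [][]string
	for len(files) > 0 {
		k := n
		if len(files) < k {
			k = len(files)
		}
		out = append(out, files[:k])
		files = files[k:]
	}
	return out
}

func main() {
	files := make([]string, 2500) // e.g. 2500 queued image uploads
	for _, b := range batches(files, 1000) {
		// Each group would be committed with one finish_batch call,
		// waiting for it to complete before sending the next.
		fmt.Println("batch of", len(b))
	}
	// prints: batch of 1000, batch of 1000, batch of 500
}
```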

 

If I upload lots of very small (10 byte) files, I can provoke this message after uploading about 5000 of them.

 

Any ideas on how I can avoid this 300-second lockout? I tried pacing the uploads, but it didn't seem to help.

 

Thanks

 

 

Greg-DB
Dropbox Staff

That rate limit with the longer Retry-After should just be a result of making a very large number of API calls for a particular user in a short period of time (as opposed to lock contention). If you're sending thousands of requests very quickly like that, you can run into it. (It sounds like you would normally have run into lock contention before hitting this limit, but that's no longer an issue with the batch commit, so you can run much faster.) The only way to avoid it is to call at a slower rate. It sounds like you already tried that, but if it didn't help, you'll need to limit it further.

ncw
Collaborator | Level 8

> The only way to avoid it is to call at a slower rate. It sounds like you already tried that, but if it didn't help, you'll need to limit it further.

 

I tried limiting it to 10 per second.

 

I'll do some experiments to see what I think the right number is.

 

Or, even better, do you have a figure you could share?

Greg-DB
Dropbox Staff

Unfortunately I don't have any specific rate limit numbers to share.
